Apr 1, 22:46 UTC Completed - The scheduled maintenance has been completed.
Apr 1, 21:47 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 1, 21:46 UTC Scheduled - We will be undergoing scheduled maintenance during this time.
Apr 1, 21:46 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 1, 21:45 UTC Scheduled - Cluster operations will be delayed due to DNS maintenance activiti...
Apr 1, 12:55 UTC Resolved - Some Atlas users may be unable to access Charts and the Visualization page due to repeated login prompts or unauthorized errors when attempting to sign in. Our engineers have identified a recent change as the likely cause, have rolled it back, and are monitoring the service to confirm that normal access has been restored.
Mar 31, 21:19 UTC Resolved - This incident has been resolved.
Mar 19, 22:15 UTC Monitoring - A fix has been implemented and we are monitoring the results. Users who were previously unable to log in via SSO should now be able to authenticate successfully.
Mar 19, 00:24 UTC Identified - Impact: Some users may be unable to log into the Atlas for Government UI using SSO. Specifically, users accessing Atlas for Government via a saved SSO login URL may receive an error, and users attempting to log in ...
Mar 30, 23:10 UTC Resolved - We are no longer seeing capacity issues in Azure East US 2. This issue is now resolved.
Mar 30, 21:47 UTC Identified - Some customers in the Azure East US 2 region will see delays provisioning new clusters or adding additional nodes to their existing clusters due to Azure capacity constraints. What you might see: Delays in provisioning new clusters or adding additional nodes in the Azure East US 2 region. User action: As a workaround, users can provision clust...
Mar 27, 13:21 UTC Resolved - We are no longer seeing intermittent spikes in HTTP 503 errors.
Mar 26, 19:12 UTC Update - We continue to monitor our systems for intermittent spikes in HTTP 503 errors following our deployed fix. We will provide another update tomorrow by 3 PM UTC.
Mar 25, 23:39 UTC Monitoring - The team has identified the root cause of the intermittent spikes of HTTP 503 errors and has applied a fix.
Mar 25, 21:31 UTC Investigating - We are currently investigating intermittent spikes o...
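During intermittent 503 spikes like the one above, clients that retry with exponential backoff and jitter typically ride out the errors without manual intervention. A minimal sketch (the error type and the flaky call are illustrative, not taken from the incident report):

```python
import random
import time


class ServiceUnavailable(Exception):
    """Raised when a request comes back with HTTP 503."""


def retry_with_backoff(call, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry `call` on 503-style errors with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ServiceUnavailable:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: wait a random amount up to base_delay * 2^attempt.
            sleep(random.uniform(0, base_delay * (2 ** attempt)))


# Example: a call that fails twice with 503, then succeeds.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ServiceUnavailable("HTTP 503")
    return "ok"

# Sleep is stubbed out here so the example runs instantly.
result = retry_with_backoff(flaky, sleep=lambda _: None)  # -> "ok" after 3 attempts
```

Jittered backoff spreads retries out in time, which avoids synchronized retry storms that can prolong exactly this kind of spike.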
Mar 19, 19:19 UTC Resolved - This incident has been resolved.
Mar 19, 19:14 UTC Update - We are continuing to monitor for any further issues.
Mar 19, 19:01 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Mar 19, 18:25 UTC Identified - The issue has been identified and we are working on a fix.
Mar 19, 18:11 UTC Investigating - We are currently investigating an issue when Atlas Data Federation tries to read data from customer Azure blob storage containers.
Mar 19, 15:28 UTC Resolved - This incident has been resolved.
Mar 19, 15:17 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Mar 19, 14:42 UTC Identified - We have identified an issue affecting the Clusters page in MongoDB Atlas. Affected users may see a blank page when trying to view their Clusters page. Cluster health is unaffected.
Mar 11, 20:10 UTC Resolved - At approximately 10:30 AM CT, we identified an issue affecting stream processing within the AWS US-EAST-1 region. As of now, normal operations have resumed. We have initiated a full investigation to determine the root cause and prevent future recurrence.
Mar 11, 02:30 UTC Resolved - This incident has been resolved.
Mar 11, 02:21 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Mar 11, 00:45 UTC Identified - The issue has been identified and a fix is being implemented.
Mar 10, 23:18 UTC Investigating - Atlas cluster snapshots are failing. We are actively investigating.
Mar 2, 16:28 UTC Resolved - This incident has been resolved. Email notifications are being sent successfully.
Mar 2, 15:50 UTC Investigating - We are currently investigating this issue. Some users may not be receiving emails from MongoDB. We are looking into the cause and will provide an update as soon as we know more.
Mar 2, 12:00 UTC Resolved - Users may have seen tools and integrations that rely on older MongoDB Atlas Admin API versions (for example, 2023-01-01 and 2023-02-01) start failing with HTTP 410 responses indicating that the requested version was no longer available. This affected workflows using the MongoDB Atlas Admin API through the Terraform provider, the Atlas CLI, and custom automation. This behavior was triggered by our planned retirement process for older API versions, but the resulting us...
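The Atlas Admin API selects a version through a versioned media type in the request's Accept header (for example, `application/vnd.atlas.2023-01-01+json`), so an HTTP 410 is the API's signal that the pinned version has been retired. A sketch of checking for that condition; the helper names below are illustrative, not part of any official SDK:

```python
# Sketch: pin an Atlas Admin API version via the versioned Accept header
# and surface HTTP 410 (version retired) as an actionable error.
# Helper names are illustrative, not from an official MongoDB SDK.

def atlas_accept_header(api_version):
    """Build the versioned media type the Atlas Admin API expects."""
    return f"application/vnd.atlas.{api_version}+json"


def check_response(status_code, api_version):
    """Translate an HTTP status into guidance for versioned-API callers."""
    if status_code == 410:
        raise RuntimeError(
            f"Atlas Admin API version {api_version} has been retired; "
            "pin a newer supported version in the Accept header."
        )
    return status_code


headers = {"Accept": atlas_accept_header("2023-01-01")}
# headers["Accept"] == "application/vnd.atlas.2023-01-01+json"
```

Treating 410 as "upgrade the pinned version" rather than a transient error keeps Terraform, Atlas CLI, and custom automation failures from being retried pointlessly during a version retirement.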
Mar 2, 03:05 UTC Resolved - This incident has been resolved.
Mar 1, 05:37 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Feb 28, 19:53 UTC Investigating - We are currently investigating an issue with Atlas App Services Device Sync. Affected users of this deprecated service may see SSL validation errors. We will update this post with more details as they become available.
Apr 8, 15:33 UTC Update - Two Availability Zones in AWS me-central-1 (UAE) continue to experience significant impairments. In addition, workloads in me-south-1 (Bahrain) are currently not operational, cannot be modified, and are not able to migrate to other regions or support other disaster recovery actions such as backup and restore. We strongly recommend that customers with workloads in AWS me-central-1 act now to move their clusters to alternate regions instead of waiting for full availabili...
Feb 26, 20:04 UTC Resolved - The issue has been resolved.
Feb 26, 19:51 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Feb 26, 19:21 UTC Investigating - We are currently experiencing degraded service with our DataDog integration for several DataDog regions. This may result in incomplete metrics or issues with alerting for customers using the DataDog integration. We are investigating this issue.
Feb 20, 16:59 UTC Resolved - This incident has been resolved.
Feb 20, 16:54 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Feb 19, 21:35 UTC Update - We are continuing to investigate reports of Atlas Data Federation and Online Archive schema tables failing to load. As a workaround, please connect directly and run the sqlGetSchema command: https://www.mongodb.com/docs/sql-interface/schema/view/ If that does not work, you may need to generate the schema first: https...
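The workaround above amounts to sending the `sqlGetSchema` database command straight to a federated database instance instead of going through the UI schema tables. A sketch of what that looks like from a driver; the connection string, database, and collection names are placeholders:

```python
# Sketch of the documented workaround: run sqlGetSchema directly against a
# federated database. Placeholders (<...>) must be filled in for a real run.

def sql_get_schema_command(collection):
    """Build the sqlGetSchema command document for db.command()/runCommand()."""
    return {"sqlGetSchema": collection}


cmd = sql_get_schema_command("sales")  # {"sqlGetSchema": "sales"}

# With PyMongo, against a live federated database instance:
# from pymongo import MongoClient
# client = MongoClient("mongodb://<federated-instance-uri>")
# result = client["<database>"].command(sql_get_schema_command("sales"))
#
# If no schema exists yet, generate one first with sqlGenerateSchema
# (see the linked docs), then retry sqlGetSchema.
```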
Feb 13, 17:57 UTC Resolved - The DNS records have been backfilled and clusters that had these IP addresses should be recovered.
Feb 13, 16:42 UTC Identified - We have identified the root cause and are working with AWS support to backfill those records. We have also disabled that IP range for new clusters, so new AWS clusters will not encounter this issue going forward.
Feb 13, 16:11 UTC Investigating - We are investigating an issue with DNS resolution for public IP addresses within 31.89.0....
Feb 13, 07:55 UTC Resolved - We have identified and resolved the issue. The Atlas UI will now load without delay.
Feb 13, 07:20 UTC Investigating - We are currently investigating reports of slow loading of the Atlas web UI.
Feb 12, 23:10 UTC Resolved - This incident has been resolved.
Feb 12, 23:08 UTC Monitoring - We have implemented the fix. Customers will now be able to create new clusters on AWS.
Feb 12, 23:00 UTC Identified - We have identified the root cause and are implementing a fix to remediate the situation.
Feb 12, 22:10 UTC Investigating - We are currently investigating elevated errors for customers attempting to create a cluster on AWS.
Feb 7, 11:43 UTC Resolved - Many nodes in the impacted region have recovered; however, some impacted nodes are still recovering. Azure's latest update at 11:05 UTC on 07 February 2026 indicates they are observing signs of partial recovery and that recovery efforts remain in progress. We are resolving this status post as we expect our system to automatically remediate any lingering issues. Please follow the Azure status post for further updates on the regional outage remediation eff...
Feb 4, 21:06 UTC Resolved - The root cause has been identified and the issue is now resolved.
Feb 3, 16:12 UTC Update - We are continuing to investigate the issue. A subset of customers may experience incomplete metric data in their DataDog integration. We'll share updates as more information becomes available.
Jan 30, 01:00 UTC Investigating - We are currently experiencing degraded service with our DataDog integration for several DataDog regions. This may result in incomplete metrics, or issu...