
CircleCI

Developer & DevOps

Status: Operational
Response Time: 244ms
Recent Updates: 20
Uptime: 19.22%

Recent Activity

Delay in Jobs starting

Feb 10, 16:13 UTC Resolved - This incident has been resolved.
Feb 10, 15:54 UTC Monitoring - We're recovered and are clearing up the backlog.
Feb 10, 15:42 UTC Update - The issue has been identified and we are pushing out a fix. Thank you for your patience.
Feb 10, 15:41 UTC Identified - The issue has been identified and we are pushing out a fix. Thank you for your patience.
Feb 10, 15:32 UTC Update - We are continuing to investigate this issue.
Feb 10, 15:26 UTC Update - We are continuing to in...

We are investigating issues with delays in pages loading

Feb 9, 20:25 UTC Resolved - Following GitHub's service recovery, all CircleCI functionality has returned to normal operation. Jobs are processing as expected across all resource classes.
Feb 9, 20:13 UTC Update - Customers using GitHub may continue to experience delays in Pipelines & UI.
Feb 9, 20:05 UTC Monitoring - We are starting to see some recovery and jobs are beginning to be triggered by GitHub again.
Feb 9, 19:59 UTC Update - We are currently impacted by an ongoing GitHub outage. We ...

Delays affecting Linux and Remote Docker jobs

Feb 3, 18:58 UTC Resolved - This incident has been resolved. Thank you for your patience.
Feb 3, 18:19 UTC Monitoring - The capacity constraints affecting Linux and Remote Docker job execution have been mitigated. Jobs are now starting within expected timeframes. We continue to monitor the situation to ensure stability.
- What's impacted: Linux and Remote Docker job execution - working within normal parameters
- What's happening: Service levels have returned to normal after implementing mitig...

Email Notifications Delayed

Feb 2, 22:18 UTC Resolved - The issue affecting email notifications has been resolved. Build completion emails and plan-related notifications are now being delivered normally. We apologize for any inconvenience this may have caused.
Feb 2, 22:05 UTC Update - We are continuing to monitor for any further issues.
Feb 2, 22:05 UTC Monitoring - Our upstream provider has resolved the issue affecting their system. We are currently monitoring email notification delivery to confirm full restoration. B...

Job start delays for arm-medium, large, xlarge, 2xl resources

Jan 29, 19:37 UTC Resolved - This incident has been resolved.
Jan 29, 19:35 UTC Update - We are continuing to monitor for any further issues.
Jan 29, 19:35 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Jan 29, 19:29 UTC Identified - We are experiencing capacity constraints affecting arm-medium, large, xlarge, 2xl, resulting in job start delays of up to 5 minutes. Current Status:
- arm-medium, large, xlarge, 2xl: Experiencing delays up to 5 minutes due to capacity...

Deprecation of Mac M1 and M2 resource classes

Jan 27, 00:00 UTC Completed - The scheduled maintenance has been completed.
Jan 26, 00:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Dec 11, 23:10 UTC Scheduled - As part of our ongoing infrastructure improvements, we will be deprecating Mac M1 and M2 resource classes on February 16th, 2026. Ahead of the deprecation date, we will be performing 24-hour brownouts from 00:00:01 to 23:59:59 UTC, during which these resources will be unavail...

Deprecation of Mac M1 and M2 resource classes

Jan 13, 00:00 UTC Completed - The scheduled maintenance has been completed.
Jan 12, 00:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Dec 11, 23:07 UTC Scheduled - As part of our ongoing infrastructure improvements, we will be deprecating Mac M1 and M2 resource classes on February 16th, 2026. Ahead of the deprecation date, we will be performing 24-hour brownouts from 00:00:01 to 23:59:59 UTC, during which these resources will be unavail...

Delays in job start

Jan 12, 18:08 UTC Resolved - We identified and resolved an issue that caused jobs to be delayed. During this period, some customers experienced longer than normal job start times while we performed database optimization work. Our team has completed the necessary tuning and service performance has returned to normal levels. We apologize for any inconvenience this may have caused.

Issues with jobs starting for certain Bitbucket users

Jan 9, 23:25 UTC Resolved - This issue has been resolved. Organizations using Bitbucket are now able to build successfully following mitigation actions we implemented earlier today.
What happened: We identified an issue where retrieving user identities from Bitbucket was encountering rate limiting for accounts with extensive project configurations. This was caused by recent changes to Bitbucket's rate limiting behavior that were enforced in late December 2025. Our mitigating actions helped reso...

Higher wait times for Linux, Remote Docker and Mac resource classes

Dec 16, 22:12 UTC Resolved - This incident has been resolved and wait times have returned to normal. We appreciate your patience.
Dec 16, 22:00 UTC Monitoring - Wait times for Linux, Remote Docker and Mac are starting to return to normal. We're going to keep our eye on things for just a bit longer and will update again shortly.
Dec 16, 21:49 UTC Identified - We've ID'd the problem and we're rolling out a fix.
Dec 16, 21:17 UTC Investigating - Customers using Machine, Linux Remote Docker, and Mac...

Deprecation of Mac M1 and M2 resource classes

Dec 16, 00:00 UTC Completed - The scheduled maintenance has been completed.
Dec 15, 16:43 UTC Update - During our maintenance, we identified an issue with the opt-out feature for the Mac M1/M2 brownouts. Organizations attempting to disable brownouts via Organization Settings > Advanced > Enable image brownouts may have found that the setting was non-functional. A fix has been deployed and the opt-out feature should now be working as expected.
Dec 15, 00:00 UTC In progress - Scheduled maintenanc...

Higher wait times for Linux, Remote Docker and ARM resource classes

Dec 15, 21:30 UTC Resolved - Between 21:14 and 21:26 UTC, December 15, 2025, customers using Machine, Linux Remote Docker, and ARM resource classes experienced elevated wait times when starting jobs. Wait times reached up to 5 minutes during this period due to the delays in our infrastructure scaling to meet the demand. The issue has been resolved and wait times have returned to normal levels. Jobs are now starting within expected timeframes. No further action is required from customers, and no...

Usage API Data Unavailable for December 11, 2025

Dec 12, 19:07 UTC Resolved - The Usage API issue has been resolved. Data is loaded for yesterday 12/11. Thank you for your patience.
Dec 12, 17:49 UTC Identified - The issue has been identified. Expected time to resolution: 90 minutes
Dec 12, 17:22 UTC Investigating - We are currently investigating an issue affecting Usage API data availability for December 11, 2025. Customers querying usage data for this date will not receive results at this time. Usage data prior to December 11 remains accessi...

Issue with Jobs

Dec 5, 00:15 UTC Resolved - The incident has been resolved. Thank you for your patience. The "infra-fail" should no longer be occurring.
Dec 5, 00:06 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Dec 4, 23:36 UTC Identified - We are slowly returning to normal. If you had jobs that failed with "infra-fail", those jobs can be rerun. We thank you for your patience while our engineers worked to get our system back to stability.
Dec 4, 22:53 UTC Investigating - We...

MacOS jobs without a resource class set will not run

Dec 4, 17:49 UTC Resolved - This incident has been resolved.
Dec 4, 17:43 UTC Monitoring - All jobs should now be running normally. The backlog of jobs has cleared and jobs should be running in typical time. We will continue monitoring to ensure consistent service. Thank you for your patience.
Dec 4, 17:19 UTC Update - We are continuing to see recovery, but some customers may still experience delays in jobs running on M4 executors. We appreciate your patience whilst we work through the backlo...

Jobs not starting

Dec 3, 23:48 UTC Resolved - We have resolved the issues affecting job triggering, workflow starts, and API queries. Our systems have been stabilized and are operating normally.
What was impacted: Job triggering, workflow starts, API queries, and pipeline page loading experienced disruptions for some customers. This affected all resource classes and executors.
Resolution: We implemented mitigation measures to address high volume workflow queries impacting our internal systems and increased syste...

Pipelines page not loading - Cont.

Dec 3, 21:58 UTC Resolved - The issues affecting the pipelines page display are related to a broader incident impacting our systems. We have opened a separate incident tracking job triggering and API status issues, which encompasses the pipelines page loading problems. Please follow https://status.circleci.com/incidents/jq4bgq2sjt1r for ongoing updates.
Dec 3, 21:31 UTC Update - We are continuing to investigate this issue.
Dec 3, 21:31 UTC Investigating - We are seeing some issues loading the...

Jobs stuck in running state

Dec 3, 18:12 UTC Resolved - Between 16:20 and 16:32 UTC, job triggering and workflow starts experienced disruptions across all resource classes due to memory pressure on our internal job distributor systems. We identified the issue and scaled our infrastructure to handle the load. Services returned to normal operation at 16:32 UTC.
What was impacted: Job triggering and workflow starts were disrupted for 12 minutes. Some workflows and jobs appeared stuck in a running state during this window.
Re...

Pipelines page not loading

Dec 3, 15:30 UTC Resolved - This incident has been resolved. Things should be back to normal.
Dec 3, 13:37 UTC Identified - We are seeing some issues loading the pipelines page. This is intermittent and won't affect most users. No work is being affected, just the display of pipelines. We have identified the issue and are working on a fix.

Duplicate Notifications

Dec 2, 02:00 UTC Resolved - Between Dec 2, 2025 16:20 UTC and Dec 3, 2025 14:27 UTC, a change deployed to our workflow execution system caused duplicate notifications to be sent to some customers and triggered unexpected auto-reruns for a small number of projects. Impact:
- Some customers received multiple failure notification emails for the same workflow.
- The number of duplicate notifications varied based on how many jobs in the workflow were affected (e.g., marked skipped or cancelled).
- ...

Official status page: https://status.circleci.com