
Sentry (Developer & DevOps)
Status: Operational
Response Time: 109ms
Recent Updates: 20
Uptime: 64.53%

Recent Activity

SSO login broken with Chromium browser

Apr 13, 21:52 UTC Resolved - Login with SSO in Chromium browsers was broken. Users experienced form submission failures due to a bug in CSP. The problem is now fixed.
Apr 13, 20:31 UTC Identified - Login with SSO in Chromium browsers is broken right now. Signing in using cmd or ctrl + click on the "Login with Provider" button is a workaround you can use while our team fixes the issue.

Data ingestion and alerts delayed for US customers

Apr 13, 19:57 UTC Resolved - Ingestion and alerts should be back to normal.
Apr 13, 19:13 UTC Investigating - US customers may experience delays in data ingestion and alerts. The issue is resolved and we are burning the backlog.

Ingestion Delay – All Data Types

Apr 10, 11:44 UTC Resolved - Following the ingestion delay experienced between 08:40 – 08:55 UTC, all systems are now fully operational. Data ingestion is running normally across all data types.
Apr 10, 09:20 UTC Monitoring - We experienced a delay in data ingestion affecting all data types between 08:40 – 08:55 UTC.

Subscription management page not working

Apr 7, 23:42 UTC Resolved - Hot fix merged; all customers can access their subscription pages now.
Apr 7, 22:28 UTC Monitoring - We are rolling out a fix.
Apr 7, 21:36 UTC Identified - We have identified an issue for customers trying to access their subscription page. We are working on a fix.

US ingestion delays and issues querying across pipelines

Apr 7, 21:23 UTC Resolved - This incident has been resolved.
Apr 7, 21:02 UTC Monitoring - We performed a manual change that caused degraded performance and delayed ingestion in spans, crons, replays, and uptime between 13:45 and 13:52 PT. The change has been cancelled and operations have returned to normal.

Error & Span processing delays in US

Apr 6, 21:26 UTC Resolved - This incident has been resolved and backlogs have been processed.
Apr 6, 20:52 UTC Monitoring - We have identified the source of postgres load and mitigated it. We are continuing to monitor conditions as backlogs process.
Apr 6, 20:04 UTC Update - We are continuing to mitigate load in our postgres clusters.
Apr 6, 19:04 UTC Update - We are continuing to mitigate load in our postgres clusters.
Apr 6, 18:26 UTC Update - We're continuing to mitigate load in our cach...

Ingestion and alerts delayed in US

Mar 21, 01:09 UTC Resolved - Ingestion and alerts are caught up and the incident is resolved.
Mar 20, 21:42 UTC Monitoring - Customers may experience some delay in ingestion and alerts. We are currently burning the backlog.

Ingestion delayed for transactions, uptime, spans, crons in US

Mar 16, 22:21 UTC Resolved - The incident has been resolved.
Mar 16, 22:11 UTC Monitoring - Ingestion is back to normal; alerts may still be delayed.
Mar 16, 21:39 UTC Identified - We are actively mitigating the issue.
Mar 16, 20:17 UTC Investigating - We are currently investigating the issue. Some alerts may be dropped if they are for older time windows.

Alerts delayed for US customers

Mar 9, 20:37 UTC Resolved - Ingestion and alert latency is back to normal.
Mar 9, 18:17 UTC Monitoring - We are burning the backlog and actively monitoring the progress.
Mar 9, 17:59 UTC Identified - US customers may experience delays in alerts. We've identified the issue and will be putting in a fix.

Ingestion delays in US and EU regions

Mar 2, 21:09 UTC Resolved - This incident has been resolved.
Mar 2, 20:57 UTC Update - EU ingestion has been restored and latency is back to normal levels. US continues to recover and will likely be caught up within the next hour.
Mar 2, 20:29 UTC Monitoring - We have implemented a fix and are monitoring.
Mar 2, 20:09 UTC Identified - The issue has been identified and the fix is being implemented.
Mar 2, 19:57 UTC Investigating - We are currently investigating this issue.

Intermittent dashboard failures & increased US ingest latency

Feb 26, 20:56 UTC Resolved - The ingestion backlog has finished processing and our system is now operating normally.
Feb 26, 20:02 UTC Monitoring - Our cloud provider has resolved an underlying problem and our dashboard availability issues have been resolved. We're continuing to process our ingestion backlog and monitor the situation.
Feb 26, 19:15 UTC Investigating - We're investigating intermittent failures loading our dashboard (all regions) and increased latency for ingestion of all events type...

Ingestion Issue in US

Feb 26, 19:41 UTC Resolved - We have identified that the core problem is related to the intermittent dashboard failures. Please follow https://status.sentry.io/incidents/z3g2bjxxwv9l for the latest updates. In the meantime, this will be marked as resolved.
Feb 26, 19:07 UTC Update - We also identified that transaction ingestion was affected.
Feb 26, 18:59 UTC Investigating - We are experiencing an ingestion issue with spans, logs, and metrics. Our teams are currently investigating the probl...

Increased error rate in South America ingestion region

Feb 24, 13:31 UTC Resolved - This incident has been resolved. Ingestion in our South America point-of-presence is fully functional.
Feb 24, 13:00 UTC Investigating - We are currently investigating the issue.

Calls to Seer in DE are failing

Feb 22, 04:36 UTC Resolved - This incident has been resolved.
Feb 22, 04:32 UTC Monitoring - A fix has been implemented and we are monitoring the results.
Feb 22, 04:21 UTC Identified - Due to a transient error, some calls to Seer are failing in EU. We identified the issue and are in the process of deploying a fix.

Ingestion delays for spans, logs, traces, and metrics in US

Feb 13, 00:57 UTC Resolved - We have resolved the issue, and our system is now working as expected.
Feb 13, 00:38 UTC Update - Ingestion is back to normal; alerts may still be delayed. We will continue to monitor the recovery.
Feb 12, 23:44 UTC Update - We are still continuing to process the backlog and are still monitoring the recovery.
Feb 12, 21:35 UTC Update - We are still continuing to process the backlog and are still monitoring the recovery.
Feb 12, 19:17 UTC Update - We are still continui...

EU explore page performance degraded

Feb 9, 19:39 UTC Resolved - The incident has been resolved.
Feb 9, 19:33 UTC Investigating - We are currently investigating.

Investigating Issues with Dashboard

Jan 29, 23:43 UTC Resolved - We have resolved the issue and all systems are working as expected.
Jan 29, 23:24 UTC Monitoring - We have identified an issue related to database contention and have issued a fix. We are continuing to monitor the system as it returns to health.
Jan 29, 23:05 UTC Update - We are investigating an issue where the Sentry Dashboard may be slow to load.
Jan 29, 23:02 UTC Investigating - We are currently investigating this issue.

Delay in span ingestion

Jan 27, 18:56 UTC Resolved - Ingestion has recovered.
Jan 27, 18:03 UTC Identified - The issue has been identified and a fix is being implemented.
Jan 27, 14:51 UTC Investigating - We are investigating a delay in ingesting spans in the US region.

Crons ingestion backlogged

Jan 24, 00:27 UTC Resolved - The backlog has been processed and crons should be running as normal.
Jan 23, 21:46 UTC Identified - The issue has been identified and we are working through backlogs.
Jan 23, 18:38 UTC Investigating - We are currently investigating this issue.

Ingestion IP address updates

Jan 14, 00:00 UTC Completed - The scheduled maintenance has been completed.
Jan 5, 16:55 UTC Update - We completed the brownout successfully. The involved DNS records are back to the previous addresses, and the schedule for the final switch has been confirmed for Tuesday, January 13th, 08:00-19:00 UTC.
Jan 5, 00:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 20, 09:35 UTC Scheduled - We're going to update the IP addresses of the fo...
