Live Developer Tools Outage Feed
Real-time intelligence on the developer infrastructure that ships your code — version control, package registries, hosting, and backend platforms.
This week's developer tools outage roundup
- Major
Shared Pooler scheduled maintenance in sa-east-1
THIS IS A SCHEDULED EVENT May 14, 18:00 - 20:00 UTC
May 7, 22:16 UTC Scheduled - There will be scheduled maintenance on May 14 from 18:00-20:00 UTC (15:00-17:00 BRT) on the Shared Pooler (V1) for the sa-east-1 region. This maintenance upgrades the Shared Pooler to a new version (V2) that provides better scalability and uptime. The Shared Pooler will be unavailable during this time period for anyone connecting to their projects using it. Your projects remain available and connections via the Dedicated Pooler and Direct Connections will continue to work. How to determine whether you are affected: Only…
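The notice's own "how to determine whether you are affected" guidance is cut off above. As a rough self-check, you can inspect the connection string your app actually uses. The host patterns below (`*.pooler.supabase.com` for the shared pooler versus a direct `db.<ref>.supabase.co` host) are assumptions based on commonly seen Supabase connection strings, not taken from this notice — confirm against your project's database settings.

```python
from urllib.parse import urlparse

def connection_path(database_url: str) -> str:
    """Rough classification of a Supabase connection string.

    Host patterns are assumptions based on commonly seen Supabase
    connection strings; verify in your project's own settings.
    """
    host = urlparse(database_url).hostname or ""
    if host.endswith(".pooler.supabase.com"):
        return "shared-pooler"        # would be affected by this maintenance
    return "direct-or-dedicated"      # per the notice, unaffected

shared = "postgres://postgres.abcd1234@aws-0-sa-east-1.pooler.supabase.com:6543/postgres"
direct = "postgres://postgres@db.abcd1234.supabase.co:5432/postgres"
print(connection_path(shared))  # shared-pooler
print(connection_path(direct))  # direct-or-dedicated
```

The project refs and hosts here are hypothetical examples; the same check applies to the ap-southeast-1 maintenance below.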
sa-east - Major
Shared Pooler scheduled maintenance in ap-southeast-1
THIS IS A SCHEDULED EVENT May 13, 07:00 - 09:00 UTC
May 7, 22:13 UTC Scheduled - There will be scheduled maintenance on May 13 from 7:00-9:00 UTC (15:00-17:00 SGT) on the Shared Pooler (V1) for the ap-southeast-1 region. This maintenance upgrades the Shared Pooler to a new version (V2) that provides better scalability and uptime. The Shared Pooler will be unavailable during this time period for anyone connecting to their projects using it. Your projects remain available and connections via the Dedicated Pooler and Direct Connections will continue to work. How to determine whether you are affected: Only…
ap-southeast - Degraded
Degraded Service in IAD Region
May 10, 16:33 UTC Resolved - Our network experienced degraded service in the IAD region between 12:53 UTC and 13:53 UTC. This issue is now resolved.
us-east - Info
Increase in DNS Resolution Errors
May 10, 08:07 UTC Identified - We're experiencing an increase in DNS resolution errors impacting domains on our Standard Edge Network. A fix has been deployed, and we're monitoring the results.
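Incidents like this surface to clients as transient name-resolution failures. A common client-side mitigation is to retry resolution with exponential backoff; the sketch below is a generic pattern, not a Vercel-specific API, and the retry budget and delays are illustrative.

```python
import socket
import time

def resolve_with_retry(host, resolver=socket.getaddrinfo,
                       attempts=3, base_delay=0.5):
    """Retry DNS resolution with exponential backoff.

    Generic mitigation for transient resolver incidents; the resolver is
    injectable so the retry logic can be exercised without the network.
    """
    for attempt in range(attempts):
        try:
            return resolver(host, 443)
        except socket.gaierror:
            if attempt == attempts - 1:
                raise                              # out of retries: surface it
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, ...
```

In production you would call it with the default `socket.getaddrinfo`; injecting the resolver just keeps the sketch testable.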
- Degraded
Increase in Agent Runner Errors
May 10, 07:15 UTC Investigating - We are seeing an increase in Agent Runner failures for both initial and follow-up prompts. We are currently investigating this issue.
- Info
Vercel Queues and Vercel Workflow runs were delayed
May 9, 04:45 UTC Monitoring - Between 1:30 and 3:44 UTC, messages in Vercel Queues in iad1 were enqueued but not processed, and Vercel Workflows were blocked from making progress (i.e. remaining in pending / active states). This is recovering, Vercel Queues backlogs are being processed, and Vercel Workflows are unblocked.
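When a queue backlog is replayed after an incident like this, consumers can see late or duplicate deliveries. A standard defense is an idempotent handler keyed by message id. This is a generic sketch, not the Vercel Queues API; the message shape and in-memory dedupe store are hypothetical.

```python
processed_ids = set()   # in production: durable storage, not process memory

def handle_message(message: dict, results: list) -> bool:
    """Process a message at most once; returns True if work was done."""
    msg_id = message["id"]
    if msg_id in processed_ids:
        return False            # duplicate delivery during replay: skip
    results.append(message["body"])
    processed_ids.add(msg_id)
    return True

out = []
handle_message({"id": "m1", "body": "charge-invoice"}, out)
handle_message({"id": "m1", "body": "charge-invoice"}, out)  # replayed duplicate
print(out)  # ['charge-invoice'] — processed once
```

The same pattern protects workflow steps that resume after being stuck in pending or active states.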
- Info
Issues Processing Account Changes
May 9, 00:58 UTC Resolved - We experienced issues processing account changes between 19:09 and 19:33 UTC. Users may have encountered issues when attempting to use Agent Runners, purchase credits, view credit usage, or complete account transactions. This issue is now resolved.
- Info
SSL Certificate Generation Delays
May 8, 19:15 UTC Resolved - Between 18:36 and 19:04 UTC, issuing SSL certificates was delayed for new domains. These certificates have now been issued. Certificate renewals were not affected.
- Info
Delays Processing Builds
May 8, 16:33 UTC Monitoring - We have applied a fix and are observing recovery across all builds. We'll provide additional updates as needed.
May 8, 16:30 UTC Identified - We've identified an issue where some customers may experience delays in builds starting and/or builds stuck in an initializing state. We are applying a fix and will provide additional updates as they become available.
- Degraded
Customer Projects Experiencing Upgrade Issues
May 8, 13:26 UTC Update - We are continuing to work on a fix for this issue.
May 8, 13:25 UTC Identified - Our engineering team is investigating issues with customer projects being stuck when upgrading their Postgres version. We advise that you do not upgrade your project until the incident is resolved.
- Degraded
Elevated Errors in IAD region
May 8, 02:53 UTC Investigating - We're seeing an increased level of errors in the IAD region starting around 02:10 UTC for both our Standard and HP Edge Networks due to an upstream issue at AWS: https://health.aws.amazon.com/health/status.
us-east - Degraded
Multiple Atlassian services are experiencing issues
May 8, 01:31 UTC Investigating - We are experiencing issues with multiple Atlassian products. Our teams are investigating further and more updates will be shared within 1 hour.
- Degraded
Elevated function invocation failures in IAD1
May 8, 01:18 UTC Identified - Some Vercel functions that run in the IAD1 region are experiencing elevated invocation failures. We are investigating the issue and will share more information as it becomes available.
- Info
Supavisor connectivity in us-east-1
May 8, 00:50 UTC Investigating - Increased connection times for Supavisor since about 00:10 UTC are affecting a limited number of projects in us-east-1.
us-east - Degraded
Elevated Errors Creating New Deployments
May 7, 21:14 UTC Investigating - We are investigating an issue affecting new deployments that are stuck in the provisioning state. We will share more information as it becomes available.
- Degraded
Background processing delays
May 7, 2026 15:44 UTC Investigating - We are investigating an issue with background jobs being processed with delay. The problem is being tracked in https://gitlab.com/gitlab-com/gl-infra/production/-/work_items/22017.
- Degraded
CCR and CCA failing to start for PR comments
May 7, 06:56 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 7, 06:14 UTC Update - Copilot code review and cloud agents are starting again for pull requests, we are monitoring for full recovery.
May 7, 06:13 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
May 7, 05:02 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
- Maintenance
Maintenance — OIDC Token Minting
THIS IS A SCHEDULED EVENT May 7, 00:00 - 01:00 UTC
May 6, 01:56 UTC Scheduled - On May 7th between 00:00 UTC and 01:00 UTC, we will be performing scheduled maintenance that may briefly affect OIDC token minting. Customers with jobs that rely on OIDC tokens may experience token failures during a short period within this window.
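Jobs that mint OIDC tokens during the window can treat failures there as transient and retry, rather than failing the whole run. The sketch below takes the window from the notice (the year is assumed to be 2026, matching the dated entries elsewhere in this feed); the minting callable and error type are placeholders, not a real provider API.

```python
from datetime import datetime, timezone
import time

# Announced maintenance window (year assumed: 2026).
WINDOW = (datetime(2026, 5, 7, 0, 0, tzinfo=timezone.utc),
          datetime(2026, 5, 7, 1, 0, tzinfo=timezone.utc))

def in_window(now: datetime) -> bool:
    return WINDOW[0] <= now < WINDOW[1]

def mint_with_retry(mint, now_fn, attempts=3, delay=1.0):
    """Retry token minting only while inside the maintenance window."""
    for i in range(attempts):
        try:
            return mint()                 # placeholder for the real token fetch
        except RuntimeError:
            if i == attempts - 1 or not in_window(now_fn()):
                raise                     # persistent, or not maintenance-related
            time.sleep(delay)
```

Passing the clock in via `now_fn` keeps the window check testable; a real job would pass `lambda: datetime.now(timezone.utc)`.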
- Major
GitHub incident with Pull Requests
May 6, 21:26 UTC Resolved - This incident has been resolved.
May 6, 17:54 UTC Update - We are continuing to monitor for any further issues.
May 6, 17:54 UTC Monitoring - GitHub is reporting a major incident for degraded pull request availability. This does not impact running CircleCI builds for newly created pull requests or push events.
- Info
Intermittent errors observed on GitLab Next
May 6, 2026 19:01 UTC Identified - A configuration update may cause some intermittent error messages on GitLab Next (Canary). We have identified the cause and the fix is in progress. More: https://gitlab.com/gitlab-com/gl-infra/production/-/work_items/22008
May 6, 2026 19:44 UTC Identified - The fix is still in progress. For now, the workaround is to use GitLab - Current instead of GitLab - Next. More: https://gitlab.com/gitlab-com/gl-infra/production/-/work_items/22008
May 6, 2026 20:46 UTC Resolved - We've applied our fix and confirmed this is no longer occurring. We will now mark this as resolved.
- Info
Some Pipelines are being lost
May 6, 20:13 UTC Resolved - This incident is resolved. Thank you for your patience, and we apologize for any inconvenience.
May 6, 20:03 UTC Monitoring - The rollback was successful. Pipelines are flowing normally again. Monitoring for a bit.
May 6, 19:43 UTC Update - We are continuing to investigate this issue.
May 6, 19:42 UTC Investigating - We are having an issue with some pipelines being lost. We are rolling back the affected service, and will share more information when we have it.
- Degraded
Incident with Pull Requests
May 6, 19:04 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 6, 19:04 UTC Update - Mitigations have been fully applied and we are seeing full recovery of functionality on Pull Request threads. We are continuing to monitor to ensure sustained recovery.
May 6, 17:52 UTC Update - Creation of new Pull Request threads (including line and file comments) continues to be affected although we are seeing partial recovery. A mitigation is being applied to conti…
- Degraded
Elevated Errors and Latency
May 6, 16:49 UTC Resolved - This incident has been resolved.
May 6, 15:39 UTC Monitoring - We've implemented a fix and have begun to see improvement starting at 15:15 UTC. We're continuing to monitor the results.
May 6, 15:29 UTC Investigating - We're experiencing elevated errors and latency starting at 15:00 UTC. We are currently investigating this issue.
- Degraded
Disruption with some GitHub services
May 6, 11:59 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 6, 11:59 UTC Update - We have applied a mitigation and Copilot services have recovered.
May 6, 11:25 UTC Update - We are investigating issues with the ability to start Copilot Cloud Agent sessions and view them.
May 6, 11:21 UTC Investigating - We are investigating reports of impacted performance for some GitHub services.
- Degraded
Incident with Actions: we are investigating reports of degraded availability
May 6, 09:44 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 6, 09:44 UTC Update - Actions wait times have fully recovered.
May 6, 09:19 UTC Monitoring - The degradation affecting Actions has been mitigated. We are monitoring to ensure stability.
May 6, 09:08 UTC Update - We've applied a mitigation to fix the issues with queuing and running Actions jobs. We are seeing improvements in telemetry and are monitoring for full recovery.
May 6, 08:00 UTC …
- Info
Failures to load data across multiple services on the Vercel dashboard, API, and CLI
May 6, 00:38 UTC Resolved - All services remain operational. New data is being processed as expected. The majority of the backfill is complete and a long tail of data will be processed within the next couple of hours.
May 5, 17:14 UTC Update - All systems are operational. We are continuing to monitor the backfill process, which is expected to take a couple of hours. New data is being processed as expected.
May 5, 17:12 UTC Update - We are continuing to monitor for any further issues.
May 5, 16:50 UTC Monitoring - Impacted services have recovered. We are continuing to monitor, while backfilling rem…
- Info
Increased error rate in Workflows
May 6, 00:19 UTC Resolved - This incident has been resolved.
May 6, 00:13 UTC Monitoring - A fix has been implemented and we are monitoring the results.
May 5, 23:30 UTC Identified - We are experiencing increased error rates with Workflows running on Vercel. We have identified the issue and are implementing a fix.
- Degraded
Agent Runner failures on new projects
May 5, 23:12 UTC Resolved - This incident has been resolved.
May 5, 21:58 UTC Monitoring - A fix has been implemented and we are monitoring the results.
May 5, 20:53 UTC Investigating - We are investigating increased Agent Runner failures affecting builds for some newly created projects. We will share updates as we learn more.
- Degraded
Increased Latency and Failures for SSH Git Operations
May 5, 18:35 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 5, 18:35 UTC Update - We've completed our mitigation to prevent further impact. At this time the incident is considered resolved.
May 5, 18:25 UTC Monitoring - The degradation affecting Git Operations has been mitigated. We are monitoring to ensure stability.
May 5, 17:26 UTC Update - We're continuing to work on preventing further impact from the earlier issue. No SSH-based impact…
- Degraded
Incident with Actions
May 5, 17:26 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 5, 17:11 UTC Update - Actions is experiencing degraded performance. We are continuing to investigate.
May 5, 17:11 UTC Update - Standard hosted runners have now reached full recovery. Hosted Runners with Private Networking in the East US region remain degraded as we continue working with our compute provider to restore capacity. Hosted Runners with private networking can fail over to a dif…
- Maintenance
Dashboard and Management API maintenance
May 5, 14:56 UTC Completed - The scheduled maintenance has now been completed.
May 5, 14:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 29, 13:27 UTC Update - We have postponed this maintenance to Tuesday, May 5, 14:00-15:00 UTC.
Apr 28, 09:27 UTC Scheduled - We will be carrying out scheduled database migrations on our Management API on Wednesday, April 29, 2026 between 14:00 and 15:00 UTC. Existing customer projects will not be affected and will continue to operate normally. During the maintenance period, all write activity from our API, Das…
- Info
Request Failures in ICN1 (Seoul, South Korea)
May 4, 21:30 UTC Resolved - Between 21:45–21:52 UTC, some users may have experienced request failures in ICN1. The issue has been identified, a fix has been applied, and the issue has been resolved.
ap-northeast - Info
Incident with Issues and Webhooks
May 4, 16:40 UTC Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
May 4, 16:36 UTC Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
May 4, 16:35 UTC Update - Webhooks is operating normally.
May 4, 16:35 UTC Update - The degradation affecting Codespaces has been mitigated. We are monitoring to ensure stability.
May 4, 16:34 UTC Update - The degradation affecting Issues has been mitigated. We are monitoring to ensure stability.
- Maintenance
Edge Functions scheduled maintenance in eu-central-1 and us-east-1
May 4, 05:00 UTC Completed - The scheduled maintenance has been completed.
May 4, 04:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 1, 17:18 UTC Scheduled - Upstream provider regular scheduled maintenance on instances in us-east-1 and eu-central-1. We do not expect downtime or impact to Edge Functions invocations or deploys during this window. We will monitor and provide updates on its progress.
us-east, eu-central - Info
Supavisor connectivity in eu-central-1
May 3, 22:30 UTC Resolved - From 15:30 to 7:00 UTC, for projects in eu-central-1, a Supavisor node experienced connectivity issues and was not able to connect to customer databases, causing checkout timeouts and "auth query failed" errors. This has been resolved and Supavisor is fully available.
eu-central
Other live feeds
Continuously updated from official provider status pages.