Is GitHub Down Right Now?
Live status and incident history for GitHub, sourced directly from their official status page. Updated continuously.
Recent incidents (30 days)
- Degraded · 3d ago
CCR and CCA failing to start for PR comments
  - May 7, 06:56 UTC · Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
  - May 7, 06:14 UTC · Update - Copilot code review and cloud agents are starting again for pull requests, we are monitoring for full recovery.
  - May 7, 06:13 UTC · Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
  - May 7, 05:02 UTC · Investigating - We are investigating reports of impacted performance for some GitHub services.
- Degraded · 3d ago
Incident with Pull Requests
  - May 6, 19:04 UTC · Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
  - May 6, 19:04 UTC · Update - Mitigations have been fully applied and we are seeing full recovery of functionality on Pull Request threads. We are continuing to monitor to ensure sustained recovery.
  - May 6, 17:52 UTC · Update - Creation of new Pull Request threads (including line and file comments) continues to be affected although we are seeing partial recovery. A mitigation is being applied to conti…
- Degraded · 4d ago
Disruption with some GitHub services
  - May 6, 11:59 UTC · Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
  - May 6, 11:59 UTC · Update - We have applied a mitigation and Copilot services have recovered.
  - May 6, 11:25 UTC · Update - We are investigating issues with the ability to start Copilot Cloud Agent sessions and view them.
  - May 6, 11:21 UTC · Investigating - We are investigating reports of impacted performance for some GitHub services.
- Degraded · investigating · 4d ago
Incident with Actions, we are investigating reports of degraded availability
  - May 6, 09:44 UTC · Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
  - May 6, 09:44 UTC · Update - Actions wait times have fully recovered.
  - May 6, 09:19 UTC · Monitoring - The degradation affecting Actions has been mitigated. We are monitoring to ensure stability.
  - May 6, 09:08 UTC · Update - We've applied a mitigation to fix the issues with queuing and running Actions jobs. We are seeing improvements in telemetry and are monitoring for full recovery.
  - May 6, 08:00 UTC · …
- Degraded · 4d ago
Increased Latency and Failures for SSH Git Operations
  - May 5, 18:35 UTC · Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
  - May 5, 18:35 UTC · Update - We've completed our mitigation to prevent further impact. At this time the incident is considered resolved.
  - May 5, 18:25 UTC · Monitoring - The degradation affecting Git Operations has been mitigated. We are monitoring to ensure stability.
  - May 5, 17:26 UTC · Update - We're continuing to work on preventing further impact from the earlier issue. No SSH-based impact…
- Degraded · 5d ago
Incident with Actions
  - May 5, 17:26 UTC · Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
  - May 5, 17:11 UTC · Update - Actions is experiencing degraded performance. We are continuing to investigate.
  - May 5, 17:11 UTC · Update - Standard hosted runners have now reached full recovery. Hosted Runners with Private Networking in the East US region remain degraded as we continue working with our compute provider to restore capacity. Hosted Runners with private networking can fail over to a dif…
- Info · 6d ago
Incident with Issues and Webhooks
  - May 4, 16:40 UTC · Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
  - May 4, 16:36 UTC · Monitoring - The degradation has been mitigated. We are monitoring to ensure stability.
  - May 4, 16:35 UTC · Update - Webhooks is operating normally.
  - May 4, 16:35 UTC · Update - The degradation affecting Codespaces has been mitigated. We are monitoring to ensure stability.
  - May 4, 16:34 UTC · Update - The degradation affecting Issues has been mitigated. We are monitoring to ensure stabilit…
- Info · 9d ago
Incomplete pull request results in repositories
  - May 1, 04:15 UTC · Resolved - On April 28, 2026, at approximately 14:07 UTC, GitHub received reports that pull requests were missing from search results across global and repository /pulls pages. The issue was caused by a manually invoked repair job intended for a single repository, which was executed without the required safety flags. During execution of the repair job, the database query remained correctly scoped to the repo’s PR IDs. However, the Elasticsearch reconciliation logic did not apply the same scope. It interpreted the min and max PR IDs as a continuous range, causing unrelated PR do…
- Info · 12d ago
Disruption with some GitHub services
  - Apr 28, 17:09 UTC · Resolved - On April 28, 2026, from approximately 12:41 UTC to 17:09 UTC, GitHub Actions jobs using Standard Ubuntu 22 and Ubuntu 24 hosted runners experienced run start delays. Approximately 8% of hosted runner jobs using Ubuntu 22 and Ubuntu 24 experienced delays greater than 5 minutes or failures. Larger and self-hosted runners were not impacted. This was caused by a performance regression introduced in the VM reimage process. That reimage delay lowered the overall capacity of runners available to pick up new jobs. This was mitigated with a rollback to a known good image vers…
- Degraded · 12d ago
GitHub search is degraded
  - Apr 27, 22:46 UTC · Resolved - On April 27, 2026 between 16:15 UTC and 22:46 UTC, GitHub search services experienced degraded connectivity due to saturation of the load balancing tier deployed in front of our search infrastructure. This resulted in intermittent failures for services relying on our search data, including Issues, Pull Requests, Projects, Repositories, Actions, Package Registry and Dependabot Alerts. The impact varied by search target, with services seeing up to 65% of searches timing out or returning an error between 16:15 UTC and 18:00 UTC. We detected the drop in search result…
- Info · 12d ago
Disruption with some GitHub services
  - Apr 27, 19:02 UTC · Resolved - On April 22, 2026 from 18:49 to 19:32 UTC, the Copilot Cloud Agent service began failing during session execution for users running the Agent HQ Codex agent. Codex agent sessions failed to start for all entry points (issue assignment, @copilot comment mentions). 0.5% of total Copilot Cloud Agent jobs were impacted (~2,000 failed jobs). Copilot and other agent sessions were unaffected. This was caused by a model resolution mismatch in Codex agent sessions, resulting in an incompatible model being used at runtime. A mitigation was deployed to select a stable default m…
- Info · 15d ago
Delays with Actions Jobs for Larger Runners using VNet Injection in the East US region
  - Apr 25, 00:36 UTC · Resolved - On April 24, 2026, from approximately 11:39 UTC to April 25, 2026 at 00:15 UTC, GitHub Actions experienced delays and timeouts for Larger Hosted Runner jobs using VNet injection in the East US region without a failover region configured. Standard and Self-hosted runners were not impacted. This was caused by backend failures in our compute provider’s provisioning, scaling, and update operations for VMs in the East US region and mitigated by a rollback across all affected Availability Zones. More detail is available at https://azure.status.microsoft/en-us/status/histo…
- Info · 16d ago
Incident with Pull Requests
  - Apr 23, 21:43 UTC · Resolved - On April 23, 2026, between 16:05 UTC and 20:43 UTC, the Pull Requests service experienced a regression affecting merge queue operations. PRs merged via merge queue using the squash merge method produced incorrect merge commits when the merge group contained more than one PR. In affected cases, changes from previously merged PRs and prior commits were inadvertently reverted by subsequent merges. During the impact window, 2,092 pull requests were affected. The issue did not affect pull requests merged outside of merge queue, nor merge queue groups using the merge or reb…
- Info · 16d ago
Disruption with users unable to start Claude and Codex agent tasks from the web
  - Apr 23, 19:42 UTC · Resolved - Between 18:45 and 19:42 UTC on April 23, users were unable to start new agent tasks using either Claude or Codex agent on github.com. This was caused by a code change to how Copilot mission control routes task creation requests. Ongoing agent tasks and other Copilot agent features were not affected. We mitigated the impact by reverting the breaking change. We are adding extra monitoring and integration test coverage for the task creation path to prevent future recurrence.
  - Apr 23, 19:33 UTC · Update - We have identified the root cause of the issue and are working on miti…
- Degraded · 16d ago
Incident with multiple GitHub services
  - Apr 23, 17:30 UTC · Resolved - On April 23, 2026, between 16:03 UTC and 17:27 UTC, multiple GitHub services experienced elevated error rates and degraded performance due to DNS resolution failures originating from our DNS infrastructure in our VA3 datacenter. Approximately 5–7% of overall traffic was affected during the impact window:
    - Webhooks: ~0.35% of API requests returned 5xx (peak ~0.39%). ~0.88% of requests exceeded 3s latency; at peak, >3s responses represented ~10% of Webhooks API traffic.
    - Copilot Metrics: ~9% of Copilot Insights dashboard requests returned 5xx.
    - Copilot cloud agents…
- Degraded · investigating · 17d ago
Investigating errors on GitHub
  - Apr 23, 15:18 UTC · Resolved - On April 23, 2026 between 14:30 UTC and 15:18 UTC, multiple services were degraded on github.com. During this time approximately 1.5% of all web requests resulted in a 5xx status and unicorn pages for github.com users. We also saw elevated error rates across Actions workflow runs, Copilot, Codespaces and Packages, leading to degraded experiences during this timeframe. Codespaces impact peaked at 45% failures for create requests and 65% failures for resume requests. Packages impact was mainly Maven related with 50% failure rates in downloads and 70% failure rates in u…
- Degraded · 17d ago
Disruption with some GitHub services
  - Apr 22, 22:43 UTC · Resolved - On April 22, 2026, between 09:00 UTC and 22:05 UTC, the Copilot coding agent and pull request comment event processing were degraded. During this period, approximately 0.5% of total pull request and issue comments (~23,000 invocations) mentioned @copilot and explicitly requested work from the Copilot coding agent but were not acted upon. Creating, viewing, and replying to pull request comments was unaffected, and other Copilot functionality continued to operate normally. The impact was limited to @copilot mentions on pull request comments not triggering Copilot coding ag…
- Info · 17d ago
Disruption with Copilot chat and Copilot Coding Agent
  - Apr 22, 19:18 UTC · Resolved - On April 22, 2026, between 15:16 UTC and 19:18 UTC, users experienced errors when interacting with Copilot Chat on github.com and Copilot Cloud Agent. During this time, users were unable to use Copilot Chat or Copilot Cloud Agent. Copilot Memory (in preview) was not available to Copilot agent sessions during this time. The issue was caused by an infrastructure configuration change that resulted in connectivity issues with our databases. The team identified the cause and restored connectivity to the database. Copilot Chat and Cloud Agent for github.com were restored…
- Degraded · 18d ago
Disruption with projects service
  - Apr 22, 01:24 UTC · Resolved - On April 21, 2026, between 13:35 UTC and 01:24 UTC the following day, the projects service was degraded. During this time period, projects may have been out of sync and users may have experienced delays in changes to projects and their items. Delays in reflected changes peaked at approximately 45 minutes. The delays were caused by serialization errors that failed events and triggered a flood of resyncs, overloading our event processing layers. We mitigated the incident by speeding up processing time for incoming changes and otherwise waiting for all changes to be proc…
- Degraded · 19d ago
Partial degradation for code scanning default setup and for code quality
  - Apr 21, 05:04 UTC · Resolved - On April 20, 2026 between 10:28 UTC and 15:04 UTC, GitHub experienced degraded service for code scanning default setup, code quality, and project boards. Repair of affected project boards additionally lasted until April 21, 05:04 UTC. During this time, code scanning default setup and code quality analyses were not triggered on newly opened pull requests. Additionally, newly created issues were not appearing on project boards. The cause was a serialization error that prevented proper triggering of code scanning, code quality analyses, and project board updates. We miti…
- Degraded · 23d ago
Disruption with some GitHub services
  - Apr 17, 15:18 UTC · Resolved - On April 17, 2026, between 14:46 UTC and 15:12 UTC, users experienced a degraded web experience on GitHub.com. During this time, approximately 1.5% of web requests resulted in errors, with some users encountering slow page loads or failed requests. The issue was caused by capacity saturation of a caching component in one of our data center regions. We mitigated the issue by redirecting traffic to an unaffected region and rolling back a recent deployment. The incident was fully resolved at 15:18 UTC. We are taking steps to provide appropriate capacity for this cachin…
- Degraded · 23d ago
Incident with Codespaces
  - Apr 16, 18:28 UTC · Resolved - On April 16, 2026 between 09:30 UTC and 17:15 UTC, users experienced failures when attempting to connect to GitHub Codespaces via the VS Code editor. During this time, approximately 40% of codespace start operations failed. Users connecting via SSH were not impacted. The issue was caused by a failure in an upstream download service that prevented the VS Code Server from being retrieved during codespace startup. The impact was mitigated by implementing a workaround to use an alternative download path when the primary endpoint is degraded. We are working with the upst…
- Info · 26d ago
Disruption with some GitHub services
  - Apr 14, 06:08 UTC · Resolved - On April 14, between 00:58 UTC and 06:08 UTC, GitHub Enterprise Cloud customers experienced 500 errors when attempting to access Copilot Insights pages, which was caused by an authentication failure in our metrics pipeline. We fully mitigated the issue and validated the fix in production. Approximately 709 users were impacted. The total impact duration was approximately 5 hours and 10 minutes. Our investigation determined the incident was caused by a change in a tenant credential which caused authentication errors to retrieve the required data needed on our Copilot I…
- Degraded · 26d ago
Incident with Pages
  - Apr 13, 20:35 UTC · Resolved - On Sunday April 13th, 2026, between 18:53 UTC and 20:30 UTC, the GitHub Pages service experienced elevated error rates. On average, the error rate was 10.58% and peaked at 12.77% of requests to the service, resulting in approximately 17.5 million failed requests returning HTTP 500 errors. This was due to an automated DNS management tool (octodns) erroneously deleting a DNS record for a Pages backend storage host after its upstream data source intermittently failed to return the record, causing the tool to treat it as stale and remove it. We mitigated the incident by…
- Degraded · 26d ago
Disruption with some GitHub services
  - Apr 13, 17:40 UTC · Resolved - On April 13, 2026, between 14:41 UTC and 17:29 UTC, the Copilot service experienced degraded performance. All Copilot users were impacted by increased latency, and approximately 20% experienced request failures when interacting with Copilot Cloud Agent (CCA). On average, request latency increased to approximately 950ms. The GitHub User Dashboard also displayed intermittent errors loading Copilot quota information. CCA and the User Dashboard were impacted for approximately 2 hours and 56 minutes. This was due to an infrastructure change that reduced the available com…
DownRightNow tracks GitHub continuously by polling their official status feed. We are an independent monitoring service and not affiliated with GitHub.
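If you want to check GitHub's status yourself rather than rely on a tracker, the official status page exposes a public JSON feed. The sketch below is a minimal example, assuming the standard Statuspage v2 endpoints at www.githubstatus.com (`/api/v2/status.json` and `/api/v2/unresolved-incidents.json`); the field names follow the usual Statuspage schema and may differ if that API changes.

```python
# Minimal sketch: poll GitHub's public status feed.
# Assumes the standard Statuspage v2 endpoints at www.githubstatus.com;
# adjust URLs and field names if the schema differs.
import json
import urllib.request

BASE = "https://www.githubstatus.com/api/v2"


def fetch(path: str) -> dict:
    """Fetch and decode a JSON document from the status API."""
    with urllib.request.urlopen(f"{BASE}/{path}", timeout=10) as resp:
        return json.load(resp)


def is_github_degraded() -> bool:
    """True if GitHub reports anything other than 'all systems operational'."""
    status = fetch("status.json")
    # 'indicator' is typically one of: none, minor, major, critical.
    return status["status"]["indicator"] != "none"


def unresolved_incidents() -> list[dict]:
    """List currently open incidents with their most recent update text."""
    data = fetch("unresolved-incidents.json")
    return [
        {
            "name": inc["name"],
            "status": inc["status"],  # investigating / identified / monitoring
            "latest": inc["incident_updates"][0]["body"]
            if inc.get("incident_updates")
            else "",
        }
        for inc in data.get("incidents", [])
    ]


if __name__ == "__main__":
    print("GitHub degraded?", is_github_degraded())
    for inc in unresolved_incidents():
        print(f"- [{inc['status']}] {inc['name']}")
```

Polling this feed every minute or two is typically enough to reproduce the kind of incident timeline shown above; each incident object also carries timestamps you can use to compute the "Xd ago" labels.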