Live Cloud & CDN Outage Feed
Live operational intelligence for the cloud and CDN providers that power the modern internet. Updated continuously from official provider status pages.
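
Under the hood, a feed like this can be assembled by polling each provider's public status API. Several providers listed here host their status pages on Atlassian Statuspage, which exposes a standard JSON summary; below is a minimal polling sketch using Cloudflare's status page as the example source. Field names follow the standard Statuspage v2 schema; the filtering logic is an assumption about how this feed selects open items.

```python
# Minimal polling sketch for one upstream source. Statuspage-hosted status
# sites (Cloudflare's among them) expose a public JSON API at /api/v2/;
# "incidents" and "scheduled_maintenances" are standard v2 schema keys.
import requests

STATUS_URL = "https://www.cloudflarestatus.com/api/v2/summary.json"

def fetch_open_items(url: str = STATUS_URL) -> list[dict]:
    """Return unresolved incidents and active maintenances from one page."""
    summary = requests.get(url, timeout=10).json()
    items = summary.get("incidents", []) + summary.get("scheduled_maintenances", [])
    return [i for i in items if i.get("status") not in ("resolved", "completed")]

for item in fetch_open_items():
    print(f'{item["impact"]:>8}  {item["name"]}  (updated {item["updated_at"]})')
```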
This week's Cloud & CDN outage roundup →

Maintenance - ICN (Seoul) on 2026-05-19 [ap-northeast]
THIS IS A SCHEDULED EVENT: May 19, 17:00 - 23:00 UTC
May 6, 08:18 UTC: Scheduled - We will be performing scheduled maintenance in ICN (Seoul) datacenter on 2026-05-19 between 17:00 and 23:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
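
Maintenance windows in this feed are given in UTC. A quick way to check whether a window collides with your own peak traffic is an interval-overlap test, as in the sketch below (window times taken from the ICN notice above; the local "peak hours" values are placeholder assumptions).

```python
# Check whether a provider's UTC maintenance window overlaps local peak hours.
from datetime import datetime, timezone, timedelta

window_start = datetime(2026, 5, 19, 17, 0, tzinfo=timezone.utc)
window_end   = datetime(2026, 5, 19, 23, 0, tzinfo=timezone.utc)

kst = timezone(timedelta(hours=9))                       # Seoul is UTC+9
peak_start = datetime(2026, 5, 20, 0, 0, tzinfo=kst)     # hypothetical local peak
peak_end   = datetime(2026, 5, 20, 9, 0, tzinfo=kst)

# Two intervals overlap iff each one starts before the other ends.
overlaps = window_start < peak_end and peak_start < window_end
print(f"{window_start:%H:%M}-{window_end:%H:%M} UTC overlaps local peak: {overlaps}")
```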
Maintenance - MEL (Melbourne) on 2026-05-13
THIS IS A SCHEDULED EVENT: May 13, 16:00 - 21:00 UTC
May 6, 03:48 UTC: Scheduled - We will be performing scheduled maintenance in MEL (Melbourne) datacenter on 2026-05-13 between 16:00 and 21:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - SYD (Sydney) on 2026-05-13 [ap-southeast]
THIS IS A SCHEDULED EVENT: May 13, 15:00 - 20:00 UTC
May 7, 07:28 UTC: Scheduled - We will be performing scheduled maintenance in SYD (Sydney) datacenter on 2026-05-13 between 15:00 and 20:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - LAX (Los Angeles) on 2026-05-13
THIS IS A SCHEDULED EVENT: May 13, 09:00 - 11:00 UTC
May 8, 14:31 UTC: Scheduled - We will be performing scheduled maintenance in LAX (Los Angeles) datacenter on 2026-05-13 between 09:00 and 11:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - CAN (Guangzhou) on 2026-05-13
THIS IS A SCHEDULED EVENT: May 13, 08:30 - 22:45 UTC
May 7, 10:50 UTC: Scheduled - We will be performing scheduled maintenance in CAN (Guangzhou) datacenter on 2026-05-13 between 08:30 and 22:45 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - SCL (Santiago) on 2026-05-13
THIS IS A SCHEDULED EVENT: May 13, 08:30 - 22:45 UTC
May 7, 10:50 UTC: Scheduled - We will be performing scheduled maintenance in SCL (Santiago) datacenter on 2026-05-13 between 08:30 and 22:45 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - LHR (London) on 2026-05-13 [eu-west]
THIS IS A SCHEDULED EVENT: May 13, 00:00 - 06:00 UTC
May 8, 02:50 UTC: Scheduled - We will be performing scheduled maintenance in LHR (London) datacenter on 2026-05-13 between 00:00 and 06:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - PHX (Phoenix) on 2026-05-12
THIS IS A SCHEDULED EVENT: May 12, 09:00 - 11:30 UTC
May 6, 14:34 UTC: Scheduled - We will be performing scheduled maintenance in PHX (Phoenix) datacenter on 2026-05-12 between 09:00 and 11:30 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - LAX (Los Angeles) on 2026-05-12
THIS IS A SCHEDULED EVENT: May 12, 07:00 - 15:00 UTC
May 5, 18:15 UTC: Scheduled - We will be performing scheduled maintenance in LAX (Los Angeles) datacenter on 2026-05-12 between 07:00 and 15:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - YUL (Montréal) on 2026-05-12
THIS IS A SCHEDULED EVENT: May 12, 05:00 - 13:00 UTC
May 7, 06:22 UTC: Scheduled - We will be performing scheduled maintenance in YUL (Montréal) datacenter on 2026-05-12 between 05:00 and 13:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - LHR (London) on 2026-05-12 [eu-west]
THIS IS A SCHEDULED EVENT: May 12, 00:30 - 07:00 UTC
May 9, 12:18 UTC: Scheduled - We will be performing scheduled maintenance in LHR (London) datacenter on 2026-05-12 between 00:30 and 07:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - SYD (Sydney) on 2026-05-11 [ap-southeast]
THIS IS A SCHEDULED EVENT: May 11, 15:00 UTC - May 12, 07:00 UTC
Apr 30, 06:32 UTC: Scheduled - We will be performing scheduled maintenance in SYD (Sydney) datacenter between 2026-05-11 15:00 and 2026-05-12 07:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Degraded - Network Performance Issues
May 10, 03:28 UTC: Investigating - Cloudflare is investigating issues with network performance in Chicago. We are working to analyse and mitigate this problem. More updates to follow shortly.
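
For regional degradations like the Chicago incident above, a coarse client-side probe can confirm whether your own paths are affected. A minimal sketch using only the Python standard library; the target host, sample count, and 250 ms threshold are illustrative choices, not measured baselines.

```python
# Time TCP handshakes to an endpoint and flag unusually slow connects.
import socket, statistics, time

def connect_ms(host: str, port: int = 443) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        return (time.perf_counter() - start) * 1000

samples = [connect_ms("www.cloudflare.com") for _ in range(5)]
median = statistics.median(samples)
print(f"median connect: {median:.1f} ms", "(degraded?)" if median > 250 else "(ok)")
```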
Degraded - Control Panel Errors - Unable to Enable 2FA and Google/GitHub
May 9, 11:15 UTC: Investigating - Our Engineering team is investigating an issue affecting the ability to enable Two-Factor Authentication (2FA) and Google/GitHub authentication through the Control Panel. During this time, users may encounter errors while enabling these authentication methods and could also experience issues accessing teams with secure sign-in enabled. We apologize for the inconvenience and will provide an update as soon as more information becomes available.
Degraded - Service is operating normally: [RESOLVED] Increased Error Rate and Latency [us-east]
Starting May 7 at 4:20 PM PDT, we experienced an increase in impaired EC2 instances and degraded EBS volumes in a single facility (data center) within a single Availability Zone (use1-az4) in the US-EAST-1 Region. The issue was caused by a thermal event resulting in a loss of power. As part of our recovery effort, we shifted traffic away from the impacted Availability Zone for most services at 5:06 PM PDT on May 7. AWS services, like Elastic Load Balancing, Elastic Kubernetes Service, ElastiCache, Redshift, OpenSearch, and Managed Streaming for Apache Kafka, among others, that depend on the affected EC2 instances…
Degraded - Service impact: Increased Error Rate and Latency [us-east]
We have begun to see improvements in the overall number of affected EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region. The steps taken to supply additional cooling capacity have been showing steady signs of progress. Some EBS Volumes and EC2 instances affected by the issue will continue to experience impairments while we continue to drive these efforts. We continue to recommend that customers who require immediate recovery restore from EBS snapshots and/or replace affected resources by launching new replacement resources. In parallel, we have…
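
AWS's guidance above, restoring impacted volumes from EBS snapshots into an unaffected zone, can be scripted. A hedged boto3 sketch of that one recovery step; the snapshot ID and target AZ are placeholders (the zone ID use1-az4 maps to a different AZ name in each account), and gp3 is an assumed volume type.

```python
# Restore an EBS volume from a snapshot into a healthy Availability Zone.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def restore_volume(snapshot_id: str, target_az: str) -> str:
    """Create a new volume from a snapshot and wait until it is usable."""
    vol = ec2.create_volume(
        SnapshotId=snapshot_id,
        AvailabilityZone=target_az,
        VolumeType="gp3",
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    return vol["VolumeId"]

new_id = restore_volume("snap-0123456789abcdef0", "us-east-1b")  # placeholders
print("restored as", new_id)
```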
Major - Let's Encrypt Outage Affecting Certificate Issuance and Managed Databases Operations
May 8, 20:46 UTC: Identified - Our Engineering team is aware of an upstream outage with Let's Encrypt (see https://letsencrypt.status.io/) which impacts the following services:
- Inability to create new Let's Encrypt certificates for Spaces, Load Balancers, and App Platform Custom Domains
- Stuck or delayed creates/forks/restores on Mongo, PG, and MySQL databases
Please note that operations related to Managed Databases and App Platform Custom Domains will automatically retry and should complete successfully once the upstream outage is resolved. We'll continue to monitor this situation…
Info - Certificate issuance paused for Let's Encrypt
May 8, 20:22 UTC: Monitoring - Due to an ongoing incident, Let's Encrypt has paused all certificate issuance. This primarily impacts wildcard domains on Render. In rare instances, non-wildcard custom domains may be using Let's Encrypt for their certificates.
Info - Cloudflare Access processing delayed audit logs
May 8, 19:53 UTC: Resolved - This incident has been resolved.
May 4, 22:02 UTC: Update - Cloudflare engineering is continuing to run a process that is ingesting the missing logs. No data was lost, and new log collection post-incident is unaffected.
May 1, 22:32 UTC: Update - We are continuing to monitor for any further issues.
Apr 29, 20:15 UTC: Update - Cloudflare Access is currently processing a backlog of audit logs from April 28, 17:00 UTC through April 29, 19:00 UTC. We expect this processing to continue through the weekend and will provide our next update on Monday. Please be assured that all Access…
Major - Certificate Issuance through Let's Encrypt unavailable
May 8, 19:46 UTC: Investigating - We are aware of an issue affecting certificate issuance through Let's Encrypt. Customers using Universal SSL, Advanced Certificate Manager, or Cloudflare for SaaS may experience delays or failures when issuing or renewing certificates that use Let's Encrypt as the Certificate Authority. Certificates issued through other CAs (such as Google Trust Services) are not affected. Existing certificates that have already been issued remain valid and are unaffected. We are actively monitoring Let's Encrypt availability and will issue certificates as soon as…
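
When a single CA is down, an issuance pipeline can probe ACME directory endpoints before queueing doomed orders and fall back to an alternate CA. A sketch under the assumption that a plain GET of each directory URL is an adequate liveness signal; the fallback choice mirrors Cloudflare's note above that Google Trust Services was unaffected, and GTS additionally requires External Account Binding credentials, which this sketch does not handle.

```python
# Probe ACME directory endpoints and pick the first CA that answers.
import requests

ACME_DIRECTORIES = {
    "letsencrypt": "https://acme-v02.api.letsencrypt.org/directory",
    "google-trust-services": "https://dv.acme-v02.api.pki.goog/directory",
}

def first_healthy_ca() -> str | None:
    for name, url in ACME_DIRECTORIES.items():
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return name
        except requests.RequestException:
            continue  # directory unreachable; try the next CA
    return None

print("issue via:", first_healthy_ca())
```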
Degraded - Service impact: Increased Error Rate and Latency [us-east]
We continue to work towards the recovery of the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region, though efforts are slower than we had previously anticipated. We are taking measured steps to ensure that cooling capacity is brought online in a safe and controlled manner. As a result, EBS Volumes and EC2 instances affected by the issue will continue to experience impairments. We continue to recommend that customers who require immediate recovery restore from EBS snapshots and/or replace affected resources by launching new replacement resources.
Info - Service impact: Increased Error Rate and Latency [us-east]
We are experiencing an increase in timeouts to Amazon Managed Streaming for Apache Kafka partitions on a subset of clusters as a result of the ongoing issue in a single Availability Zone (use1-az4) in the US-EAST-1 Region. We are working in parallel to determine a path towards mitigation for affected clusters. We will provide an additional update by 12:30 PM or sooner.
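
Applications caught in partition timeouts like these can often ride them out client-side by loosening producer timeouts and retry settings rather than failing fast. A sketch using the kafka-python client; the broker address is a placeholder and the specific values are illustrative starting points, not MSK guidance.

```python
# Producer settings that tolerate slow partition leaders during AZ recovery.
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers=["b-1.example.kafka.us-east-1.amazonaws.com:9092"],
    acks="all",                 # don't treat an un-replicated write as success
    retries=10,                 # retry transient partition-leader timeouts
    retry_backoff_ms=500,
    request_timeout_ms=60_000,  # tolerate slow responses instead of failing fast
)
future = producer.send("events", b"payload")
record = future.get(timeout=120)  # raises if all retries are exhausted
print("written to partition", record.partition)
```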
Degraded - Multiple Services in NYC2
May 8, 18:01 UTC: Investigating - We are currently investigating an issue affecting multiple services in our NYC2 region. Our engineering team is aware of the situation and is working to identify the root cause and restore full connectivity as quickly as possible. Users with resources in the NYC2 region may experience issues with Droplet connectivity, API requests, or other services. We will provide additional updates as more information becomes available. We apologize for any inconvenience this may cause.
Info - Service impact: Increased Error Rate and Latency [us-east]
We have observed complete recovery of increased error rates and query failures for Redshift clusters in the US-EAST-1 Region. We were able to resolve the impact independently of the ongoing efforts to recover the affected hardware in the use1-az4 Availability Zone. The issue affecting Redshift has been resolved and the service is operating normally. We will provide an additional update regarding the efforts towards hardware restoration by 12:30 PM or sooner.
Degraded - Service impact: Increased Error Rate and Latency [us-east]
We continue our efforts to work towards the recovery of the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region. We are making progress towards the restoration of the cooling system capacity that is required to recover the affected hardware in the impacted zone. Some customers will continue to see their affected EC2 instances and EBS volumes as impaired until the affected racks are recovered. We continue to recommend that customers who require immediate recovery restore from EBS snapshots and/or replace affected resources by launching new replacement resources.
Major - Service impact: Increased Error Rate and Latency [us-east]
We continue working to resolve the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region caused by a thermal event. During such an event, servers automatically shut down when the temperatures exceed the operating thresholds in order to protect the hardware. We are actively working to bring additional cooling system capacity online, which will enable us to recover the remaining affected hardware in the impacted zone. Some customers will continue to see their affected EC2 instances and EBS volumes as impaired until we achieve full recovery.
Info - [DirtyFrag] Linux Privilege Escalation Vulnerability
May 8, 13:40 UTC: Investigating - Akamai is aware of the recently disclosed “DirtyFrag”[1] vulnerability that followed the “CopyFail”[2] disclosure. This vulnerability is very similar in nature and has a similar impact, exploit path, and mitigation approach. We have not observed any related malicious exploits targeting our infrastructure and are continuing to address the vulnerability across our product portfolio and internal systems. As with “CopyFail”, we are advising customers to consider most Linux distributions to be at-risk until patched. Since the “DirtyFrag” vulnerability was disclosed…
Degraded - Service impact: Increased Error Rate and Latency [us-east]
We continue to make progress towards resolving the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region. At this time, we wanted to provide some more details on the issue. Beginning on May 7 at 4:20 PM PDT, we began experiencing an increase in instance impairments within the affected zone due to the loss of power during a thermal event. Engineers were automatically engaged within minutes and immediately began investigating multiple mitigations. By 9:12 PM PDT, we restored power to a subset of the affected infrastructure and observed…
Degraded - Wrangler users may experience frequent log out
May 8, 10:36 UTC: Identified - The issue has been identified, and we are working on a fix.
May 8, 10:35 UTC: Investigating - We are currently investigating an issue where Wrangler users experience frequent log out.
Degraded - Service impact: Increased Error Rate and Latency [us-east]
Mitigation efforts remain underway to resolve the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region. These EC2 instances and EBS volumes were impacted due to a loss of power during the thermal event. The work to bring additional cooling system capacity online, which will enable us to recover the remaining affected infrastructure in a controlled and safe manner, is taking longer than we had initially anticipated. Some services, such as IoT Core, ELB, NAT Gateway, and Redshift, have seen significant improvements in the recovery of…
Degraded - Service impact: Increased Error Rate and Latency [us-east]
We continue to make progress in resolving the impaired EC2 instances in the affected Availability Zone (use1-az4) in the US-EAST-1 Region, and are working towards full recovery. We are actively working to bring additional cooling system capacity online, which will enable us to recover the remaining affected racks in a controlled and safe manner. In the impacted Availability Zone, EC2 Instances, EBS Volumes, and other AWS Services may continue to experience elevated error rates and latencies for some workflows. Customers will continue to see some of their affected EC2 instances and EBS volumes…
Maintenance - DUS (Düsseldorf) on 2026-05-08
THIS IS A SCHEDULED EVENT: May 8, 06:30 - 16:30 UTC
May 7, 10:04 UTC: Update - We will be performing scheduled maintenance in DUS (Düsseldorf) datacenter on 2026-05-08 between 06:30 and 16:30 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Degraded - Service impact: Increased Error Rate and Latency [us-east]
We are observing early signs of recovery. We continue to work towards restoring temperatures to normal levels and bringing impacted racks back online in the affected Availability Zone (use1-az4) in the US-EAST-1 Region. We have been able to get additional cooling system capacity online, which has allowed us to recover some affected racks, and are actively working to recover additional racks in a controlled and safe manner. In the impacted Availability Zone, EC2 Instances, EBS Volumes, and other AWS Services may continue to experience elevated error rates and latencies for some workflows until full recovery…
Degraded - Service impact: Increased Error Rate and Latency [us-east]
We are actively working to restore temperatures to normal levels in the affected Availability Zone (use1-az4) in the US-EAST-1 Region, though progress is slower than originally anticipated. Since our last update we have made incremental progress to restore cooling systems within the affected AZ, which will not be visible to external customers but is required for the restoration of affected services. In the impacted Availability Zone, EC2 Instances, EBS Volumes, and other AWS Services are also experiencing elevated error rates and latencies for some workflows. As part of our recovery effort, we…
Maintenance - OTP (Bucharest) on 2026-05-08
THIS IS A SCHEDULED EVENT: May 8, 02:00 - 08:00 UTC
May 8, 01:44 UTC: Scheduled - We will be performing scheduled maintenance in OTP (Bucharest) datacenter on 2026-05-08 between 02:00 and 08:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Info - Service impact: Increased Error Rate and Latency [us-east]
We continue to work towards returning the increased temperatures to normal levels in the affected Availability Zone (use1-az4) in the US-EAST-1 Region. Other AWS services that depend on the affected EC2 instances and EBS volumes in this Availability Zone may also experience impairments. We have shifted traffic away for most services at this time. We recommend customers utilize one of the other Availability Zones in the US-EAST-1 Region at this time, as existing instances in other AZs remain unaffected by this issue. Customers may experience longer than usual provisioning times. We will…
Info - Service impact: Increased Error Rate and Latency [us-east]
We continue to investigate instance impairments in a single Availability Zone (use1-az4) in the US-EAST-1 Region. We have experienced an increase in temperatures within a single data center, which in some cases has caused impairments for instances in the Availability Zone. EC2 instances and EBS volumes hosted on hardware affected by the loss of power during the thermal event are impaired. Other AWS services that depend on the affected EC2 instances and EBS volumes in this Availability Zone may also experience impairments. We will continue to provide updates as recovery continues.
Maintenance - LHR (London) on 2026-05-08 [eu-west]
THIS IS A SCHEDULED EVENT: May 8, 00:30 - 07:00 UTC
May 1, 08:44 UTC: Scheduled - We will be performing scheduled maintenance in LHR (London) datacenter on 2026-05-08 between 00:30 and 07:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Degraded - Service impact: Increased Error Rate and Latency [us-east]
We are investigating instance impairments in a single Availability Zone (use1-az4) in the US-EAST-1 Region. Other Availability Zones are not affected by the event and we are working to resolve the issue.
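
During a single-AZ event like this, it helps to enumerate exactly which of your instances are impaired rather than assuming region-wide impact. A boto3 sketch using the EC2 instance-status API; the region and the "impaired" filter value follow the standard EC2 API, and the zone-ID-to-AZ-name mapping differs per account.

```python
# List instances whose EC2 status checks currently report "impaired".
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instance_status")
for page in paginator.paginate(
    Filters=[{"Name": "instance-status.status", "Values": ["impaired"]}],
):
    for st in page["InstanceStatuses"]:
        print(st["InstanceId"], st["AvailabilityZone"],
              st["InstanceStatus"]["Status"])
```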
Maintenance - WAW (Warsaw) on 2026-05-08
THIS IS A SCHEDULED EVENT: May 8, 00:00 - 04:00 UTC
May 6, 13:38 UTC: Scheduled - We will be performing scheduled maintenance in WAW (Warsaw) datacenter on 2026-05-08 between 00:00 and 04:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - AMS (Amsterdam) on 2026-05-08
THIS IS A SCHEDULED EVENT: May 8, 00:00 - 08:00 UTC
May 5, 15:57 UTC: Scheduled - We will be performing scheduled maintenance in AMS (Amsterdam) datacenter on 2026-05-08 between 00:00 and 08:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Info - Network Connectivity issues in Tokyo [ap-northeast]
May 7, 21:00 UTC: Resolved - Cloudflare has resolved issues with network performance in Tokyo. Impact time: 2026-05-07 21:13 to 21:38 UTC.
Degraded - Core Infrastructure Maintenance May 7, 2026, 15:00 UTC
May 7, 15:20 UTC: In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 7, 15:13 UTC: Scheduled - Start: 2026-05-07 15:00 UTC; End: 2026-05-07 21:00 UTC. Our Engineering team is performing maintenance on core control plane infrastructure. Please note that the existing infrastructure will continue running without issue. This maintenance may impact create, read, update, and delete (CRUD) operations in all regions. Expected impact: during the maintenance window, users may experience brief periods of increased latency with the following platform operations:…
Maintenance - Cloudflare Stream Scheduled Maintenance
THIS IS A SCHEDULED EVENT: May 7, 12:00 - 13:00 UTC
May 5, 23:35 UTC: Scheduled - Cloudflare will be performing scheduled maintenance on Stream's infrastructure. During this window, customers may encounter errors uploading or editing videos, starting new live streams, and provisioning new signing keys. Video playback will not be affected.
Maintenance - YUL (Montréal) on 2026-05-07
THIS IS A SCHEDULED EVENT: May 7, 05:00 - 12:00 UTC
May 5, 17:37 UTC: Update - We will be performing scheduled maintenance in YUL (Montréal) datacenter on 2026-05-07 between 05:00 and 12:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Info - HA Postgres Unavailability
May 7, 02:54 UTC: Resolved - Everything is back to normal, and the incident is now resolved. We are planning long-term mitigations to prevent this issue from recurring, and we will share the results of our investigation with all affected customers.
May 7, 02:47 UTC: Update - All affected instances are back online, though a few databases may still show a stale 'Unready' status. We are working to refresh the status.
May 7, 02:38 UTC: Update - We're continuing to see instance recovery across the fleet. Render Workflows is now fully operational.
May 7, 02:34 UTC: Update - We're bringing…
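
Transient platform events like this HA Postgres recovery are the textbook case for client-side connection retries with exponential backoff, so applications reconnect on their own once failover completes. A minimal psycopg2 sketch; the DSN, attempt count, and backoff cap are placeholder choices.

```python
# Reconnect to Postgres with exponential backoff during a failover window.
import time
import psycopg2

def connect_with_backoff(dsn: str, attempts: int = 6):
    for attempt in range(attempts):
        try:
            return psycopg2.connect(dsn, connect_timeout=5)
        except psycopg2.OperationalError:
            if attempt == attempts - 1:
                raise                     # out of retries; surface the error
            time.sleep(min(2 ** attempt, 30))  # 1s, 2s, 4s, ... capped at 30s

conn = connect_with_backoff("postgresql://app:secret@db.example.internal:5432/app")
```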
Maintenance - LHR (London) on 2026-05-07 [eu-west]
THIS IS A SCHEDULED EVENT: May 7, 00:30 - 07:00 UTC
May 1, 05:46 UTC: Scheduled - We will be performing scheduled maintenance in LHR (London) datacenter on 2026-05-07 between 00:30 and 07:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - HAM (Hamburg) on 2026-05-07
THIS IS A SCHEDULED EVENT: May 7, 00:00 - 06:00 UTC
May 6, 07:34 UTC: Scheduled - We will be performing scheduled maintenance in HAM (Hamburg) datacenter on 2026-05-07 between 00:00 and 06:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - SLC (Salt Lake City) on 2026-05-06
May 6, 23:00 UTC: Completed - The scheduled maintenance has been completed.
May 6, 15:15 UTC: In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 6, 14:58 UTC: Scheduled - We will be performing scheduled maintenance in SLC (Salt Lake City) datacenter on 2026-05-06 between 15:15 and 23:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Degraded - Cloudflare One Dashboard Analytics Intermittently Not Loading
May 6, 22:54 UTC: Resolved - This incident has been resolved.
May 6, 21:13 UTC: Investigating - We are investigating intermittent loading issues affecting Cloudflare Access and Gateway Analytics within the Cloudflare One dashboard. This is a display-only issue specifically impacting the rendering of rollup analytics. Please be assured that all data collection remains uninterrupted. Access and Gateway analytics are fully operational.
Degraded - Cloudflare One Client Connectivity Issue in Chicago (ORD) area
May 6, 22:28 UTC: Identified - Cloudflare is investigating issues with Cloudflare One Connectivity. Users connecting to the Chicago datacenter may experience connectivity issues or a degraded Internet experience.
Major - Cloudflare Dashboard Workers Editor Unavailable
May 6, 16:55 UTC: Resolved - This incident has been resolved.
May 6, 16:50 UTC: Monitoring - A fix has been implemented and we are monitoring the results.
May 6, 16:26 UTC: Identified - Cloudflare has identified an issue affecting the Workers Editor. Customers can continue to deploy and update Worker code using Wrangler, Terraform, or the API. We are working to mitigate the issue and will provide updates as our investigation progresses.
Info - Resolution Issues for .de Domains
May 6, 15:09 UTC: Resolved - This incident has been resolved.
May 6, 12:40 UTC: Update - We have removed our mitigations and we are monitoring the results.
May 5, 22:37 UTC: Monitoring - The issue has been identified as a DNSSEC signing problem at DENIC, the organization responsible for the .DE top-level domain. Cloudflare has temporarily disabled DNSSEC validation for .de domains on the 1.1.1.1 resolver (as per RFC 7646) in order to allow .DE names to continue to resolve. DNSSEC validation will be re-enabled when the signing problems at DENIC are known to have been resolved. See RFC 7646 for more details:…
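
The mitigation described above, skipping DNSSEC validation so .de names keep resolving per RFC 7646, can be reproduced as a diagnostic from any client by setting the CD (checking disabled) bit on a query: during a signing failure a validating resolver typically returns SERVFAIL without CD and an answer with it. A dnspython sketch; example.de is a stand-in query name.

```python
# Compare resolution of a name with and without DNSSEC checking disabled.
import dns.flags
import dns.message
import dns.query
import dns.rcode  # pip install dnspython

def resolve_rcode(name: str, checking_disabled: bool) -> str:
    q = dns.message.make_query(name, "A", want_dnssec=True)
    if checking_disabled:
        q.flags |= dns.flags.CD  # ask the resolver to skip DNSSEC validation
    r = dns.query.udp(q, "1.1.1.1", timeout=5)
    return dns.rcode.to_text(r.rcode())

for cd in (False, True):
    print(f"CD={cd}:", resolve_rcode("example.de", cd))
```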
Maintenance - SYD (Sydney) on 2026-05-06 [ap-southeast]
May 6, 15:00 UTC: In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 30, 06:12 UTC: Scheduled - We will be performing scheduled maintenance in SYD (Sydney) datacenter between 2026-05-06 15:00 and 2026-05-07 07:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Info - Network Performance Issues in Eastern North American region
May 6, 14:30 UTC: Resolved - We have resolved the network performance issues in the Eastern North America region that occurred between 14:35 and 16:28 UTC. Traffic flow has returned to normal levels. We appreciate your patience and apologize for the inconvenience.
Degraded - Certificate Provisioning System (CPS) issues
May 6, 12:01 UTC: Resolved - We can confirm that the issue was mitigated at approximately 11:30 UTC on May 6, 2026, and the service has resumed normal operation. Customers and partners can view additional details about the incident by logging in to https://community.akamai.com/customers/s/feed/0D5a7000016uVhvCAE or reaching out to Akamai Support. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent a recurrence of this issue.
May 6, 11:21 UTC: Investigating - We are investigating…
Maintenance - MSP (Minneapolis) on 2026-05-06
May 6, 11:00 UTC: Completed - The scheduled maintenance has been completed.
May 6, 06:00 UTC: In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 29, 08:41 UTC: Scheduled - We will be performing scheduled maintenance in MSP (Minneapolis) datacenter on 2026-05-06 between 06:00 and 11:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Major - R2 enablement temporarily degraded on Dash (09:30–10:06 UTC)
May 6, 09:30 UTC: Resolved - Between 09:30 and 10:06 UTC, the R2 entitlements service experienced a brief outage that prevented customers from enabling R2 subscriptions from the Cloudflare Dashboard. Existing R2 workloads were unaffected. The issue has been resolved and enablement is fully operational.
Maintenance - CDG (Paris) on 2026-05-06
May 6, 08:00 UTC: Completed - The scheduled maintenance has been completed.
May 6, 00:00 UTC: In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 3, 22:59 UTC: Scheduled - We will be performing scheduled maintenance in CDG (Paris) datacenter on 2026-05-06 between 00:00 and 08:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Maintenance - PHL (Philadelphia) on 2026-05-06
May 6, 03:45 UTC: Completed - The scheduled maintenance has been completed.
May 6, 03:00 UTC: In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 5, 17:43 UTC: Scheduled - We will be performing scheduled maintenance in PHL (Philadelphia) datacenter on 2026-05-06 between 03:00 and 03:45 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Degraded - Workers API Issue
May 6, 01:37 UTC: Resolved - This incident has been resolved.
May 6, 01:18 UTC: Monitoring - A fix has been implemented and we are monitoring the results.
May 6, 01:08 UTC: Investigating - Cloudflare is investigating Worker API failures. Customers might experience an error using the Worker API. Workers that are already running in production are not affected. More updates to follow shortly.
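
Control-plane blips like this Worker API incident are usually survivable client-side by retrying idempotent calls with backoff. A sketch against Cloudflare's public v4 API (the list-Workers endpoint); the account ID, token, and retry budget are placeholders.

```python
# Retry an idempotent GET against the Cloudflare v4 API on 5xx responses.
import time
import requests

API = "https://api.cloudflare.com/client/v4"

def list_workers(account_id: str, token: str, attempts: int = 4) -> dict:
    url = f"{API}/accounts/{account_id}/workers/scripts"
    headers = {"Authorization": f"Bearer {token}"}
    for attempt in range(attempts):
        resp = requests.get(url, headers=headers, timeout=10)
        if resp.status_code < 500:   # success, or a non-retryable client error
            return resp.json()
        time.sleep(2 ** attempt)     # 1s, 2s, 4s before the next try
    resp.raise_for_status()          # out of retries; surface the 5xx
```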
Maintenance - BOM (Mumbai) on 2026-05-05
May 6, 01:00 UTC: Completed - The scheduled maintenance has been completed.
May 5, 20:00 UTC: In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 29, 08:41 UTC: Scheduled - We will be performing scheduled maintenance in BOM (Mumbai) datacenter between 2026-05-05 20:00 and 2026-05-06 01:00 UTC. Traffic might be re-routed from this location, hence there is a possibility of a slight increase in latency during this maintenance window for end-users in the affected region. For PNI / CNI customers connecting with us in this location, please make sure you are expecting this traffic to fail over elsewhere during this maintenance window as network interfaces in this datacentre may become temporarily unavailable.
Info - (Copy Fail) Linux Kernel Local Privilege Escalation Vulnerability [CVE-2026-31431]
May 5, 22:19 UTC: Resolved - We have completed our investigation and response to the “Copy Fail” Linux kernel local privilege escalation vulnerability (CVE-2026-31431). We have published a documentation article with detailed guidance on available mitigations and recommended actions for affected systems. Customers can find more information and step-by-step instructions here: https://www.linode.com/docs/guides/cve-2026-31431-copy-fail-mitigation/ We encourage all customers to review the article and apply the appropriate mitigations to their environments. If you have questions or need assistance, please…
Maintenance - Core Infrastructure Maintenance May 4th, 2026, 13:00 UTC
May 4, 21:01 UTC: Completed - The scheduled maintenance has been completed.
May 4, 13:01 UTC: In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 2, 13:25 UTC: Scheduled - Start: 2026-05-04 13:00 UTC; End: 2026-05-04 21:00 UTC. During the above window, our Engineering team will be performing maintenance on core control plane infrastructure. Please note that the existing infrastructure will continue running without issue. This maintenance may impact create, read, update, and delete (CRUD) operations in all regions. Expected impact: during the maintenance window…
Maintenance - Core Infrastructure Maintenance May 4, 2026, 13:00 UTC
May 4, 21:00 UTC: Completed - The scheduled maintenance has been completed.
May 4, 13:00 UTC: In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 2, 09:15 UTC: Scheduled - Start: 2026-05-04 13:00 UTC; End: 2026-05-04 21:00 UTC. During the above window, our Engineering team will be performing maintenance on core control plane infrastructure. Please note that the existing infrastructure will continue running without issue. This maintenance may impact create, read, update, and delete (CRUD) operations in all regions. Expected impact: during the maintenance window…
Maintenance - SFO2 Network Maintenance
May 4, 15:00 UTC: Completed - The scheduled maintenance has been completed.
May 4, 13:00 UTC: In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 2, 12:39 UTC: Scheduled - Start: 2026-05-04 13:00 UTC; End: 2026-05-04 15:00 UTC. During the above window, our Networking team will be making changes to the core networking infrastructure to improve performance and scalability in the SFO2 region. Expected impact: We do not anticipate any downtime for Droplets or Droplet-related services, including Managed Databases, Load Balancers, App Platform, and Managed Kubernetes…
Degraded - Emergency Network Maintenance - OSA
May 4, 15:00 UTC: Completed - The scheduled maintenance has been completed.
May 4, 13:00 UTC: In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 1, 06:42 UTC: Scheduled - On Monday, May 4th from 13:00 to 15:00 UTC, we will be performing network maintenance in our OSA region. While we do not expect any downtime, a brief period of increased latency or packet loss may occur during this window.
Info - Reporting Issues affecting URL Traffic and URL Responses reports
May 4, 12:24 UTC: Resolved - We can confirm that the issue was mitigated at 06:50 UTC on May 4, 2026, and the service has resumed normal operation. Customers and partners can view additional details about the incident by logging in to https://community.akamai.com/customers/s/feed/0D5a7000015pThhCAE or reaching out to Akamai Support. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent a recurrence of this issue.
May 1, 08:14 UTC: Update - We are continuing to investigate this issue…
Info - Service Issue - Connectivity Issues - Chennai (IN-MAA)
May 4, 03:43 UTC: Resolved - The network component has been replaced. The incident is now resolved.
Apr 29, 22:13 UTC: Update - We were able to identify a solution that resulted in additional capacity for the Chennai, IN region, and we did not experience any degradation over the peak hours on April 29, 2026. We anticipate this to be the case going forward, but will continue providing updates until we successfully replace the impacted network component. The replacement of the network component has been briefly delayed. We are working quickly to implement a fix, and we will provide an update as soon as…
Info - Intermittent Edge Delivery Delays in Chennai, IN (MAA) Region
May 4, 03:05 UTC: Resolved - We can confirm that the issue was mitigated at 12:00 UTC on 2 May, 2026, and the service has resumed normal operation. Customers and partners can view additional details about the incident by logging in to https://community.akamai.com/customers/s/feed/0D5a7000015jW2OCAU or reaching out to Akamai Support. We apologize for the impact and thank you for your patience and continued support. We are committed to making continuous improvements to make our systems better and prevent a recurrence of this issue.
Apr 29, 22:13 UTC: Update - We were able to identify a solution which…
Other live feeds
Continuously updated from official provider status pages.