Is AWS Down Right Now?
Live status and incident history for AWS, sourced directly from their official status page. Updated continuously.
Recent incidents (30 days)
- Degraded (resolved) · 1d ago
Service is operating normally: [RESOLVED] Increased Error Rate and Latency
Starting at 4:20 PM PDT on May 7, we experienced an increase in impaired EC2 instances and degraded EBS volumes in a single facility (data center) within a single Availability Zone (use1-az4) in the US-EAST-1 Region. The issue was caused by a thermal event resulting in a loss of power. As part of our recovery effort, we shifted traffic away from the impacted Availability Zone for most services at 5:06 PM PDT on May 7. AWS services that depend on the affected EC2 instances, such as Elastic Load Balancing, Elastic Kubernetes Service, ElastiCache, Redshift, OpenSearch, and Managed Streaming for Apache Kafka, among others, …
us-east - Degraded · 1d ago
Service impact: Increased Error Rate and Latency
We have begun to see improvements in the overall number of affected EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region. The steps taken to supply additional cooling capacity have been showing steady signs of progress. Some EBS volumes and EC2 instances affected by the issue will continue to experience impairments while we continue to drive these efforts. We continue to recommend that customers who require immediate recovery restore from EBS snapshots and/or replace affected resources by launching new replacement resources. In parallel, we…
us-east - Degraded · 1d ago
Service impact: Increased Error Rate and Latency
We continue to work towards the recovery of the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region, though efforts are slower than we had previously anticipated. We are taking measured steps to ensure that cooling capacity is brought online in a safe and controlled manner. As a result, EBS volumes and EC2 instances affected by the issue will continue to experience impairments. We continue to recommend that customers who require immediate recovery restore from EBS snapshots and/or replace affected resources by launching new replacement resources.
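AWS's recurring recommendation above (restore from EBS snapshots and launch replacements) can be sketched in a few lines. This is an illustrative sketch, not AWS-provided tooling: the snapshot ID, Availability Zone, and volume type are hypothetical placeholders, and the commented-out boto3 call shows where the real request would go.

```python
# Sketch of the recovery guidance: recreate an impaired EBS volume from its
# snapshot in a healthy Availability Zone. IDs below are hypothetical.

def restore_params(snapshot_id: str, healthy_az: str, volume_type: str = "gp3") -> dict:
    """Build create_volume parameters for restoring a volume from a snapshot."""
    return {
        "SnapshotId": snapshot_id,
        "AvailabilityZone": healthy_az,  # pick an AZ other than the impaired one
        "VolumeType": volume_type,
    }

params = restore_params("snap-0123456789abcdef0", "us-east-1b")
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# volume = ec2.create_volume(**params)
print(params["AvailabilityZone"])
```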
us-east - Info · 1d ago
Service impact: Increased Error Rate and Latency
We are experiencing an increase in timeouts to Amazon Managed Streaming for Apache Kafka partitions on a subset of clusters as a result of the ongoing issue in a single Availability Zone (use1-az4) in the US-EAST-1 Region. We are working in parallel to determine a path towards mitigation for affected clusters. We will provide an additional update by 12:30 PM or sooner.
us-east - Info · 2d ago
Service impact: Increased Error Rate and Latency
We have observed complete recovery of increased error rates and query failures for Redshift clusters in the US-EAST-1 Region. We were able to resolve the impact independently of the ongoing efforts to recover the affected hardware in the use1-az4 Availability Zone. The issue affecting Redshift has been resolved and the service is operating normally. We will provide an additional update regarding the efforts towards hardware restoration by 12:30 PM or sooner.
us-east - Degraded · 2d ago
Service impact: Increased Error Rate and Latency
We are continuing to work towards the recovery of the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region. We are making progress towards the restoration of the cooling system capacity that is required to recover the affected hardware in the impacted zone. Some customers will continue to see their affected EC2 instances and EBS volumes as impaired until the affected racks are recovered. We continue to recommend that customers who require immediate recovery restore from EBS snapshots and/or replace affected resources by launching new replacement resources.
us-east - Major · 2d ago
Service impact: Increased Error Rate and Latency
We continue working to resolve the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region caused by a thermal event. During such an event, servers automatically shut down when temperatures exceed the operating thresholds in order to protect the hardware. We are actively working to bring additional cooling system capacity online, which will enable us to recover the remaining affected hardware in the impacted zone. Some customers will continue to see their affected EC2 instances and EBS volumes as impaired until we achieve full recovery.
us-east - Degraded · 2d ago
Service impact: Increased Error Rate and Latency
We continue to make progress towards resolving the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region. At this time, we wanted to provide some more details on the issue. Beginning on May 7 at 4:20 PM PDT, we began experiencing an increase in instance impairments within the affected zone due to the loss of power during a thermal event. Engineers were automatically engaged within minutes and immediately began investigating multiple mitigations. By 9:12 PM PDT, we restored power to a subset of the affected infrastructure and observed…
us-east - Degraded · 2d ago
Service impact: Increased Error Rate and Latency
Mitigation efforts remain underway to resolve the impaired EC2 instances and degraded EBS volumes in a single Availability Zone (use1-az4) in the US-EAST-1 Region. These EC2 instances and EBS volumes were impacted due to a loss of power during the thermal event. The work to bring additional cooling system capacity online, which will enable us to recover the remaining affected infrastructure in a controlled and safe manner, is taking longer than we had initially anticipated. Some services, such as IoT Core, ELB, NAT Gateway, and Redshift, have seen significant improvements in the recovery of…
us-east - Degraded · 2d ago
Service impact: Increased Error Rate and Latency
We continue to make progress in resolving the impaired EC2 instances in the affected Availability Zone (use1-az4) in the US-EAST-1 Region, and are working towards full recovery. We are actively working to bring additional cooling system capacity online, which will enable us to recover the remaining affected racks in a controlled and safe manner. In the impacted Availability Zone, EC2 instances, EBS volumes, and other AWS services may continue to experience elevated error rates and latencies for some workflows. Customers will continue to see some of their affected EC2 instances and EBS volumes as impaired.
us-east - Degraded · 2d ago
Service impact: Increased Error Rate and Latency
We are observing early signs of recovery. We continue to work towards restoring temperatures to normal levels and bringing impacted racks back online in the affected Availability Zone (use1-az4) in the US-EAST-1 Region. We have been able to get additional cooling system capacity online, which has allowed us to recover some affected racks, and we are actively working to recover additional racks in a controlled and safe manner. In the impacted Availability Zone, EC2 instances, EBS volumes, and other AWS services may continue to experience elevated error rates and latencies for some workflows until full recovery.
us-east - Degraded · 2d ago
Service impact: Increased Error Rate and Latency
We are actively working to restore temperatures to normal levels in the affected Availability Zone (use1-az4) in the US-EAST-1 Region, though progress is slower than originally anticipated. Since our last update we have made incremental progress restoring cooling systems within the affected AZ; this work is not visible to external customers but is required for the restoration of affected services. In the impacted Availability Zone, EC2 instances, EBS volumes, and other AWS services are also experiencing elevated error rates and latencies for some workflows. As part of our recovery effort, we…
us-east - Info · 2d ago
Service impact: Increased Error Rate and Latency
We continue to work towards returning the increased temperatures to normal levels in the affected Availability Zone (use1-az4) in the US-EAST-1 Region. Other AWS services that depend on the affected EC2 instances and EBS volumes in this Availability Zone may also experience impairments. We have shifted traffic away for most services at this time. We recommend customers utilize one of the other Availability Zones in the US-EAST-1 Region, as existing instances in other AZs remain unaffected by this issue. Customers may experience longer than usual provisioning times. We will…
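The guidance to use an unaffected Availability Zone amounts to pinning replacement instances to a healthy zone at launch. A minimal sketch, assuming the parameter shape of boto3's ec2.run_instances; the AMI ID, instance type, and zone below are hypothetical placeholders.

```python
# Sketch: build launch parameters pinned to a specific (healthy) AZ.
# Values are hypothetical; a real launch would pass them to ec2.run_instances.

def launch_params(ami_id: str, instance_type: str, healthy_az: str) -> dict:
    """Build run_instances parameters pinned to one Availability Zone."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "Placement": {"AvailabilityZone": healthy_az},  # avoid the impaired AZ
    }

params = launch_params("ami-0123456789abcdef0", "m5.large", "us-east-1c")
print(params["Placement"]["AvailabilityZone"])
```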
us-east - Info · 2d ago
Service impact: Increased Error Rate and Latency
We continue to investigate instance impairments in a single Availability Zone (use1-az4) in the US-EAST-1 Region. We have experienced an increase in temperatures within a single data center, which in some cases has caused impairments for instances in the Availability Zone. EC2 instances and EBS volumes hosted on the affected hardware were impacted by the loss of power during the thermal event. Other AWS services that depend on the affected EC2 instances and EBS volumes in this Availability Zone may also experience impairments. We will continue to provide updates as recovery continues.
us-east - Degraded · 2d ago
Service impact: Increased Error Rate and Latency
We are investigating instance impairments in a single Availability Zone (use1-az4) in the US-EAST-1 Region. Other Availability Zones are not affected by the event and we are working to resolve the issue.
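Note that these updates identify the impaired zone by its AZ ID (use1-az4). AZ IDs are consistent across AWS accounts, while AZ names such as us-east-1a are mapped differently per account, so the ID needs translating before you can tell which of your zones is affected. A sketch, assuming the response shape of EC2's DescribeAvailabilityZones; the sample values are illustrative only.

```python
# Map an AZ ID (consistent across accounts) to this account's AZ name.
# SAMPLE_RESPONSE mirrors the shape of ec2.describe_availability_zones();
# the zone-to-ID mapping shown is illustrative, not real account data.

SAMPLE_RESPONSE = {
    "AvailabilityZones": [
        {"ZoneName": "us-east-1a", "ZoneId": "use1-az6", "State": "available"},
        {"ZoneName": "us-east-1b", "ZoneId": "use1-az4", "State": "available"},
    ]
}

def zone_name_for_id(response: dict, zone_id: str):
    """Return the account-local AZ name for a given AZ ID, or None."""
    for az in response["AvailabilityZones"]:
        if az["ZoneId"] == zone_id:
            return az["ZoneName"]
    return None

print(zone_name_for_id(SAMPLE_RESPONSE, "use1-az4"))  # -> us-east-1b (in this sample)
```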
us-east - Info · 10d ago
Service disruption: Increased Error Rates
We are providing an update on the ongoing service disruption. The Middle East (UAE) Region (ME-CENTRAL-1) has suffered damage as a result of the conflict in the Middle East and is currently unable to reliably support customer applications. While some workloads continue to function normally, we strongly recommend customers migrate all accessible resources to other Regions and restore inaccessible resources from remote backups as soon as possible. Relevant billing operations are currently suspended while we restore normal operations in this AWS Region. This process is expected to take several months.
- Major · 10d ago
Service disruption: Increased Connectivity Issues and API Error Rates
We are providing an update on the ongoing service disruption. The Middle East (Bahrain) Region (ME-SOUTH-1) has suffered damage due to the conflict in the Middle East and is currently unavailable. Customers should recover their resources in other Regions from remote backups. Relevant billing operations are currently suspended while we restore normal operations in this AWS Region. This process is expected to take several months.
DownRightNow tracks AWS continuously by polling their official status feed. We are an independent monitoring service and not affiliated with AWS.
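Polling a status feed like AWS's boils down to fetching an RSS document and reading its items. A minimal sketch using only the standard library, with a canned RSS 2.0 sample standing in for a network fetch; the feed layout is an assumption based on common RSS conventions, not AWS's exact schema.

```python
# Sketch of status-feed polling: parse RSS items into (title, pubDate) pairs.
# SAMPLE_FEED is a canned stand-in for a fetched status feed document.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Amazon EC2 (N. Virginia) Service Status</title>
  <item>
    <title>Service is operating normally: [RESOLVED] Increased Error Rate and Latency</title>
    <pubDate>Fri, 09 May 2025 01:59:00 PDT</pubDate>
  </item>
</channel></rss>"""

def latest_items(feed_xml: str):
    """Return (title, pubDate) pairs for each <item> in the feed."""
    root = ET.fromstring(feed_xml)
    return [
        (item.findtext("title"), item.findtext("pubDate"))
        for item in root.iter("item")
    ]

for title, when in latest_items(SAMPLE_FEED):
    print(f"{when}  {title}")
```

A real monitor would fetch the feed on a timer and diff the item list against what it last saw, surfacing only new entries.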