Resolved

We believe all affected services should now be back online.

Early indications are that a scheduled power maintenance, which was not supposed to be disruptive, went wrong and took out both our A and B power feeds to all of our racks in this datacentre location. As this is our key compute site, the outage had a significant impact on virtual machines, both internal and customer VMs.

When we arrived on site, the facility staff did not yet know what was wrong or how long it would take to restore power, so we began making arrangements to bring key functions back online. The facility engineers managed to restore power to our racks between 13:30 and 14:00, while we were briefly offsite collecting parts, and we were back on site by 14:25.

Because the compute blades booted before the SAN (on which the virtual machines are stored) was ready, manual intervention was required to bring hosted virtual machines back online.

The resulting boot storm from all of the VMs booting in rapid succession (with many of them needing to run disk checks on start) meant that some virtual machines, including the Volta DNS resolvers, were not available until circa 15:05.

We were already partway through reorganising some key services and rearchitecting the entire layer 3 network, which would have reduced the impact of this outage on services not hosted or terminated directly within Volta itself. Although we would like to hope the likelihood of this sort of event recurring is extremely small, we will now do all we can to complete this programme of works as soon as possible.

Obviously, the entire point of having two feeds is that we should not experience events like this. We expect a full RFO (Reason For Outage) from the facility in due course. Follow-on scheduled works planned for tomorrow have been cancelled for the time being.

Please accept our apologies for the inconvenience caused by this outage.

We are still on site assisting customers who need help with their equipment and will be for a while. If you believe you still have services offline, please call in using the emergency support route: 020 3026 2626, option 9 (not announced).

Phillip Baker
Updated

Power has been restored and we are seeing services recovering.

Our engineers are still onsite to ensure everything comes back online.

More updates when available.

Identified

We have confirmed that there is a total loss of power to our racks in Volta.

This has been escalated to the building managers.

We are doing what we can to arrange temporary power and restore some services if possible.

Apologies for the inconvenience; we will update as soon as we have any more information.

Updated

We are still investigating the cause of this incident.

Indications are that this is a complete loss of power at a datacentre. Engineers have been rerouted.

We will update as soon as we know more.

Updated

An engineer is on their way to the datacentre.

Phillip Baker
Investigating

We are investigating what appears to be widespread network disruption.

Apologies for the inconvenience caused; more updates as we have them.

Phillip Baker
Began at:

Affected components
  • Core Network Functions
    • Layer 3 (SOV)
    • Layer 3 (THE)
    • Layer 3 (THN)
    • Layer 3 (VLT)
    • Layer 2 (SOV)
    • Layer 2 (THE)
    • Layer 2 (THN)
    • Layer 2 (VLT)
  • Connectivity
    • Backhaul Services
    • Broadband
    • Leased Lines, EFM & EoFTTC
  • Hosting
    • Client Area & Billing System
    • CDP Backup Hosts
    • cPanel Shared & Reseller Hosting
    • Dedicated Servers
    • High Volume Mail
    • Managed Shared Hosting
    • Virtual Server Platform (VLT)