Building a Highly Available Private Cloud

Enterprises want highly reliable cloud infrastructure. Companies like Google and Amazon have built and deployed such clouds at very large scale and offer them as public clouds, but there is no existing solution that gives enterprises a similar system in-house. Alternatives like OpenStack and VMware vCloud Suite require expert administrators to keep the private cloud's components available, and these models are too limiting. As enterprises embrace web-scale IT, they need an automated, self-healing private cloud that requires minimal manual monitoring and remediation.

A traditional OpenStack-based private cloud comprises several services providing compute, storage, and network virtualization. The current approach to making these services highly available relies on external components that are bolted on; for example, a load-balancer can front multiple instances of a stateless service, and when one instance fails, the load-balancer routes requests to the instances that are still running. This is not self-healing: full recovery requires manual intervention to restart the failed instance. Stateful services, like the database and the messaging queue, need additional software and configuration to achieve high availability. All of this extra configuration leads to heterogeneous nodes in the cluster, which makes cluster management more challenging.
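To make the pattern concrete, here is a minimal Python sketch of the kind of shallow check-and-route logic a generic load-balancer provides. The backend addresses and probe path are hypothetical; the point is that a failed instance is merely skipped, never repaired.

```python
# Minimal sketch of the bolted-on pattern: a generic load-balancer's
# shallow health check and failover routing. Backend addresses and the
# probe path are hypothetical.
import requests

BACKENDS = ["http://10.0.0.11:5000", "http://10.0.0.12:5000"]  # two API instances

def healthy_backends():
    """Return the backends that answer a basic HTTP probe."""
    alive = []
    for url in BACKENDS:
        try:
            requests.get(url + "/", timeout=2)  # shallow: "does it answer at all?"
            alive.append(url)
        except requests.RequestException:
            pass  # the failed instance is skipped, but nothing restarts it
    return alive

def route(path):
    """Forward a request to the first healthy backend."""
    for backend in healthy_backends():
        try:
            return requests.get(backend + path, timeout=5)
        except requests.RequestException:
            continue
    raise RuntimeError("no healthy backends left: manual intervention required")
```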

In contrast, ZeroStack has designed a symmetric, self-healing OpenStack-based private cloud. A Service Monitor is responsible for keeping the various services healthy and running: it runs health checks on the OpenStack services and repairs them on failure. The architecture is symmetric, so any node in a ZeroStack cluster can run the monitor as well as any of the cloud services. All nodes in the cluster participate in a management layer that uses a consensus protocol to select one node to run the Service Monitor. The consensus protocol also ensures that there is only one monitor in a ZeroStack cluster at any time, even in the presence of network partitions, preventing multiple instances from taking conflicting actions.
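As an illustration of this pattern, the sketch below performs leader election using ZooKeeper (via the kazoo Python library) as a stand-in for the cluster's own consensus protocol; the hosts, path, and identifier are hypothetical.

```python
# Illustrative sketch: leader election for the Service Monitor, using
# ZooKeeper (via the kazoo library) as a stand-in for the cluster's own
# consensus protocol. Hosts, paths, and identifiers are hypothetical.
import socket
import time
from kazoo.client import KazooClient

def run_service_monitor():
    """Invoked only on the elected node; would run the health-check loop."""
    while True:
        time.sleep(10)  # placeholder for the check-and-repair loop

zk = KazooClient(hosts="10.0.0.11:2181,10.0.0.12:2181,10.0.0.13:2181")
zk.start()

# At most one contender holds leadership at a time; if the leader's session
# is lost (node crash, network partition), another node is elected.
election = zk.Election("/cluster/service-monitor", identifier=socket.gethostname())
election.run(run_service_monitor)  # blocks until elected, then calls the function
```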

If the node on which the monitor is running crashes, the consensus protocol elects a new node to run it, ensuring that there is always a monitor instance in the cluster. The consensus protocol is also used to store the monitor's state, so a new instance picks up exactly where the previous one left off. The Service Monitor runs multiple health checks on the various services and takes appropriate remedial action, such as restarting services or migrating them from crashed or degraded nodes to healthy ones. Because the monitor is purpose-built for these services, its health checks are detailed and service-specific, going beyond the high-level checks performed by the generic load-balancer in traditional OpenStack HA solutions. This lets the Service Monitor observe and automatically respond to anomalies that would otherwise require manual intervention.
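A simplified Python sketch of such a check-and-repair loop is shown below. The check function and the restart/migrate/persist helpers are hypothetical placeholders for the purpose-built, service-specific logic described above, not our actual implementation.

```python
# Simplified sketch of the monitor's check-and-repair loop. The service
# checks and remediation helpers are hypothetical placeholders.
import time
import requests

def check_keystone(node):
    """Service-specific check: the identity API must answer a real request,
    not merely accept a TCP connection."""
    requests.get(f"http://{node}:5000/v3", timeout=3).raise_for_status()

SERVICES = {"keystone": check_keystone}  # plus nova, neutron, glance, ...

# --- hypothetical remediation helpers (placeholders) ------------------
def node_is_healthy(node): return True
def restart_service(name, node): print(f"restarting {name} on {node}")
def migrate_service(name, node): print(f"migrating {name} to {node}"); return node
def pick_healthy_node(): return "node-2"
def save_state(placement): pass  # persisted via the consensus store

def monitor_loop(placement):
    """placement maps each service to the node currently hosting it."""
    while True:
        for name, check in SERVICES.items():
            node = placement[name]
            try:
                check(node)
            except Exception:
                if node_is_healthy(node):
                    restart_service(name, node)  # first remedy: restart in place
                else:
                    placement[name] = migrate_service(name, pick_healthy_node())
        save_state(placement)  # a newly elected monitor resumes from this state
        time.sleep(10)
```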

Services that migrate across nodes must remain reachable to the other services and clients in the cluster. We achieve this by creating a virtual network interface for each service with an associated virtual IP, which does not change even when the service moves to another node. For stateful services, the data is kept on distributed storage reachable from all nodes, so a service can still access its data after migrating.
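For illustration, here is a minimal sketch of moving a virtual IP between nodes using the standard Linux `ip` and `arping` tools; the addresses and interface name are hypothetical examples, not our production mechanism.

```python
# Sketch of keeping a migrated service reachable at a stable virtual IP,
# using the standard Linux `ip` and `arping` tools. Addresses and the
# interface name are hypothetical examples.
import subprocess

def attach_vip(vip_cidr, iface):
    """Add the service's virtual IP on the node the service moved to."""
    subprocess.run(["ip", "addr", "add", vip_cidr, "dev", iface], check=True)
    # Send gratuitous ARP so peers update their caches to the new node.
    vip = vip_cidr.split("/")[0]
    subprocess.run(["arping", "-U", "-c", "3", "-I", iface, vip], check=True)

def detach_vip(vip_cidr, iface):
    """Remove the virtual IP from the node the service is leaving."""
    subprocess.run(["ip", "addr", "del", vip_cidr, "dev", iface], check=True)

# Example: moving a database VIP from a failed node to a healthy one.
# On the old node (if still reachable): detach_vip("10.0.0.50/24", "eth0")
# On the new node:                      attach_vip("10.0.0.50/24", "eth0")
```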

To test our architecture, we regularly run automated tests that inject various faults into a running OpenStack cluster, including service crashes, node reboots, node crashes, and network disconnects, and verify that the cluster recovers from all of them quickly. Because our design lets the cluster recover from these faults automatically, minimal manual intervention is required. Removing this operational overhead frees organizations to spend their time and resources on consuming their private cloud rather than running it.
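As a sketch, one such fault-injection test might look like the following; the node names and the cluster_is_healthy() helper are hypothetical test utilities, not our actual harness.

```python
# Sketch of one automated fault-injection test: crash a service process
# and assert the cluster heals itself within a deadline.
import subprocess
import time

def ssh_run(node, command):
    """Run a command on a cluster node over SSH."""
    subprocess.run(["ssh", node, command], check=True)

def cluster_is_healthy():
    """Placeholder: would poll every service's API endpoint."""
    return True

def wait_until_healthy(timeout=120):
    deadline = time.time() + timeout
    while time.time() < deadline:
        if cluster_is_healthy():
            return
        time.sleep(5)
    raise AssertionError(f"cluster did not self-heal within {timeout}s")

def test_service_crash_recovers():
    ssh_run("node-2", "pkill -9 -f nova-api")  # inject the fault
    wait_until_healthy()                       # no manual intervention allowed

def test_node_reboot_recovers():
    ssh_run("node-3", "sudo reboot")
    wait_until_healthy()
```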

In future blog posts, we will dive deeper into some of the key components of our architecture and share the practical knowledge we have gained running OpenStack-based private clouds.

If you are interested in working on such problems, please reach out to us at:
