Six Kubernetes Pain Points and How to Solve Them

Companies want to implement modern enterprise applications that can be used anytime, anywhere by always-connected users who demand instant access and continuously improved services. Developing and deploying such applications requires development teams to move fast and deploy software efficiently, while IT teams have to keep pace and also learn to operate at large scale.

Containers come to the rescue

While the concept has been around for a couple of decades, containers staged a comeback in the last five years because they are ideally suited for the new world of massively scalable cloud-native applications. Because they share the operating system kernel, containers are extremely lightweight, start much faster, and use a fraction of the memory compared to virtual machines, which need an entire operating system to boot up. More importantly, they enable applications to be abstracted from the environment in which they actually run. Containerization provides a clean separation of concerns: developers focus on their application logic and dependencies, while IT operations teams can focus on deployment and management without bothering with application details.

Enter Kubernetes

Deploying and managing these containers is still a significant challenge. In the past couple of years, Kubernetes burst onto the scene and became the de facto leader as the open-source container orchestrator for deploying and managing containers at scale. The hype has reached such a peak now that there are as many as 30 Kubernetes distribution vendors and over 20 Container-as-a-Service companies out there. All the major public clouds (AWS, Azure, and Google Cloud) provide Container-as-a-Service based on Kubernetes.

Six Pain Points ZeroStack Uniquely Solves for Kubernetes Deployments

With more than 30 Kubernetes solutions in the marketplace, it’s tempting to think that Kubernetes and its vendor ecosystem have solved the problems of operationalizing containers at scale and of automatically managing the elasticity of the underlying infrastructure that these solutions need to be truly scalable.

Far from it.

There are at least six major pain points that companies experience when they try to deploy and run Kubernetes in their complex environments:

Enterprises have diverse infrastructures

Bringing up a single Kubernetes cluster on a homogenous infrastructure is relatively easy with the current solutions in the market. But the reality is that organizations have diverse infrastructures using different server, storage, and networking vendors.

In this situation, automating infrastructure deployment and setting up, configuring, and upgrading Kubernetes to work consistently across that diversity is not going to be easy.

ZeroStack solves this pain point by providing a unifying platform that abstracts the diversity of underlying infrastructure (physical server, storage, and networking) and a standard open API access to infrastructure resources.

One Kubernetes cluster doesn’t address all needs

Organizations have diverse application teams, application portfolios, and sometimes conflicting user requirements. One Kubernetes cluster is not going to meet all of those needs. Companies will need to deploy multiple, independent Kubernetes clusters, possibly with different underlying CPU, memory, and storage footprints. If deploying and operating one cluster on diverse hardware is hard enough, doing so with multiple clusters is going to be a nightmare!

ZeroStack solves this pain point by providing the notion of logical business units that can be assigned to different application teams. Each application team gets full self-service capability within quota limits imposed by its IT team, and can automatically deploy its own Kubernetes cluster with a few clicks, independently of other teams. Each team's infrastructure may well differ from the others', but ZeroStack abstracts those differences away.
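To make the quota-limited self-service model concrete, here is a minimal sketch of how per-business-unit quota enforcement might work. The names (`BusinessUnit`, `deploy_cluster`) and the quota dimensions are illustrative assumptions, not ZeroStack's actual API.

```python
from dataclasses import dataclass

@dataclass
class BusinessUnit:
    """Hypothetical logical business unit with IT-imposed resource quotas."""
    name: str
    cpu_quota: int       # total vCPUs the BU may consume
    ram_quota_gb: int    # total RAM (GB) the BU may consume
    cpu_used: int = 0
    ram_used_gb: int = 0

    def can_deploy(self, cpus: int, ram_gb: int) -> bool:
        """True if the requested cluster fits within the remaining quota."""
        return (self.cpu_used + cpus <= self.cpu_quota
                and self.ram_used_gb + ram_gb <= self.ram_quota_gb)

    def deploy_cluster(self, cpus: int, ram_gb: int) -> bool:
        """Reserve resources for a new cluster; refuse if over quota."""
        if not self.can_deploy(cpus, ram_gb):
            return False
        self.cpu_used += cpus
        self.ram_used_gb += ram_gb
        return True

analytics = BusinessUnit("analytics", cpu_quota=64, ram_quota_gb=256)
print(analytics.deploy_cluster(cpus=48, ram_gb=192))  # True: fits quota
print(analytics.deploy_cluster(cpus=32, ram_gb=64))   # False: would exceed CPU quota
```

The point of the design is that IT sets the outer bounds once, and teams act freely within them without per-request approval.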

Development teams are often distributed in multiple sites and geographies

Companies do not build a single huge Kubernetes cluster for all of their development teams spread around the world. Building such a cluster in one location has disaster recovery implications, not to mention latency and country-specific data regulation challenges. Typically, companies want to build separate local clusters based on location, type of application, data locality requirements, and the need for separate dev, test, and production environments. A single pane of glass for management becomes crucial in this situation for operational efficiency and for simplifying the deployment and upgrading of these clusters. Strict isolation and role-based access control are often security requirements as well.

ZeroStack solves this pain point by providing a central way to manage diverse infrastructures in multiple sites and the ability to deploy and manage multiple Kubernetes clusters within those sites. Access rights to each of these environments are managed through strict BU-level and project-level RBAC and security controls. Furthermore, ZeroStack provides monitoring and analytics of the underlying infrastructure for all of those Kubernetes clusters. Once the ZeroStack platform is installed on customers' servers, users can deploy Kubernetes remotely from the ZeroStack App Store in a completely automated fashion. All of the analytics, monitoring, and other operational activities required to manage the underlying infrastructure are available from the SaaS-based Z-Brain component of ZeroStack; none of this software needs to be installed on-premises.
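The strict-isolation model described above can be sketched as a simple permission check keyed on business unit and project. The role names, permission table, and assignment tuples below are invented for illustration and are not ZeroStack's actual RBAC schema.

```python
# Hypothetical BU- and project-level RBAC: a user with no assignment in a
# given BU/project pair has no access at all (strict isolation).
ROLE_PERMISSIONS = {
    "bu_admin":  {"deploy_cluster", "delete_cluster", "view_metrics"},
    "developer": {"deploy_cluster", "view_metrics"},
    "viewer":    {"view_metrics"},
}

# (user, business_unit, project) -> role
ASSIGNMENTS = {
    ("alice", "payments", "checkout"): "bu_admin",
    ("bob",   "payments", "checkout"): "developer",
    ("bob",   "payments", "fraud"):    "viewer",
}

def is_allowed(user: str, bu: str, project: str, action: str) -> bool:
    """Deny by default: only an explicit assignment grants any permission."""
    role = ASSIGNMENTS.get((user, bu, project))
    return role is not None and action in ROLE_PERMISSIONS[role]

print(is_allowed("bob", "payments", "checkout", "deploy_cluster"))  # True
print(is_allowed("bob", "payments", "fraud", "deploy_cluster"))     # False: viewer only
print(is_allowed("bob", "billing", "invoices", "view_metrics"))     # False: no assignment
```

Deny-by-default is what makes the isolation "strict": membership in one project grants nothing in any other.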

Container Orchestration is just one part of running cloud-native applications and infrastructure operations

Developing, deploying, and operating large-scale enterprise cloud-native applications requires more than just container orchestration. For example, IT operations teams still need to set up firewalls, load balancers, databases, and DNS services, to name a few. They still need to manage infrastructure operations such as physical host maintenance and the addition, removal, and replacement of disks and physical hosts. They still need to do capacity planning, and they still need to monitor resource allocation, utilization, and performance across compute, storage, and networking. Kubernetes neither addresses nor concerns itself with any of these challenges.

ZeroStack provides AI-driven manageability for all the underlying infrastructure that runs Kubernetes. IT operations teams are provided with all the intelligence they need to optimize sizing, perform predictive capacity planning, and implement seamless failure management with the self-healing architecture. IT operations folks can easily deploy load balancers, set up DNS services, and install and configure virtual firewalls within ZeroStack. Since ZeroStack runs both VMs and containers on the same platform, persistent data stores such as SQL and Postgres databases can be deployed into VMs and made accessible to containerized applications. Again, all of this can be accomplished remotely from the Z-Brain.
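As a toy illustration of what predictive capacity planning means in practice, the sketch below fits a linear trend to daily storage usage and estimates when the pool runs out. The real Z-Brain analytics are presumably far richer; this only shows the underlying idea.

```python
def days_until_full(usage_gb, capacity_gb):
    """Estimate days until capacity is exhausted from a daily usage series.

    Fits a least-squares slope (GB/day) to the samples; returns None if
    usage is flat or shrinking (no exhaustion forecast).
    """
    n = len(usage_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage_gb) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_gb))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None
    return (capacity_gb - usage_gb[-1]) / slope

# Four days of samples growing 10 GB/day against a 1000 GB pool:
print(days_until_full([670, 680, 690, 700], capacity_gb=1000))  # 30.0
```

An operator alerting when the forecast drops below, say, 14 days is the simplest possible version of "predictive" planning: it flags the problem while there is still time to add hardware.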

Enterprises have policy-driven security and customization requirements

Enterprises have policies around using their specifically hardened and approved gold images of operating systems. These operating systems often need security configurations, databases, and other management tools installed before they can be used. Running such images on a public cloud may not be allowed, or they may run very slowly.

ZeroStack provides a data center image store where enterprises can create such customized gold images. Furthermore, ZeroStack gives fine-grained access control and security where a cloud administrator can share these images selectively with various development teams around the world, based on the local security, regulatory, and performance requirements. The local Kubernetes deployments are then carried out using these gold images to provide the underlying infrastructure to run containers.
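One way to picture the selective sharing described above is as a lookup that matches a deployment's region and compliance requirements against the image catalog. The catalog entries and image names below are hypothetical, not real ZeroStack metadata.

```python
# Hypothetical gold-image catalog: each image is tagged with the regions
# where it may be used and the compliance standards it satisfies.
IMAGE_STORE = [
    {"name": "ubuntu-22.04-hardened-v3", "regions": {"us", "eu"}, "compliance": {"cis"}},
    {"name": "ubuntu-22.04-gdpr-v2",     "regions": {"eu"},       "compliance": {"cis", "gdpr"}},
]

def pick_image(region, required_compliance):
    """Return the first image usable in `region` that meets every
    required compliance tag, or None if no approved image qualifies."""
    for img in IMAGE_STORE:
        if region in img["regions"] and required_compliance <= img["compliance"]:
            return img["name"]
    return None

print(pick_image("eu", {"gdpr"}))  # ubuntu-22.04-gdpr-v2
print(pick_image("us", {"gdpr"}))  # None: no GDPR-approved image for this region
```

Returning None rather than falling back to a generic image is deliberate: a deployment with no approved gold image should fail loudly, not silently use an unhardened base.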

Enterprises need a DR strategy for container applications

Any critical application and its associated data need to be protected from natural disasters, regardless of whether the application is containerized. None of the existing solutions provide an out-of-the-box disaster recovery feature for critical Kubernetes applications; customers are left to cobble together their own DR strategy.

ZeroStack provides remote data replication and disaster recovery between remote geographically-separated sites as part of its multi-site capabilities. This protects persistent data and databases used by the Kubernetes cluster. In addition, the underlying VMs that are running Kubernetes clusters can also be brought up at another site to provide an active-passive failover scenario.
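A core health check in any active-passive setup like the one described is verifying that the passive site's replica lag stays within the recovery point objective (RPO). The sketch below is a generic illustration of that check; the RPO value and site details are assumptions, not ZeroStack specifics.

```python
from datetime import datetime, timedelta

# Assumed RPO: the passive site may be at most 15 minutes behind.
RPO = timedelta(minutes=15)

def replication_healthy(last_applied_at: datetime, now: datetime) -> bool:
    """True if the passive site's most recently applied replica data is
    no further behind `now` than the RPO allows."""
    return now - last_applied_at <= RPO

now = datetime(2024, 1, 1, 12, 0)
print(replication_healthy(datetime(2024, 1, 1, 11, 50), now))  # True: 10 min lag
print(replication_healthy(datetime(2024, 1, 1, 11, 30), now))  # False: 30 min lag
```

When this check fails, a failover to the passive site would lose more data than the business has agreed to tolerate, so it is the natural alerting threshold for a DR runbook.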

ZeroStack is not just a container orchestrator; many open-source tools and companies provide that capability. ZeroStack provides an end-to-end container and VM management platform, all the way down to the infrastructure, that includes everything you need to confidently run your cloud-native applications in production. You can quickly deploy and run multiple clusters across geographically dispersed sites with the click of a button, with full self-service for developers while IT retains control. The ability to manage multiple server and storage solutions provides the basis for portability across infrastructure providers. Whether you are running containers or VMs on a single on-premises cluster or on multiple clusters across sites, with portability to and from AWS and Azure, ZeroStack is a unified container and VM infrastructure management platform for developers building ambitious cloud-native applications with containers and Kubernetes.

