Mind the Gap Between Kubernetes and the Rest of the Hybrid Data Center

In my previous post, I established the necessity for a hybrid cloud management solution that enables policy-driven application placement on premises and in the public cloud. The Kubernetes container scheduling and orchestration platform fulfills this requirement.


The Kubernetes website advertises the ultimate dream: planet scale without increasing the ops team. (source: www.kubernetes.io)

However, enterprises can only successfully leverage Kubernetes if they manage to integrate the platform with the rest of their hybrid data center infrastructure.

The vast majority of enterprises aims to adopt a commercial container management platform in addition to the Kubernetes scheduler (source: EMA research data, Q1, 2018).


Kubernetes is Infrastructure-Blind: Centralized Automation is Critical
Kubernetes is entirely blind to its underlying infrastructure, and deploying, configuring, managing, securing, and upgrading Kubernetes clusters on an ongoing basis is not a trivial task. Recent EMA research shows that integration of containers with the existing IT infrastructure, the ability of the existing corporate IT team to manage containers, and a unified security and compliance framework for containers and traditional IT are the three key customer demands and decision criteria when selecting container technologies today. All three of these requirements boil down to the topic that I have now been writing about for almost a decade: centralized declarative automation.

Integration, Management by Existing Team, and Security/Compliance Enforcement are Critical When Selecting Container Management Solutions (source: EMA research data, Q1, 2018).
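
To make "centralized declarative automation" concrete, here is a minimal sketch (the resource model and names are assumed, not tied to any specific product): the desired state is declared once in a central place, and a reconcile loop computes the corrective actions needed to converge the observed state toward it.

    # Minimal illustration of declarative automation: the desired state is declared
    # once and stored centrally; a reconcile loop computes the corrective actions
    # needed to converge the observed state toward it (all names are hypothetical).

    # Desired state, declared centrally (e.g., kept under version control).
    desired_state = {
        "web": {"replicas": 3, "image": "registry.example.com/web:1.4.2"},
        "api": {"replicas": 5, "image": "registry.example.com/api:2.0.1"},
    }

    # Actual state, as observed in the environment (hypothetical snapshot).
    actual_state = {
        "web": {"replicas": 2, "image": "registry.example.com/web:1.4.1"},
        "api": {"replicas": 5, "image": "registry.example.com/api:2.0.1"},
    }

    def reconcile(desired: dict, actual: dict) -> list[str]:
        """Return the corrective actions needed to converge actual onto desired."""
        actions = []
        for name, spec in desired.items():
            current = actual.get(name)
            if current is None:
                actions.append(f"create {name} with {spec}")
            elif current != spec:
                actions.append(f"update {name}: {current} -> {spec}")
        for name in actual:
            if name not in desired:
                actions.append(f"delete {name} (not declared in desired state)")
        return actions

    if __name__ == "__main__":
        for action in reconcile(desired_state, actual_state):
            print(action)  # a real controller would apply these actions, not print them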


Users today expect a superior application experience that constantly delivers the latest and greatest features in a reliable and well-performing manner. Nobody wants to hear excuses about resource contention or reliability. Every time a developer has to worry about defining, deploying, or even managing infrastructure, the delivery of features and capabilities that may sway prospects into becoming customers is delayed. Containers can help, but they are merely an efficient vehicle for delivering and managing application runtime environments.

Gluing the Infrastructure to the App
While Kubernetes is today’s standard for managing container clusters, Kubernetes application deployment and lifecycle management requires significant skill, knowledge, and integration with the underlying compute, network, storage, and virtualization infrastructure. The key challenge consists of making all of these infrastructure and traditional management components a part of the application management process. Let us break down this challenge into individual tasks:

  • Deploy the container management plane: Kubernetes needs to be configured for the required degree of performance, availability, scalability, security, and regulatory compliance. This configuration includes all the services belonging to the Kube Controller Manager that manage local Kubelets, and the ones belonging to the Cloud Controller Manager that enable the policy-driven shifting of Kubelets into one or more public clouds.
  • Cluster management: Manage the Kubernetes services for upgrades, disaster protection, high availability, secrets management, networking, and persistent storage.
  • Container monitoring: Application containers, as well as the Kubernetes management plane, need to be included in the overall application, virtualization, and infrastructure monitoring strategy.
  • Implement application-centric policy rules: Instead of storing policies in PDF documents, the respective experts should code them into the platform, so that the container scheduler's policy engine can enforce them automatically (a minimal policy-as-code sketch follows this list).
  • Deploy application containers: Applications consisting of services or microservices that run on multiple containers need to be deployed and managed as one, even though individual services may follow separate release lifecycles (a deployment sketch follows this list).
  • Deploy targeted workspaces for development: Developer workspaces need to be optimized based on the type of project and role of the respective developers. Before they are delivered, they need to be pre-integrated with the container management platform, typically via the CI/CD platform.
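
To illustrate the policy point above, here is a minimal policy-as-code sketch; the manifest and the rule (requiring CPU and memory limits on every container) are assumed examples, and in practice such a rule would typically run inside an admission webhook or a policy engine such as OPA Gatekeeper rather than as a standalone script.

    # Minimal policy-as-code sketch (assumed example): flag any Deployment whose
    # containers lack CPU/memory limits. In practice a rule like this would run in
    # an admission webhook or a policy engine rather than as a standalone script.

    def resource_limit_violations(deployment: dict) -> list[str]:
        """Return a violation message for every container without CPU/memory limits."""
        violations = []
        containers = (
            deployment.get("spec", {})
            .get("template", {})
            .get("spec", {})
            .get("containers", [])
        )
        for container in containers:
            limits = container.get("resources", {}).get("limits", {})
            if "cpu" not in limits or "memory" not in limits:
                name = container.get("name", "<unnamed>")
                violations.append(f"container '{name}' has no CPU/memory limits")
        return violations

    # Hypothetical manifest, already parsed from YAML/JSON into a dict.
    deployment_manifest = {
        "kind": "Deployment",
        "metadata": {"name": "billing-api"},
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "billing-api", "image": "registry.example.com/billing:1.0"}
                    ]
                }
            }
        },
    }

    for violation in resource_limit_violations(deployment_manifest):
        print("policy violation:", violation)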

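And to illustrate the application deployment step, below is a hedged sketch using the official Kubernetes Python client (the kubernetes package); the image name, labels, and namespace are illustrative placeholders, and a real pipeline would normally apply the same definition from version control via the CI/CD platform.

    # Sketch: deploying one service of a multi-container application with the
    # official Kubernetes Python client. Image, labels, and namespace are
    # placeholders; a real pipeline would usually apply the same definition
    # declaratively from version control via the CI/CD platform.
    from kubernetes import client, config

    def deploy_frontend() -> None:
        config.load_kube_config()  # or config.load_incluster_config() inside a pod

        container = client.V1Container(
            name="frontend",
            image="registry.example.com/shop/frontend:1.2.0",  # hypothetical image
            ports=[client.V1ContainerPort(container_port=8080)],
            resources=client.V1ResourceRequirements(
                limits={"cpu": "500m", "memory": "256Mi"}
            ),
        )
        template = client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "shop", "tier": "frontend"}),
            spec=client.V1PodSpec(containers=[container]),
        )
        spec = client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "shop", "tier": "frontend"}),
            template=template,
        )
        deployment = client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name="shop-frontend"),
            spec=spec,
        )

        client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

    if __name__ == "__main__":
        deploy_frontend()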

Kubernetes Will Relentlessly Expose Your Automation Weaknesses
In summary, Kubernetes mercilessly exposes automation deficiencies: automation silos, undocumented and non-parameterized scripts, and DevOps pipelines that still include manual steps. Wherever manual steps remain, there is a significant risk of inconsistencies and of security, policy, and scalability issues that quickly lead to legal exposure, compliance violations, and cost overruns.
