Software is eating the world, and you need to feed the beast

John Burke, Principal Research Analyst with Nemertes Research

John advises key enterprise and vendor clients, conducts and analyzes primary research, and writes thought-leadership pieces across a wide variety of topics. John leads research on private and public cloud; private cloud infrastructure; private and hybrid cloud management and security; network, server and storage virtualization; and software-defined networking (SDN), SD-WAN, and network functions virtualization (NFV).

As the current wave of digital transformation sweeps through the enterprise, the need for bespoke applications, whether to empower staff or to fully engage customers, is becoming paramount. Companies of all types are counting on their developers to help them survive and thrive in a rapidly evolving landscape, and those developers are counting on IT for an infrastructure that can keep up with them.

Why is keeping up so hard? Because application development is getting faster, so fast that long-standing IT practices of buying capacity ahead of projected need and provisioning resources manually as they are requested can no longer keep up. That means IT can't provide the elasticity developers need to spin up new environments in a few minutes, throw them away, and start over a few minutes later.

DevOps underlies the development speed-up. It melds traditionally separate domains (development, QA testing, and operational deployment and management) into a single organization, and combines that merger with agile development practices, heavy automation, continuous delivery, and related techniques. Applied aggressively, DevOps accelerates application delivery by orders of magnitude.

As DevOps-driven changes to development and release practices continue to spread, the need for highly responsive, lightweight provisioning processes is spreading, too. You can't run a DevOps-based development effort when admins are still manually allocating VMs, manually connecting them to storage, and manually configuring their networking, all in an environment where new compute, storage, and network components are inserted into the data center by hand, one at a time. Manual operation is not, and can never be, fast enough, flexible enough, or scalable enough.
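To make the contrast concrete, here is a minimal sketch of the programmatic alternative, using the openstacksdk Python library against an OpenStack-style private cloud. The cloud, image, flavor, and network names are illustrative assumptions, not a recommendation; the point is the shape of the workflow: a disposable environment spun up in minutes, used, and thrown away.

```python
import openstack

# Connect using a named entry from clouds.yaml ("dev-cloud" is an
# illustrative name, not a real deployment).
conn = openstack.connect(cloud="dev-cloud")

# Look up pre-existing building blocks by name (all names hypothetical).
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("dev-net")

# Spin up a throwaway environment: the work an admin once did by hand.
server = conn.compute.create_server(
    name="scratch-env-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# ... run builds and tests against the environment ...

# Then throw it away and start over a few minutes later.
conn.compute.delete_server(server)
```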

IT needs to build its own cloud to support a cloud-ready and cloud-first development paradigm. Starting at the base, it needs infrastructure that is ready to be run like a cloud, and that it can expand by adding components and expecting them to be automatically configured and brought into service as part of the cloud's resource pool. Atop that, it needs cloud automation and orchestration to allow push-button deployment of resources, powering self-service access to a catalog of services: complex services, not just a single VM, but whole constellations of VMs and containers, storage, and networks. And it needs by-the-workload metering of actual consumption of cloud resources, to allow proper accounting for use and close the loop that makes self-service sustainable.
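What might one of those catalog entries look like? The sketch below is purely hypothetical, modeling the shape of a service blueprint rather than any particular orchestration product's API; every class, name, and field is an illustrative assumption.

```python
from dataclasses import dataclass, field

# All classes and names below are hypothetical, sketching the shape of a
# catalog entry rather than any real orchestration product's API.

@dataclass
class VMSpec:
    name: str
    flavor: str         # sizing, e.g. "m1.medium"
    image: str          # base image, e.g. "ubuntu-22.04"
    volume_gb: int = 0  # attached block storage, if any

@dataclass
class ServiceBlueprint:
    """One push-button catalog item: a constellation of resources."""
    name: str
    vms: list[VMSpec] = field(default_factory=list)
    networks: list[str] = field(default_factory=list)
    owner_cost_center: str = ""  # ties consumption back to the requester

# A three-tier development stack a developer could request as one unit,
# rather than filing three VM tickets and a networking ticket.
dev_stack = ServiceBlueprint(
    name="three-tier-dev-stack",
    vms=[
        VMSpec("web", "m1.small", "ubuntu-22.04"),
        VMSpec("app", "m1.medium", "ubuntu-22.04"),
        VMSpec("db", "m1.large", "ubuntu-22.04", volume_gb=100),
    ],
    networks=["frontend-net", "backend-net"],
    owner_cost_center="eng-4711",
)
```

The owner_cost_center field hints at the metering side of the loop: every deployed blueprint carries enough identity to account for what it actually consumes.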

Finally, it needs to be able to expose all this sophistication and automation to its developers in one or both of two ways: as IaaS, provisioned either manually or programmatically; and as PaaS, with even the deployment of virtualized units of compute, storage, and network abstracted away.
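The PaaS half of that distinction can be seen in a few lines. The interface below is a stand-in, not any real product's SDK; it simply shows how much disappears from the developer's view compared with the IaaS sketch earlier.

```python
class Platform:
    """Stand-in for a PaaS control plane; an illustrative API, not a
    real product's SDK."""

    def deploy(self, name: str, source_dir: str, runtime: str) -> str:
        # A real platform would build the app, provision and wire up the
        # underlying compute, storage, and network, then start the app
        # and return its URL. Here we just return a placeholder.
        return f"https://{name}.apps.internal.example"

# The developer hands over code and a runtime; no image, flavor, or
# network appears anywhere, in contrast with the IaaS sketch above.
url = Platform().deploy(
    name="order-tracker",
    source_dir="./order-tracker",
    runtime="python3.11",
)
```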

With a true private cloud that can meet the need for speed that digital transformation and DevOps are creating, IT will be in a position to fully support the next stages of the organization's evolution. Without it, IT will either find itself increasingly irrelevant to the shaping of the company's future, as developers are forced to work solely with external clouds, or become the limiting factor on innovation: the brakes slowing everything down and creating ongoing competitive disadvantages for the organization as a whole. Time to start clouding up.
