Before the era of virtualization, deploying an application was arduous. IT admins would purchase physical hardware and rack and stack it. They would then install the OS needed by the application, followed by middleware and other dependent software. Finally, they would install the actual application. The entire process would take anywhere from 4 to 6 weeks. Each application would typically be deployed on its own server to keep it isolated and secure from other users and applications, at the cost of a slow, inflexible process. In addition, reliability was achieved through hardware replication and redundancy, which led to a much more expensive solution and silos within the datacenter.
From 2002 onwards, we witnessed a rapid adoption of virtualization in IT. In 2012, Gartner announced that more than 50% of the servers deployed worldwide were virtual as opposed to physical. Enterprises adopted virtualization to gain the benefits of server consolidation, ease of management, and some measure of agility for their business units. This decade-long transformation started with VMware's launch of VirtualCenter 1.0 in 2003. VMware's primary focus since then has been to give infrastructure admins more power and tools, with technologies like vMotion, DRS, and HA.
In 2006, Amazon launched a beta version of EC2, offering a public cloud service. In contrast to VMware, Amazon EC2 hid all the infrastructure details and focused on application developers, who could use a cloud API without any knowledge of the underlying infrastructure. Developers could now provision resources faster than their IT ticketing system ever could.
The focus of the two solutions could not be further apart.
Lack of Convergence:
Over time, VMware added features to make its platform suitable for application developers: vApps, vCloud, vCAC, and Cloud Foundry, among others. Similarly, EC2 exposed more of its underlying infrastructure, with technologies like CloudWatch to help application developers troubleshoot their problems. VMware was building the solution from the bottom up, while EC2 was building it top down. The same comparison holds true for Azure and other public clouds.
One would expect that, over time, these two trends would converge, and that enterprises would be able to consume a cloud either on-premises or off-premises based on application needs. That convergence would require two fundamental changes:
- A private cloud should be as easy to install, consume, and operate as a public cloud
- A public cloud should provide the visibility, control, security, and performance of a private cloud
Unfortunately, both of these changes are much harder in practice, and the trends have moved in opposite directions. Private cloud architecture has become more complex to build, manage, and operate: the number of services, databases, and software and hardware components needed has grown over time. Meanwhile, public cloud wars are being fought over price, leading providers to optimize for scale and cheaper hardware rather than for predictability, performance, or control.
It would be great for enterprises to see these two models converge into something that offers the best of both worlds without the drawbacks of either.
Follow @0stack to learn more about ZeroStack!