The lonely outpost of Mt. Fremont lookout in Mt. Rainier National Park. Source: Wikipedia
Before public clouds, enterprise infrastructure meant running highly reliable, high-performance, somewhat expensive hardware in your own datacenter. In the last decade, with the growth and adoption of public clouds, everyone assumed that on-premises infrastructure companies would soon be dead and that all workloads would eventually move to a few large public clouds.
However, the recent trend seems to be heading in the reverse direction: public cloud vendors have realized that private infrastructure management is a very large market, and that many workloads simply cannot be moved to a centralized, remote cloud. Public clouds are starting to look like the mainframes of cloud computing, and we also need the PCs of the cloud computing era. In most cases, the reasons come down to the physics of moving data and network latency. Some specific reasons why a lot of infrastructure needs to stay closer to the edge:
- Large data creation at the edge – IoT, media handling, network functions on user data
- High network latency between users or data and public clouds – high-frequency trading
- Limited network bandwidth and high jitter when accessing data over the WAN – VDI, media workloads, cameras and sensors deployed close to the edge
- Performance can be optimized much further at a smaller scale – high-frequency trading, HPC workloads, workload locality in distributed systems
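The bandwidth and latency constraints above are easy to quantify with back-of-the-envelope arithmetic. The sketch below uses illustrative assumptions (a 1 Gbps uplink, 10 TB/day of edge data, and roughly 200,000 km/s for light in fiber), not measurements:

```python
# Back-of-the-envelope numbers behind "the physics of moving data".
# All figures here are illustrative assumptions, not measurements.

def transfer_time_hours(data_tb: float, link_gbps: float) -> float:
    """Hours needed to move data_tb terabytes over a link_gbps WAN uplink."""
    bits = data_tb * 1e12 * 8            # terabytes -> bits
    seconds = bits / (link_gbps * 1e9)   # bits / (bits per second)
    return seconds / 3600

def rtt_floor_ms(distance_km: float) -> float:
    """Lower bound on round-trip time: light in fiber covers ~200,000 km/s."""
    return distance_km * 2 / 200_000 * 1000

# An edge site producing 10 TB/day over a 1 Gbps uplink spends most of the
# day just shipping its data to a remote cloud:
print(f"{transfer_time_hours(10, 1):.1f} hours")   # ~22.2 hours

# And no amount of protocol tuning gets a 2,000 km round trip below its
# physical floor:
print(f"{rtt_floor_ms(2000):.0f} ms RTT minimum")  # ~20 ms
```

Numbers like these are why "just upload it to the cloud" stops being an option once data volumes or latency budgets pass a threshold.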
Unfortunately, the Internet was not designed with quality of service as a key principle. Its main design goal was to maintain connectivity even in the presence of failures, providing best-effort service for the most part. For a very interesting historical perspective on this, read David Clark's paper on the design philosophy of the Internet protocols.
So the only way to serve such workloads is to put cloud infrastructure closer to the edge.
The novelty here is not that the infrastructure is moving to the edge. That has been the norm for decades and was the main consumption model before public clouds. The big difference is how the infrastructure is delivered and managed at the edge. That is going to be the main disruption for many existing vendors in this space, who still follow the older delivery, support, and management models. A public cloud vendor would not move to selling on-premises solutions if it didn't think it had an advantage over the many well-established incumbent solutions already deployed at the edge!
In these new on-premises products, AWS and Google ship software on top of standard commodity hardware, and once the software is installed, these machines dial back to the existing control plane within AWS or Google. Customers still log in to a SaaS-based application that now shows both the public cloud and private cloud resources. A user can then deploy workloads on any of these platforms based on where it makes sense to run the workload, not on the difficulty of building or operating a private cloud.
Furthermore, the on-premises infrastructure is completely managed, upgraded, patched, and operated through the cloud-based management software. So although the hardware is local to a customer, the operations and dashboards are all driven by the data and telemetry collected by the SaaS software. The other big benefit, obviously, is providing similar APIs across on-premises and public cloud infrastructure.
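In code terms, the model described above boils down to one request shape in which placement is just a parameter. A minimal sketch of the idea, where the names (`deploy_request`, the subnet IDs) are hypothetical stand-ins rather than a real cloud SDK:

```python
# Sketch of the "one control plane, two locations" model: the same launch
# request works for cloud and on-premises capacity; only the target subnet
# decides where the workload lands. All identifiers below are illustrative.

REGION_SUBNET = "subnet-cloud-1"    # lives in the public cloud region
OUTPOST_SUBNET = "subnet-onprem-1"  # lives on customer-site hardware

def deploy_request(image: str, instance_type: str, subnet: str) -> dict:
    """Build one launch request; only the subnet decides where it runs."""
    return {"image": image, "type": instance_type, "subnet": subnet}

cloud = deploy_request("img-web", "m5.large", REGION_SUBNET)
edge = deploy_request("img-web", "m5.large", OUTPOST_SUBNET)

# Identical API shape; placement is a parameter, not a different product.
assert cloud.keys() == edge.keys()
```

This is the design choice that separates "cloud managed infrastructure" from traditional on-premises products: the customer never learns a second API just because the hardware sits in their own building.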
If you are an incumbent vendor with an existing on-premises cloud, storage, or networking solution, it is clear that the industry is moving in the direction of "Cloud Managed Infrastructure".
If you are not offering an infrastructure solution with similar characteristics, it is going to be very hard to stay competitive in this new era.