I came across this blog from IBM’s Cloud Architecture Center – “Reference architectures can speed your route to cloud” – which describes a tool to help you get started on your cloud journey with IBM. Think of it as a way to parse which parts of IBM’s portfolio are best suited to meet your needs, and to learn more about IBM’s offerings so you are better prepared to sit with solution architects. And I asked myself, “Who is this tool intended for?”
This is not just about IBM; this is true for most cloud solution vendors that want to sell you tools and technology to build a cloud yourself. Think of IBM Cloud, VMware vCloud Suite, on-premises Azure, or Nutanix Enterprise Cloud. Essentially, what they’re saying is that their solution is so complex that you need a consultative tool to guide you through the process of building a solution.
But what if you didn’t have to traverse this complex maze of tools? What if a complete, integrated, fully functioning, turnkey cloud platform could be available to you in under 30 minutes, and the solution were an intelligent one at that, requiring you to lift not a finger in ongoing administration and management? Sounds too good to be true? We say NOT. Read on…
Not everyone can afford a multi-million dollar ball and chain from one of the big cloud solution vendors. And wouldn’t it be nice if everyone could access incredibly powerful technology without a large consultative overlay or overly complex technology? Democratizing technology is a dream of many, but few get there. Likewise, assembling a cloud technology stack, monitoring tools, and operations software, then trying to build a cloud yourself with humans handling most aspects of running it, is not going to scale for businesses that are comparing that approach with a public cloud like AWS, GCP, or Azure.
So is there a better way? Yes! It involves using software to abstract the complexity so you never see it, and software that learns your needs and guides you toward informed decisions. Software should learn about you and tailor its forward-looking behavior to you.
We all see the rise of self-driving cars. Cars appear simpler to use and drive every day, and with GPS, we all get lost a lot less! And with parking assist and lane departure warnings, we have a lot fewer accidents and ‘dings’ from bad parking jobs. Computers now tell the service people what is wrong. Troubleshooting is done for you.
If cars, which are extremely complex machines, can take this new route, how about IT infrastructure? Well, that is the point of all this chatter around artificial intelligence and machine learning. If software could learn from you, learn your biases and behavior, and help you predictively drive to new business heights with IT, why wouldn’t you try it?
Best of all, if your IT infrastructure could self-drive, you could minimize the number of mundane tasks you do, freeing you to take a more strategic view of your job, or gain more time for training and planning.
This is not a prediction – it is where the software industry is taking us. Soon, data center operations will use artificial intelligence (AI) so advanced that it can anticipate issues, decide the right course of action, and deliver changes faster and more accurately than any human, or group of humans, possibly could. On-site management will be performed through cloud-based AI services, effectively eliminating the integration, operations, and management overhead needed to run the infrastructure. Companies will consume their on-site infrastructure through a web portal, much as they interact with AWS, Azure, and Google today.
ZeroStack is at the forefront of this trend. ZeroStack uses AI to deliver a self-driving, fully integrated private cloud platform that offers the agility and simplicity of public cloud at a fraction of the cost. Your private cloud self-manages through advances in AI and computer science, allowing you to focus on your core business rather than cloud operations.
The core of the solution is an AI engine, called Z-Brain, which is built from a big-data cluster to observe and guide cloud decisions. Change events, statistics and health checks are relayed up to Z-Brain to do complex event processing, thereby increasing automation levels and improving mean time to recovery, for example. Your existing administrators can monitor and manage application workloads, performance and capacity planning for VMs, CPUs, storage, and networking, so there’s no need to hire cloud experts. In fact, application developers, support teams or sales engineers get 1-click deployment templates through the Z-AppStore, which is integrated into our Intelligent Cloud Platform.
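The complex event processing described above can be pictured as a simple correlation loop over the telemetry stream. The sketch below is purely illustrative: the class name, event shape, and failure threshold are assumptions for the example, not ZeroStack’s actual Z-Brain implementation.

```python
from collections import defaultdict

# Hypothetical sketch of event correlation over health-check telemetry.
# All names and thresholds here are illustrative assumptions.

FAILURE_THRESHOLD = 3  # consecutive failures before raising an action


class HealthCheckProcessor:
    """Correlates per-resource health-check events into remediation actions."""

    def __init__(self, threshold=FAILURE_THRESHOLD):
        self.threshold = threshold
        self.failures = defaultdict(int)  # resource -> consecutive failures

    def process(self, event):
        """Consume one event dict: {'resource': str, 'status': 'ok' | 'fail'}.

        Returns a remediation action string when the failure threshold
        is crossed, otherwise None.
        """
        resource = event["resource"]
        if event["status"] == "fail":
            self.failures[resource] += 1
            if self.failures[resource] == self.threshold:
                return f"restart {resource}"
        else:
            self.failures[resource] = 0  # a healthy check resets the streak
        return None


# Usage: a short stream of health checks for two VMs.
processor = HealthCheckProcessor()
events = [
    {"resource": "vm-1", "status": "fail"},
    {"resource": "vm-2", "status": "ok"},
    {"resource": "vm-1", "status": "fail"},
    {"resource": "vm-1", "status": "fail"},  # third consecutive failure
]
actions = [a for e in events if (a := processor.process(e)) is not None]
```

The point of the sketch is the pattern, not the rule: individually uninteresting events become actionable once correlated over time, which is where an AI engine can raise automation levels and cut mean time to recovery.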
Customers deploy ZeroStack for a variety of reasons: to reduce complexity, to increase agility, and in many cases to control OpEx. Whatever their motivation, they all agree that the future of the datacenter has arrived sooner rather than later.
# # #
Original IBM Blog
December 7, 2016 | Written by: James Belton
One of the most difficult aspects of choosing a cloud road map is knowing where to start. In some respects, it’s also tough to know where to finish, even if there is already some cloud adoption in the organization.
It’s a journey that thousands of organizations have already traveled. If only there was a way to emulate their success and tap into their expertise (as well as avoid the mistakes they made). Well, good news: there is.
The IBM Cloud Architecture Center provides this expertise in the form of dozens of reference architectures, organized into a library under the headings cognitive, data and analytics, DevOps, e-commerce, hybrid, Internet of Things, microservices, and mobile.
An IBM reference architecture is a design blueprint based on clients’ real-life experiences implementing cloud projects; not just one or two, but hundreds or thousands. Just as a builder may use a set of blueprints to successfully build houses, reference architectures can be reused to successfully build working IT architectures.
It’s important that the reference architecture is at the right level of granularity. This is made possible by micro and macro patterns. Just as a builder’s blueprint for a house may be divided into macro patterns for the upstairs and downstairs, and then into micro patterns for particular rooms such as the kitchen and bathroom, likewise, a cloud reference architecture for a hybrid data warehouse may be broken down into macro patterns for data sources and data integration, and micro patterns around the deployment of actual products such as IBM Bluemix Data Connect. Micro patterns are often split into groups of use cases, which can be broken down into the individual actions that get the job done; for example, “build kitchen cupboard” or “provision IBM dashDB.”
An important thing to remember about patterns is that they are repeatable sets of actions which achieve particular outcomes. The best patterns are those which have been developed over time, because as they update and evolve, they have more and more experience built into them and are more likely to be accurate, resulting in the desired outcome.
Using a good reference architecture therefore saves time. The outcome is more certain, with no blind alleys where work has to be restarted to correct a mistake. Using the patterns within the reference architecture will also save on costs, because organizations invest only in the tools that are proven necessary, and the entire process is more predictable.
As well as architecture patterns, the IBM Cloud Architecture Center also provides sample code and demos to really get your cloud apps going.
Why reinvent the wheel when IBM reference architectures can get you motoring to the cloud?