MULTI CLOUD: A DECLARATION OF INDEPENDENCE FOR IT

- Falk Borgmann

The first part of this post explained the various approaches to a cloud strategy. Whether IaaS, PaaS or even SaaS: every model brings risks as well as benefits, although the risks are often ignored in public debate, and sometimes even deliberately swept under the rug. An honest debate of that kind is not necessarily in the interests of the major cloud providers, and it obviously doesn’t quite line up with their self-image, because the typical cloud business model can best be described as a one-way street: moving to the cloud is quick and easy, but getting back out is a major headache. So it’s no surprise that major service providers sometimes have guest speakers at their events sign a kind of NDA in which they promise not to mention the word ‘multi-cloud’.
A quick recap for cloud newbies: For small or midsized businesses, extensive use of the services on offer can indeed be sensible when compared with the capital expenditure otherwise required for internal, on-premise IT solutions.
In the enterprise segment, however, a cloud strategy truly comes into its own only when IT flexibility is taken into account. And this is where the all-too-familiar vendor lock-in has been one of the key problems of the last ten years. Lock-in does not merely result in potentially high costs and more rigid structures: it can also act as a brake on innovation and the adoption of new technologies. The following scenario is therefore aimed more at larger companies and corporations that have decided to implement an IT model with the goal of safeguarding their own independence.

A Blue Sky Project
Let’s assume for a minute that we can design our IT infrastructure without having to cater to the sensibilities of any of the protagonists we may find in our organization. What would the wish-list look like for this flexible yet efficient IT infrastructure? The first step is pretty simple, really: we would try to avoid everything that would leave our company directly dependent on a single cloud provider. That kind of dependency translates into a lack of flexibility and (potentially) higher costs. At the start of our scenario, we therefore once again adopt a conventional IaaS (infrastructure as a service) approach. The three major building blocks that make up infrastructure, namely storage, computing, and network, can be replaced comparatively easily as long as we’ve chosen the right architecture for them.

To ensure maximum flexibility, we’d be wise to use a container technology. In simple terms, this is a slightly different virtualization layer: unlike a conventional virtual machine (VM), each individual container does not require its own operating system. In multi-layered microservice infrastructures (where many containers make up one system solution), this can greatly simplify operations, since it becomes easier to narrow down the cause of an error when a fault occurs. Handling container-based applications is very much like working with a construction set, whereas a conventional VM represents a complete server (with everything that implies).

Since container landscapes are designed to grow as complex as needed, a container orchestration tool is always used in real-world scenarios. Kubernetes is probably the best known of these tools and is maintained as an open-source project under the Cloud Native Computing Foundation, part of the Linux Foundation. This kind of tool manages the various container runtime environments (such as Docker). Within a cluster, the orchestration layer ensures that the required number of containers is continuously available. Since this layer stops and starts instances on its own, it is ideally a highly autonomous management layer.
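
To make the orchestration idea concrete, here is a minimal sketch using the official Kubernetes Python client (the `kubernetes` package). It declares a Deployment with three replicas, which the cluster’s control loop then keeps running, restarting containers as needed. The names, image, and namespace are purely illustrative assumptions, not taken from the text above.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes a reachable cluster).
config.load_kube_config()

# Declare a Deployment: "run three replicas of this container at all times".
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",  # illustrative image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Hand the desired state to the orchestration layer; Kubernetes keeps it true.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```
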
But: buyer beware! This is where the first pitfall lurks when setting up a multi-cloud-capable infrastructure: the Kubernetes cluster has to remain fully functional across multiple cloud providers, and you won’t get far here by relying on the proprietary Kubernetes services of individual cloud providers. At this point, you’d be well-advised to build up your own know-how and create the deployment yourself. To help with this, Helm (another project under the Cloud Native Computing Foundation) can be used, for example, to make the installation, and later updates, easier to organize from an operational perspective. Combined with such a Kubernetes architecture, the platform’s automated scaling options are particularly useful for handling fluctuating load.
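
The automated scaling mentioned above is typically expressed as a HorizontalPodAutoscaler. The following sketch, again using the Kubernetes Python client, attaches one to the hypothetical demo-web Deployment from the previous example; the thresholds and replica limits are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

# Scale the (hypothetical) demo-web Deployment between 2 and 10 replicas,
# targeting roughly 70% average CPU utilization across its pods.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="demo-web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="demo-web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```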


Copyright: Deepshore GmbH

As the figure illustrates, the orchestration layer itself can be operated in the cloud using a distributed model; together, all of the instances make up one system cluster. Distributed solutions are naturally a good fit wherever they are modelled on the microservices principle. In other words: these architectures are tailor-made for managing many small containers. Squashing a monolithic ERP system into five huge containers and then operating these as a cloud service is of course also an option, and as the vendor you could even advertise this as ‘cloud-ready’. A more clear-eyed view, however, would be to call this approach ‘old wine in new bottles’.
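
How a cluster spreads its workload across more than one provider can be made explicit in the pod specification. The sketch below assumes the worker nodes carry a hypothetical label such as provider=cloud-a / provider=cloud-b and uses a topology spread constraint so that replicas are balanced across those providers; the label name and values are illustrative, not prescribed by the text.

```python
from kubernetes import client

# Pod template fragment for a multi-provider deployment: spread replicas
# evenly across nodes labelled with a (hypothetical) "provider" key.
spread = client.V1TopologySpreadConstraint(
    max_skew=1,                # replica counts per provider may differ by at most 1
    topology_key="provider",   # assumed node label, e.g. provider=cloud-a / cloud-b
    when_unsatisfiable="ScheduleAnyway",
    label_selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
)

pod_spec = client.V1PodSpec(
    topology_spread_constraints=[spread],
    containers=[client.V1Container(name="web", image="nginx:1.25")],
)
```
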
Another key advantage of the scenario we’ve sketched out, with system components distributed and delivered as containers, is the ability to replicate data. Exploiting such mechanisms requires us to think in terms of distributed systems from the outset. Compared with monolithic approaches, we now have the opportunity to use standard mechanisms to distribute data across several cloud providers without ever leaving our logical system (see next figure). This automatically leads to a situation in which individual containers or even entire cloud providers can be ‘switched off’ without harming our overall business logic. Integration with external parties (such as customers or business partners) can also be made more flexible, since independent points of entry into the various infrastructures can be defined on demand.


Copyright: Deepshore GmbH
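
A distributed datastore with built-in replication illustrates this principle. The text does not prescribe a specific product; as one possible example, Apache Cassandra can replicate a keyspace across logical ‘data centers’ that are simply mapped to different cloud providers. The sketch below uses the Python cassandra-driver; the contact points and data-center names are illustrative assumptions.

```python
from cassandra.cluster import Cluster

# Contact points located in two different clouds (illustrative addresses).
cluster = Cluster(["10.0.1.10", "10.1.1.10"])
session = cluster.connect()

# Keep two copies of every row per provider; 'cloud_a' and 'cloud_b' are
# the (assumed) data-center names the Cassandra nodes were configured with.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS orders
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'cloud_a': 2,
        'cloud_b': 2
    }
""")

# If one provider is 'switched off', the other still holds a full replica set.
```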

Centralized systems (including those running in the cloud) naturally have features that make them well suited to particular kinds of business use cases. Relational databases and the guarantees offered by their ACID-compliant transactions are a familiar example. For applications where data require special levels of protection (such as tax compliance), we continue to rely on these proven technologies, which are also offered as cloud services. But given the scenario described above, we have to ask ourselves what benefits these SaaS models offer over IaaS models in the enterprise segment. True, we no longer have to worry about hardware or patches, but we do hand the ‘keys to the kingdom’ to the cloud provider, which makes any future migration much more complicated.
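
What those transaction guarantees buy can be shown with a tiny, self-contained example. The sketch below uses Python’s built-in sqlite3 module (chosen only because it needs no setup; it is not meant to stand in for an enterprise database): a two-step transfer either commits as a whole or is rolled back as a whole.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER NOT NULL CHECK (balance >= 0))"
)
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 100), ("B", 0)])
conn.commit()

try:
    with conn:  # one transaction: commit on success, roll back on any exception
        conn.execute("UPDATE accounts SET balance = balance - 60 WHERE id = 'A'")  # succeeds
        conn.execute("UPDATE accounts SET balance = balance - 60 WHERE id = 'B'")  # violates CHECK
except sqlite3.IntegrityError:
    pass  # the failed transfer leaves no partial update behind

print(list(conn.execute("SELECT id, balance FROM accounts")))  # [('A', 100), ('B', 0)]
```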

So we end up asking ourselves four basic questions:

  • Why do I want to use a cloud service (costs, quality, and/or greater flexibility)?
  • What do I want to run in my cloud?
  • How will I run it in the cloud?
  • What short-term and long-term consequences will this have for my IT, especially in terms of greater flexibility?

A well-designed and carefully planned cloud model makes a lot of sense. But you’d also be well advised not to take an all-or-nothing approach, and to steer well clear of dogmatic positions when thinking about cloud services. All too often, such positions are codified as an ‘IT strategy’ with a specific focus. When weighing the trade-offs between costs, service quality, and flexibility (or the avoidance of dependencies), clear-headed objectivity is called for. If an IT architect tries to convince you that a cloud-only strategy with one major provider is the only sensible option, it certainly wouldn’t hurt to get a second opinion. “Always act to increase your available options.” Every IT decision-maker should place a copy of this quote from Heinz von Foerster next to their consultants’ PowerPoint slides whenever the topic is strategy, i.e. the long-term prospects of their own IT unit.
