- Falk Borgmann


Google is famous both for innovation and for well-rounded business models, particularly those that benefit Google itself. It is in this very context that the first production release of the Skaffold project was recently announced. Skaffold is currently the focus of intense debate, since it offers an interesting constellation of Google tools for developing container-based software solutions for cloud environments. All of this is centered on the orchestration tool Kubernetes, now the de facto standard for managing container applications in the cloud. A high-level overview of the technologies mentioned can be found in the previous article in this series. Google's approach in this case therefore begins with the provision of flexible open-source tools and ends (as of this writing) with the company's own managed cloud services.

The following post looks at the advantages and risks that arise in this specific context.

When producing a container-based software environment, a number of steps are necessary to turn written source code into something that can actually run. If you're working with multiple containers in a larger environment, such as a typical microservice architecture, deploying a container orchestration tool is a sensible step, and today Kubernetes is generally used for the management of container solutions. Despite the orchestration layer, however, plenty of hands-on work is still needed to get developed source code into a workable state inside Kubernetes and, of course, to maintain, test and manage that code.
This multi-stage process begins with authoring or modifying source code. Once the coding work is done, a container image needs to be built so that it can be deployed within a cluster. YAML configuration files (which contain the configuration descriptions) also need to be edited, for example to reference new software and container versions. This new information is then transferred to the cluster with a command executed on the command line. Finally, the end result needs to be verified to ensure that the intended system behavior has actually been achieved. The cycle starts over the next time the source code is modified, no matter how insignificant the change: it is effectively an infinite loop of code changes followed by deployment and verification. In the real world, in-house developers start knocking out scripts as fast as they can, with the aim of automating at least some of these steps.
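Concretely, the manual cycle described above might look something like this; the image name, registry and manifest path are purely illustrative:

```shell
# 1. Rebuild the container image after a code change
docker build -t registry.example.com/myapp:1.0.1 .
# 2. Push the new image to a registry the cluster can reach
docker push registry.example.com/myapp:1.0.1
# 3. Edit the image tag in the deployment manifest (k8s/deployment.yaml) by hand
# 4. Transfer the new configuration to the cluster
kubectl apply -f k8s/deployment.yaml
# 5. Verify that the rollout produced the intended behavior
kubectl rollout status deployment/myapp
```

Every one of these steps has to be repeated for every change, however small, which is precisely the toil described in the next paragraph.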
And this is where Google's Skaffold tool comes in. Open source, and therefore free to use, Skaffold sits on the developer's local machine and automatically packs new or modified source code into containers and delivers them to the Kubernetes cluster. The code can then even be started automatically. Run in the right mode, Skaffold detects local code changes and transfers them to a Kubernetes cluster in a fully automated process. It's a game-changing approach: it offers an effective way to simplify the work of development departments, because a large part of what would otherwise be manual activity can now be automated.
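This automation is driven by a single configuration file in the project. A minimal sketch, in which the image name and manifest path are assumptions chosen for illustration, might look like this:

```yaml
# skaffold.yaml: minimal sketch (image name and manifest path are illustrative)
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: registry.example.com/myapp   # image Skaffold rebuilds on each code change
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                        # manifests Skaffold re-applies after each build
```

Running `skaffold dev` in the project directory then starts the watch loop: every saved change triggers a rebuild, a redeploy to the configured cluster and streamed logs, replacing the manual cycle outlined above.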
Alongside this whistle-stop tour of the benefits of Skaffold, Google's Cloud Code add-on is also worth a look: an extension that plugs into an integrated development environment (IDE), that is, a collection of important software development tools sharing a common user interface. Cloud Code currently supports two of these IDEs, namely VS Code and IntelliJ IDEA. For developers, the benefits include YAML templates that allow control files to be created more quickly and easily, as well as validation of those files as they are written, before any deployment is initiated. Deployed together, Cloud Code and Skaffold have the potential to simplify, and therefore accelerate, the development of solutions in container-based cloud environments by a significant margin.
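The control files in question are ordinary Kubernetes manifests. A template of the kind such IDE tooling scaffolds might, for instance, resemble this minimal Deployment (all names and values are illustrative):

```yaml
# Illustrative Kubernetes Deployment manifest of the kind IDE templates generate
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2                  # number of identical pod instances
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.1   # tag updated on every release
          ports:
            - containerPort: 8080
```

Having the IDE check a file like this against the Kubernetes schema while it is being written catches typos in field names long before `kubectl apply` would reject them.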
On the other hand, none of this yet concerns deployment to IT infrastructure in productive use. Since both projects are open source, the model as described works exceptionally well without creating dependencies on managed services from the Google Cloud. A productive target system could just as easily be hosted on a company's own infrastructure (on-premises), or located in Azure or AWS.
This is also where it gets interesting. After all, automated deployment as part of a continuous deployment process (meaning the continual delivery of updated software without long planning periods or downtime) is obviously the icing on the cake here. Google lends a helping hand here too, in the shape of Cloud Build. Cloud Build is intended to remove this last obstacle on the road to deployment happiness, and can be activated relatively simply with a few lines in a skaffold.yaml file. So, with relatively little effort, it's possible to implement fully featured deployment processes in a productive Kubernetes cluster. One small detail should be mentioned, however. Cloud Build is currently available in two variants. The open-source variant runs builds locally and restricts itself to isolated builds on exactly one host; multiple build processes on multiple hosts are not possible with this version. Yet parallel builds are precisely what a useful microservice architecture calls for. To start multiple builds in parallel, Cloud Build can instead be consumed directly as a managed service from the Google Cloud, which creates the Docker images in Google's infrastructure: you connect the local development environment to the cloud service, and the images are built there. But if your Kubernetes cluster happens to be running on Azure or AWS, you're out of luck. I see this as a clever tactical move from Google: only the last mile is made publicly accessible, as a teaser, so to speak. While you do get a look at the menu, you only get to enjoy the meal if you sit down at the Google Cloud table.
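Those "few lines" amount to a change in the build section of the Skaffold configuration. A sketch, in which the project ID and image name are assumptions and a Google Cloud project with the Cloud Build API enabled is presupposed:

```yaml
# skaffold.yaml fragment: delegating image builds to the managed Cloud Build service
# (project ID and image name are illustrative)
build:
  googleCloudBuild:
    projectId: my-gcp-project               # GCP project that runs the builds
  artifacts:
    - image: gcr.io/my-gcp-project/myapp    # image built and stored in Google's infrastructure
```

Swapping this in for a local Docker build is all it takes to move the build step, and with it a piece of the pipeline, into the Google Cloud.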

So, to recap:
Skaffold and Cloud Code are tools that can simplify and accelerate development in a container environment using Kubernetes. However, they aren't suitable for managing productive deployment pipelines as part of a continuous deployment strategy. To fill that gap, Google gives us Cloud Build, which can be docked onto a development environment relatively easily. However, Cloud Build is only fully functional in the Google Cloud itself or in a Kubernetes cluster hosted there. As a result, the components we've been looking at certainly offer value as part of development processes, but ultimately end up looking more like window dressing for Google's managed services and the Google Cloud. We should always read the small print before jumping enthusiastically onto the managed-services bandwagon: the flexibility gained in development is not worth much if you can't map out your deployment processes without involving the service provider. Over the long term, no enterprise truly benefits from proprietary Kubernetes implementations that, while they do tidy up a few things, are only available under specific conditions from a specific provider. And while taking the first steps without managed services might prove more expensive, you're always better advised to invest in your own know-how: independence in the field of IT infrastructure is a sure bet in the long run.