k6.io is an open-source framework that, together with our enhancements, makes smoke and load testing of cloud-native deployments quick and easy. In this article, we walk through a fictitious example of how k6.io can help configure and run such tests. Our extended setup works both for one-time system acceptance and for continuous use throughout a product's lifecycle...and it's available to everyone for free!
How can a cloud-native solution running on Kubernetes/OpenShift be thoroughly tested? Sure, Kubernetes itself provides a whole range of mechanisms with which resources can almost monitor themselves: readiness/liveness probes, health checks, custom resource definitions, lifecycle policies, and much more.
But what about a full-fledged load or acceptance test in preparation for actual production operation? Unlike (minimal) functional tests, a deployment should pass all necessary tests under a realistic, albeit simulated, load before going into production.
Initiating this process manually every time is one problem; continuous testing as part of Day-2 operations is another. In both cases, the k6.io framework, an open-source tool for performance testing on Kubernetes, comes into play.
First, it is important to clarify some central terms in software testing: what is a smoke test, and what lies behind terms like load and performance testing?
For the smoke test, imagine the following scenario: someone has completed a hobby project, such as their own transistor radio, and now turns it on for the first time. If it pops and the radio disappears in a cloud of smoke, it has not passed the smoke test. Such a test is thus a kind of functional test under minimal load. Translated to cloud-native software, a smoke test determines whether the solution works as expected after deployment. It is also useful for newly introduced functionality: do the new components run as desired?
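In k6 terms, such a smoke test can be as small as a single virtual user issuing one request and checking the response. A minimal sketch (the URL is a placeholder, not a real endpoint):

```javascript
// Minimal k6 smoke test: one virtual user, one iteration.
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 1,        // a single virtual user
  iterations: 1, // run the scenario exactly once
};

export default function () {
  // Placeholder URL; point this at your deployment's health endpoint.
  const res = http.get('https://example.com/healthz');
  // The deployment "works as expected" if it answers with HTTP 200.
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
}
```

Run with `k6 run smoke.js`; the script requires the k6 runtime, not Node.js.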
In contrast, load or performance tests clarify how a (software) system behaves under a certain load, e.g., parallel requests from a large number of users per second. These tests are important to check whether the deployment remains performant and reliable even under real conditions, i.e., under load. Furthermore, these tests provide important indicators for planning SLA/SLO during operation.
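Those SLA/SLO indicators can even be encoded directly in a k6 script as thresholds, so a test run fails automatically when targets are violated. A sketch with illustrative values and a placeholder URL:

```javascript
// k6 thresholds express SLO-style targets as pass/fail criteria.
// The numeric targets and URL below are illustrative assumptions.
import http from 'k6/http';

export const options = {
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests under 500 ms
    http_req_failed: ['rate<0.01'],   // less than 1% failed requests
  },
};

export default function () {
  http.get('https://example.com/'); // placeholder endpoint
}
```

`http_req_duration` and `http_req_failed` are built-in k6 metrics; the run exits with a non-zero code if a threshold is breached.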
While the smoke test exercises the software functionally under minimal load, the load test generates an artificial baseline load. This load is ideally based on realistic sizing, and the tests run on infrastructure that is at least production-like. This is where things can go wrong if the system or application was not designed to be resilient: many operations and many users are emulated in parallel, following predefined patterns (user behavior, interdependent operations, etc.).
A good load testing strategy can be crucial to ensuring that an application survives real-world usage as well as peak loads.
To bring the topic to life, we want to present a simple way to roll out and execute smoke and load tests as deployment acceptance and/or permanent test solutions with a cloud-native Kubernetes solution: the k6.io framework, an open-source tool for performance testing, provides a comfortable and very easy-to-handle starting point. One thing is crucial here: we also want an overview of the test metrics in real time. Without proper visualization, the raw data provides little context and makes quick test evaluation difficult. In our example, we therefore use simple but visually appealing dashboards based on Kibana and Elasticsearch (also open source).
Let's assume we have a web-based document solution called CrocoDoc on Kubernetes, which is internally developed and deployed.
Every day, 200-500 employees access CrocoDoc simultaneously. These users create multiple documents per day with title and date as metadata; the new documents are also updated or retrieved at a later stage.
To verify this scenario with a k6.io load test, the following script is sufficient:
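A sketch of what such a script could look like, assuming a hypothetical REST API under `/documents` and ramping toward the 200-500 concurrent users described above (the base URL, endpoints, and response shape are assumptions for illustration):

```javascript
// Sketch of a k6 load test for the fictitious CrocoDoc scenario.
// BASE_URL and the /documents API are illustrative assumptions.
import http from 'k6/http';
import { check, sleep } from 'k6';

const BASE_URL = 'https://crocodoc.example.com';
const HEADERS = { headers: { 'Content-Type': 'application/json' } };

export const options = {
  stages: [
    { duration: '2m', target: 200 }, // ramp up to the daily baseline
    { duration: '5m', target: 500 }, // push toward peak concurrency
    { duration: '2m', target: 0 },   // ramp down
  ],
};

export default function () {
  // Create a document with title and date as metadata.
  const created = http.post(`${BASE_URL}/documents`, JSON.stringify({
    title: `doc-${__VU}-${__ITER}`,
    date: new Date().toISOString(),
  }), HEADERS);
  check(created, { 'document created': (r) => r.status === 201 });

  const id = created.json('id'); // assumes the API returns an id

  // Update the document at a later stage...
  const updated = http.put(`${BASE_URL}/documents/${id}`, JSON.stringify({
    title: `doc-${__VU}-${__ITER}-updated`,
  }), HEADERS);
  check(updated, { 'document updated': (r) => r.status === 200 });

  // ...and retrieve it again.
  const fetched = http.get(`${BASE_URL}/documents/${id}`);
  check(fetched, { 'document retrieved': (r) => r.status === 200 });

  sleep(1); // think time between user actions
}
```

The `stages` array models the ramp-up and ramp-down of virtual users; `__VU` and `__ITER` are k6 built-ins used here to generate unique document titles.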
Of course, the k6.io framework must be installed locally beforehand, which can be done quickly and easily with the following instructions: https://k6.io/docs/get-started/installation/
Since a real load test must run from different points (i.e., distributed), this local test is of limited use. For a production-grade load test, we can seamlessly adapt the outlined example to a Kubernetes environment, including a clear dashboard for all test runs and scenarios. The instructions for this can be found here - https://github.com/deepshore/k6-testkit-resources
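With the k6-operator deployed in the cluster, such a distributed run can be described declaratively. A sketch of the corresponding custom resource, assuming the load script has been stored in a ConfigMap named `crocodoc-test` (names and parallelism are illustrative; exact fields depend on the operator version):

```yaml
# Sketch of a k6-operator TestRun resource; the ConfigMap name
# and parallelism value are assumptions for this example.
apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: crocodoc-load-test
spec:
  parallelism: 4           # split the load across 4 runner pods
  script:
    configMap:
      name: crocodoc-test  # ConfigMap holding the k6 script
      file: test.js
```

The operator then schedules the runner pods, distributes the virtual users across them, and aggregates the results.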
As a result, we receive a visually prepared and clear evaluation of our distributed k6 load test within the Kubernetes cluster:
With the right tools, you can quickly set up a comprehensive smoke, integration, or permanent load test in a Kubernetes environment. Since everything is stored in manifest form, you can avoid installing complex and potentially expensive testing tools. Another advantage is independence from Software-as-a-Service providers, as the setup is entirely under your control, whether on Azure, GCP, AWS, or on-premises.
All necessary resources for the above description can be found here: Deepshore K6 Testkit Resources