Kubertainers or containernetes? It’s time to go back to the future
Note: This is an article I originally published on Jan. 21st, 2021. I find it is still relevant, and it made sense to republish it in my latest newsletter.
Overview
Once upon a time, there was no Kubernetes. Yes, I know that sounds strange now. Four years ago, my conversations with companies revolved around non-production use cases, developer productivity, and optimizing the value streams to production. Even if you extended your use of Docker containers into production, it was not through a container orchestrator, but rather by deploying the Docker runtime onto your existing production servers and updating your deployment runbooks.
Companies would ask me about the value of Docker containers, how they could benefit their software development processes, and where they fit in their toolchain. The focus was not on runtime characteristics or continuous delivery (or deployment, for that matter), but on how to reduce the friction of hand-offs between environments and ultimately into production, and how to reduce or eliminate configuration drift.
Companies that ignored that phase of the world are now considering containers and viewing Kubernetes as their starting point. These are often infrastructure and operations teams building out a "home" for developers to deploy their containers. Hold the phone! This is a very risky proposition when no one in the company has taken the first steps in container adoption (see below). You do not go from a baby in a crib to running sprints overnight; the same can be said about the maturity steps and the time required to successfully adopt container technologies in your organization.
This is not meant to scare you away from Kubernetes or to overcomplicate its usage, but rather to provide a pathway to get there one day (but only if necessary). I have seen small pockets of development teams become productive and successful at adopting containers in compressed timeframes; however, moving from that scale to enterprise scale is an entirely separate conversation we will save for the future. For now, my goal here is to be explicit on two points:
Kubernetes and containers are inextricably intertwined. You need containers to do Kubernetes, but you do not need Kubernetes to do containers.
Start with the non-production use cases of Docker containers before you start with Kubernetes.
Container Adoption Maturity Stages
Below are the three evolutionary stages of container adoption. Gain confidence and competency at one stage, and pause to consider whether you achieved the benefits of your investments, before proceeding to the next. At any point, you may decide the time and effort are not worth the continued investment, or that you do not have the personnel and skill set to proceed further. You may also be using a platform that only requires the basic stage and automates the processes of the intermediate and advanced stages below.
*Note* the guidelines below are intended for teams that own and manage the lifecycle of the source code for their applications. Containerizing packaged COTS applications is a practice that is becoming more prevalent and is pushed by vendors; however, it is not covered below and will be a discussion for a future post.
Basic: Developer Workflow and Collaboration
Go back in time to four years ago. This stage optimizes the “microprocess” or “inner loop” that makes up the majority of an individual developer’s day-to-day workflow.
Introduce Dockerfiles into your source code repositories. A Dockerfile is used to build your application, its system config, and its dependencies into a container image (see the sketch after this list).
Introduce a container registry for pushing and pulling container images. Docker Hub, Quay, JFrog Artifactory, Sonatype Nexus, and Harbor are popular choices, along with the registries the major cloud providers offer.
Retool developer workstations to include options for building and working with containers. The most common example is Docker Desktop, but there are others, like Captain. There are also plug-ins for VS Code, Eclipse, and IntelliJ.
Upskill developers’ competency in Docker and containers through training and through specific projects and workflows that benefit from using containers.
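To make the first two steps concrete, here is a minimal sketch of this inner loop, assuming a hypothetical Java service, a placeholder image name (registry.example.com/acme/orders), and Docker as the local tooling; your base image, build output, and registry will differ:

    # A minimal Dockerfile (illustrative only; a real one depends on your stack) might contain:
    #   FROM eclipse-temurin:17-jre
    #   COPY target/app.jar /app/app.jar
    #   ENTRYPOINT ["java", "-jar", "/app/app.jar"]

    # Build the image from the Dockerfile in the current directory
    docker build -t registry.example.com/acme/orders:1.0 .
    # Run it locally to verify it starts
    docker run --rm -p 8080:8080 registry.example.com/acme/orders:1.0
    # Authenticate and push to the team's registry so others can pull the same image
    docker login registry.example.com
    docker push registry.example.com/acme/orders:1.0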
Benefits:
Reduced friction in hand-offs between environments and easier sharing of system config between developers. This also lowers the barrier to entry for adopting cloud application platforms.
Cautions:
When you do not have issues with configuration management and your workflow is VM-based, containers have the potential to introduce undue complexity. However, consider that most cloud application platforms use OCI-compliant containers as the universal packaging format for deployment artifacts.
Intermediate: Team Development and Static Environments
This stage optimizes the “macroprocess” or “outer loop” that teams (build managers, release managers, testers, product owners, etc.) are responsible for day-to-day. It injects containers into the processes spanning from source code repositories to deployable artifacts delivered to production.
Introduce the Docker toolchain into continuous integration and production use cases (a sketch follows this list).
Retrofit your build, test, validation, and release environments to host the container runtime and toolchain you have chosen (Docker is one example; Podman is another).
Retrofit your production servers with the container runtime of your choice.
Instrument your applications and infrastructure to work with containers.
Upgrade runbooks, security and observability tooling to work with containers.
Upskill employees through training and other means, and give them hands-on experience on projects that benefit from this stage.
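As a hedged illustration of this outer loop, the shell steps below show what a CI job and a retrofitted production server could run; the image name, commit variable, test command, and ports are placeholders, and your CI tool will wrap these in its own pipeline syntax:

    # CI job steps (shell); podman accepts the same subcommands if that is your chosen runtime
    # $GIT_COMMIT is an example variable supplied by the CI server
    docker build -t registry.example.com/acme/orders:$GIT_COMMIT .
    # Placeholder test step: replace with however your team runs tests against the image
    docker run --rm registry.example.com/acme/orders:$GIT_COMMIT run-tests.sh
    docker push registry.example.com/acme/orders:$GIT_COMMIT

    # On a production server retrofitted with the container runtime
    docker pull registry.example.com/acme/orders:$GIT_COMMIT
    docker run -d --restart=always --name orders -p 8080:8080 registry.example.com/acme/orders:$GIT_COMMIT

The key point is that the exact image built and tested by CI is the artifact pulled into production, which is where the configuration-drift benefits come from.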
Benefits:
Full fidelity of container images from the CI server all the way through to the production runtime. Reduce or eliminate configuration drift across environments, potentially improving troubleshooting and root cause analysis. Further reduce friction in hand-offs between development and operations teams. Potentially improve release velocity.
Cautions:
This is a much larger investment than individual developer machines, and it therefore requires a stronger business case to fend off disappointed stakeholders. Toolchains can be brittle, and users commonly have strong preferences or best-of-breed selections in play. However, most of these toolchains are pluggable and continue to evolve to support container-based workflows. It is easy to get distracted by the number of options here, but stay focused on achieving the right technology and business outcomes and on using tools you already have in-house where possible.
Advanced: Dynamic Environments, Automation, and Continuous Delivery
While the previous two stages are primarily focused on improving developer effectiveness and productivity, this stage is focused on automating tasks that operators perform manually and improving the quality-of-service for applications and services.
At this stage, your company is embracing immutable infrastructure, infrastructure-as-code, deployment automation, and continuous delivery with plans to adopt these practices at a broader enterprise level.
Introduce Kubernetes and continuous delivery for a small subset of applications that can take advantage of the runtime characteristics of Kubernetes and adhere to cloud-native architecture principles (see the sketch after this list).
Leverage a public cloud service when possible for initial testing and PoCs. As you move to production, consider whether you require a hybrid or multi-cloud deployment with clusters spanning multiple clouds and/or on-premises infrastructure. If you do require more control, flexibility, and portability across infrastructure, leverage a commercial distribution (e.g., VMware Tanzu, Red Hat OpenShift, or a Kubernetes partner).
Retrofit, select new, or work with incumbent vendors to retool your infrastructure for managing Kubernetes clusters. The operational tooling burden is eased when using a public cloud service; however, for complex deployments, compliance requirements, and more strategic systems with strong business continuity requirements, a managed cloud service may not be an option.
Upskill employees and work with vendors to climb the Kubernetes learning curve. There are plenty of resources for learning Kubernetes, but harvesting best practices and scaling that knowledge across multiple teams in an enterprise is a major change management challenge. It’s best not to go it alone; work with a partner or vendor to help you through this significant change.
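As a rough sketch of what that first small-footprint deployment might look like on a managed cluster, the commands below use stock kubectl with placeholder names and a hypothetical image; in practice you would likely move quickly to declarative manifests or Helm charts kept under version control:

    # Deploy a small, cloud-native-friendly service to an existing cluster (names are placeholders)
    kubectl create deployment orders --image=registry.example.com/acme/orders:1.0 --replicas=3
    # Expose it inside the cluster and let Kubernetes scale it on CPU utilization
    kubectl expose deployment orders --port=80 --target-port=8080
    kubectl autoscale deployment orders --min=3 --max=10 --cpu-percent=70
    # Watch the rollout complete
    kubectl rollout status deployment/orders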
Benefits:
In addition to the benefits from the basic and intermediate stages, Kubernetes can improve operational capabilities for deployments of distributed applications and services. This includes local high availability, automated deployment, precise service scaling, and automated configuration and policy management at scale. Kubernetes has a vast ecosystem, which means there is plenty of innovation you can take advantage of (a blessing, but also a curse; see the cautions below).
Cautions:
The blast radius of this change impacts a broad and diverse set of users and roles in the organization, so change management is tough to get right and is an ongoing process. For many organizations, adopting Kubernetes is one of many parts of a broader cultural shift underway (e.g., project to product, DevOps, digital transformation). Being successful with Kubernetes at enterprise scale means adopting a culture that is comfortable with automation, at the very least.
Introducing Kubernetes increases the complexity of managing both non-production and production environments. You will have to retool all environments to support Kubernetes and reach environment parity. Examples of tools you will likely adopt include Helm, Minikube, and Prometheus, as well as an entire ecosystem of tools aimed at improving the usability of Kubernetes and at better integrating with or replacing existing operational tooling. The Kubernetes ecosystem is vast, which means access to innovation, but it also brings overlap, inconsistency, immaturity, room for error, and plenty of options to consider and distill.
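To illustrate what retooling a non-production environment can involve, here is a minimal sketch using Minikube for a throwaway local cluster and Helm to install a community Prometheus chart; the chart and release name are examples, not recommendations:

    # Stand up a local, disposable cluster for development and testing
    minikube start
    # Add the community Prometheus chart repository and install a monitoring stack
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install monitoring prometheus-community/kube-prometheus-stack
    # Verify the monitoring components are running
    kubectl get pods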
Do the guidelines above work in every use case?
Of course not; there is not always a smooth, linear progression in container adoption, and there are exceptions to the rules. I have come across companies using Kubernetes in interesting ways that arguably could not be achieved without it and that did not involve the requirements of the first two stages above. That is because Kubernetes is geared toward automation in production use cases with requirements for multi-tenancy, high availability, elasticity, and scaling. In all cases of container and/or Kubernetes adoption, start by solving a pain point or improving a key metric, and tie that solution to a specific business impact and outcome.
My Ask
Thank you for reading this article. I would be very grateful if you completed one or all of the four actions below.
Like this article by using the ♥︎ button at the top or bottom of this page.
Share this article with others.
Share your feedback on this article.
Subscribe to the elastic tier newsletter! *Note* please check your junk mail if you cannot find it.