
Containerisation done right: reduce infrastructure TCO and product time to market. Part 1

Marcin Jedyk
#kubernetes #Docker #containerisation

Disclaimer: this is my opinionated approach to building scalable, resilient and manageable IT infrastructure. To keep the post concise I had to simplify some concepts a little; I hope the simplified explanations still give a reasonable picture of the thinking behind what I am describing.

To start…

The whole process of containerisation requires certain development/DevOps resources – simply put, it isn’t free! The more microservices you are running or planning to run, the more sense it makes to go ahead with the change. If you are not into microservices yet, it’s never too late to start. Let’s start with some whys.

Why microservices?

If you decide to switch to a microservices architecture, you should clearly understand why. Here are a few reasons for using a microservice-based architecture:

Why containerisation?

Because it beautifully abstracts a service away from the additional libraries it needs to run. Say you have two applications, one written in Python and one in Scala. Each requires a different set of libraries and binaries to be available at runtime. If you add more programming languages into the mix, it quickly becomes a mess to run all of them on development machines, QA, production, etc. Then there is the problem of version incompatibility, and so on and so forth. Suppose, however, that you use Docker as your container engine. Once you package and distribute those applications as Docker images, that effectively becomes a unified interface for running all sorts of things. Life becomes simpler: docker run and there you go (almost). To summarise the ‘why’ of containerisation:
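As a rough sketch of that unified interface – the image names and ports below are hypothetical – both services start in exactly the same way, whatever their language stack:

```shell
# Hypothetical images, already pushed to a registry.
# The interface is identical regardless of what's inside.
docker run -d --name python-api   -p 8080:8080 myorg/python-api:1.0
docker run -d --name scala-worker -p 9090:9090 myorg/scala-worker:1.0
```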

Let’s set off on the journey

OK, with some whys answered, let’s now go on a journey of containerising a software system – step by step.

The Service

Let’s start at the beginning. We have a service which we want to deploy so that it can serve its purpose. The service can be a Java, Scala, NodeJS, PHP or whatever-other-technology application. With a single service and a simple setup (dev and prod) you can manage build and deploy with little fuss, using some subset of Jenkins, Ansible, Puppet, CloudFormation, etc. The problem in this setup is the consistency of execution environments. There may be variations in libraries between your development environment, QA and production – and that’s not a good place to be. You can solve that problem with Puppet or Ansible, but that’s a bit of a headache – you will be burning precious DevOps time, though at least you can run the thing in some sort of consistent manner. It gets messier still when you have a few services to manage.

First container

We can make things a bit easier by packaging the service up into a Docker image, which for modern services is relatively straightforward – and developers are going to love it! Containerisation will give you a few benefits from the start:
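To illustrate how straightforward the packaging can be, here is a minimal Dockerfile for a hypothetical Python service – the base image, file names and entrypoint are assumptions, not a prescription:

```dockerfile
# Official slim Python base image
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code
COPY . .
# Start the service
CMD ["python", "app.py"]
```

Building it is then a single `docker build -t myorg/my-service:1.0 .` – the same command shape for every service, whatever the language.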

Basic infrastructure for containers

So, in order to leverage ‘economies of scale’ for containers, it makes sense to set up some basic infrastructure. You will need:
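At a minimum that usually means an image registry to push to and hosts running the Docker engine. As a sketch, a private registry can be started from the official `registry` image (the port and image name below are just the common defaults):

```shell
# Run a private registry on port 5000 (official registry:2 image)
docker run -d -p 5000:5000 --name registry registry:2

# Tag a local image for that registry and push it
# (the service image name is hypothetical)
docker tag myorg/my-service:1.0 localhost:5000/myorg/my-service:1.0
docker push localhost:5000/myorg/my-service:1.0
```

For production you would more likely use a managed registry (Docker Hub, ECR, GCR, etc.) rather than self-hosting, but the push/pull workflow is the same.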

Adding in more services

Since you now have the container basics covered, it’s time to move on and add more services. Once the first service is complete, it becomes relatively easy to add more. Ideally, you would build and release new containers in exactly the same manner as your first container. Consistency, remember that? Re-use tooling and components to save yourself time and, very importantly, to encourage a wider audience, such as developers, to actively participate in those DevOps-flavoured activities. Ideally, once the first service is containerised, any software engineer within your group should be able – and encouraged – to containerise the next one. The tooling created initially should make it trivial to turn a merged PR into a new Docker image pushed to the registry (provided it passes tests 😉 ). I’m a big fan of Jenkinsfiles and of defining build pipelines as code. Together, Dockerised applications and pipelines-as-code are a great foundation for a well-containerised infrastructure.
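As a sketch of that pipeline-as-code idea, a declarative Jenkinsfile along these lines builds, tests and pushes an image on every merge – the stage names, image name and test script are hypothetical, not a prescribed layout:

```groovy
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                // Tag with the Jenkins build number for traceability
                sh 'docker build -t myorg/my-service:${BUILD_NUMBER} .'
            }
        }
        stage('Test') {
            steps {
                // Run the test suite inside the freshly built image
                sh 'docker run --rm myorg/my-service:${BUILD_NUMBER} ./run-tests.sh'
            }
        }
        stage('Push') {
            // Only publish images built from the main branch
            when { branch 'main' }
            steps {
                sh 'docker push myorg/my-service:${BUILD_NUMBER}'
            }
        }
    }
}
```

Because the Jenkinsfile lives in the service’s repository, containerising the next service is largely a matter of copying this file and a Dockerfile across and adjusting the names.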

To be continued …
