Using Docker and Kubernetes to Build a Multi-Container Continuous Delivery & Test Pipeline: Part 1

I am a huge, huge fan of containers – from the isolation they provide, to the multitude of open-source tools that the community (and corporations) keep creating around them, steadily improving stability and driving adoption.

It has taken me a while to find the time to do anything meaningful with them, but I will be rapidly expanding my knowledge through 2018 as I embark on re-architecting some of the cloud-based platforms I predominantly work on, as well as some personal software.

I am, admittedly, late to the party – but I genuinely believe this will end up being the de facto way I deploy both my own personal software and that of my current organisation.

Docker

I had previously been working with the Azure preview of App Service on Linux, and had successfully crafted a continuous delivery pipeline for a .NET Core web application using a combination of VSTS Release Management, Docker and Azure Container Registry. The premise was pretty basic: edit code, submit a pull request, and on merge to the branch execute a build which packages the web project into a Docker image and pushes it to an instance of Azure Container Registry, then execute a release to make the App Service re-pull the :latest tagged image.
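For context, the image-build step amounts to little more than a multi-stage Dockerfile along the following lines – the MyWebApp project name is a placeholder rather than the real one, and the base images are the .NET Core 2.0-era ones:

# Build stage: restore and publish using the full SDK image.
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore MyWebApp/MyWebApp.csproj
RUN dotnet publish MyWebApp/MyWebApp.csproj -c Release -o /app

# Runtime stage: copy the published output onto the slimmer runtime image.
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]

The build then tags the resulting image with the registry's login server (something like myregistry.azurecr.io/mywebapp:latest – again, a made-up name) and pushes it, and the release simply prompts the App Service to re-pull that tag.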

And… this all worked pretty well. I got to learn the basics of Docker and how Azure and VSTS like to interact with it, and we had a development site continuously deployed to Azure – one which I could now pull and run locally on my development machine… in exactly the same way. That last part was really cool and sparked a lot of thoughts – I could take the compiled bits and execute exactly the same container, just under my local Docker daemon.
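Pulling that same image down and running it locally is only a couple of commands (the registry and image names here are illustrative):

# Authenticate against the Azure Container Registry instance, pull the latest image,
# then run it locally with the container's port 80 mapped to port 8080 on the host.
az acr login --name myregistry
docker pull myregistry.azurecr.io/mywebapp:latest
docker run -d -p 8080:80 myregistry.azurecr.io/mywebapp:latest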

There were some hurdles along the way (tip: make sure you build the container image using the VSTS Hosted Linux build agent, unless you wish to spend an eternity trying to figure out why your container will not run properly… thank the creator for the Bash console in Kudu).

The benefits of this that I can see, even given my limited time working with containers, are huge.

If you have a development team working in a microservices architecture, and you want to make a change to one service during development which relies on other running services to interact with (and you don’t want to virtualise your HTTP calls…), you can orchestrate the deployment of microservices a, b and c via a call to docker-compose, ensure they are on the same Docker network and can communicate, and hey presto – make your changes, check that microservice d can talk to the container-hosted microservices, and you have a pretty powerful development environment. The sketch below shows the shape of it.
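As a minimal sketch (the service names and images are made up – in practice they would point at images in your own registry), a compose file for three dependent services might look like this:

version: "3"

services:
  service-a:
    image: myregistry.azurecr.io/service-a:latest
    ports:
      - "5001:80"   # published so the service you are developing locally can reach it
  service-b:
    image: myregistry.azurecr.io/service-b:latest
    ports:
      - "5002:80"
  service-c:
    image: myregistry.azurecr.io/service-c:latest
    ports:
      - "5003:80"

# Compose places all three services on a shared default network, so they can also
# resolve one another by service name (http://service-b, http://service-c, and so on).

A single docker-compose up -d brings them all up together; the microservice you are actually working on (d, in this example) can then call them via the published ports, or be added to the same file as a fourth service.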

There is a caveat I found along the way, however. Linux App Services are great at hosting a single image – a single .NET Core web application, or a single service – and you get auto-scaling via Azure out of the box.

Multi-container Deployments

But… what happens when you want to run a multi-container deployment? A .NET Core web application, with a data API, and maybe a ServiceBus queue consumer component too? And what’s more, what if you wish to manage the replication, scaling, recovery, load balancing, etcetera of these individual components?

The recommended approach for splitting application components across containers is one container per concern (most likely one process per container – a container image can spawn multiple processes, but that isn’t generally advised).

You could spin up multiple Linux App Services, and this… would work (I assume – not that I have tried, but I don’t see why it wouldn’t). It would be clunky though: you would need to use deployment slots to provide rolling updates to services, letting Azure’s DNS switching take care of the production slot swap. I have used this approach before for macro-services hosted within Azure, and it works fine… but it feels heavy-handed for containers.

Kubernetes Selenium Grid

So then I started looking into what container orchestration technologies were available and came across Kubernetes – one of the supported orchestration engines within Azure Container Services, and seemingly one of the best offerings out there. I started looking at how I could produce a Selenium Grid k8s cluster, containing containers for the Grid hub and [n] worker nodes, which would be capable of running my UI tests as part of a VSTS Build & Release pipeline.
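The shape of the cluster I have in mind is roughly the following: a Deployment and Service for the hub, plus a scalable Deployment of browser nodes. This is only a sketch based on the public Selenium 3 images (selenium/hub and selenium/node-chrome, which register nodes via the HUB_HOST/HUB_PORT environment variables), not the finished manifests:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
spec:
  replicas: 1
  selector:
    matchLabels:
      app: selenium-hub
  template:
    metadata:
      labels:
        app: selenium-hub
    spec:
      containers:
        - name: selenium-hub
          image: selenium/hub:3.141.59
          ports:
            - containerPort: 4444
---
apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
spec:
  selector:
    app: selenium-hub
  ports:
    - port: 4444
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-chrome
spec:
  replicas: 3                 # the [n] worker nodes – scale up or down as required
  selector:
    matchLabels:
      app: selenium-node-chrome
  template:
    metadata:
      labels:
        app: selenium-node-chrome
    spec:
      containers:
        - name: selenium-node-chrome
          image: selenium/node-chrome:3.141.59
          env:
            - name: HUB_HOST   # nodes register with the hub via the Service name above
              value: selenium-hub
            - name: HUB_PORT
              value: "4444"

The UI tests would then simply point their RemoteWebDriver at the selenium-hub service on port 4444.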

What’s more, I’d want to spin this cluster up as part of the build, use it, and then destroy it afterwards.
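In VSTS terms that should boil down to a script step either side of the UI test task – something along these lines, assuming the manifests above have been saved as a (hypothetical) selenium-grid.yaml:

# Spin the grid up before the UI tests run, and wait for both deployments to be ready...
kubectl apply -f selenium-grid.yaml
kubectl rollout status deployment/selenium-hub
kubectl rollout status deployment/selenium-node-chrome

# ...then tear the whole lot down again once the tests have finished.
kubectl delete -f selenium-grid.yaml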

I am part-way through this journey and will write subsequent articles. The next one will be about setting up a Minikube cluster on macOS, and using it to test out deploying containers the Kubernetes way.
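As a small taste of what that next post will cover, getting a single-node cluster running locally only takes a handful of commands (assuming Homebrew is installed, and a hypervisor such as VirtualBox is available for Minikube to use):

# Install the Kubernetes CLI and Minikube, then start a local single-node cluster.
brew install kubernetes-cli
brew cask install minikube
minikube start

# The cluster is ready once the single 'minikube' node reports as Ready.
kubectl get nodes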

Until next time…

Hello there….

I have recently updated my blog to WordPress on Linux (Azure makes this stuff so easy…) and am determined to post more frequently. Hopefully you will find some of the information on here interesting (maybe some of it useful, even).

My name is Clint – I am a technical solutions architect, currently working in finance (previously in digital media and some public sector work, too).

I get involved in some really interesting challenges day-to-day, both technical and interpersonal, and would like to think I can help others who run into similar issues, so I will post my experiences and, where possible, resolutions to both technical and non-technical conundrums that I have come across!