Using Docker and Kubernetes to Build a Multi-Container Continuous Delivery & Test Pipeline: Part 2

Part 2 – A Kubernetes MiniKube Environment on macOS

I tend to use my MacBook for development these days. Windows is still a good environment, and I still maintain a Win10 VM on my Mac using Parallels, but I find myself using it less and less. With the advent of Rider as a cross-platform IDE, and .NET Core, I spend most of my time in macOS now.

One other benefit of the macOS environment is the ease of setting up Docker, VirtualBox, and MiniKube – the local Kubernetes development environment. Setting it up on Windows 10 using Hyper-V is possible, but non-trivial, and has the side effect that you cannot run the native Docker daemon side by side.

Why MiniKube?

When learning Kubernetes, one of the biggest benefits is the local single-node development environment called MiniKube. It allows you to completely test your k8s deployment scripts and definitions on your Mac, and the same definitions (with a slight bit of tweaking) will work in the same way when deployed to an Azure Container Services cluster (or your own IaaS K8S environment).

Getting going

These instructions are for macOS. Windows 10 guides exist and are not wildly different, but I will go into detail for Windows 10 in a later article.

– Check that virtualisation is enabled

Open a terminal and execute the following command:

sysctl -a | grep machdep.cpu.features | grep VMX

If the command doesn’t error and you get some output – then you are good to go! =]

– Install Prerequisites

Make sure you have Homebrew installed, then:

brew update && brew install kubectl && brew cask install docker minikube virtualbox

What have we just done? We have installed kubectl (the Kubernetes CLI), along with a triumvirate of tools to support running MiniKube locally: Docker, MiniKube, and VirtualBox.

– Verify Installation

We can execute a few commands to ensure that everything installed OK (the terminal will tell you if there were errors anyway). As long as you get good output from the following commands, we should be OK to continue.

docker --version 
minikube version 
kubectl version --client

– Start MiniKube

To start a local MiniKube cluster, which will default to using 2GB of memory, execute the following command and wait for the host VM to download and set up on VirtualBox.

minikube start

This can take time, especially on a slower internet connection or a slightly older MacBook, but once finished you should get a message in the terminal indicating that MiniKube has been set up, and that the kubectl CLI is configured to use the MiniKube context. A Kubernetes single-node cluster is now running inside the VM on your Mac host. Nice =]
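If the 2GB default is too small for what you plan to deploy, minikube accepts resource flags at start-up. A minimal sketch – the values are illustrative, and flag names can vary slightly between minikube releases:

```shell
# Start MiniKube with more resources than the defaults
# (pick values that suit your Mac).
minikube start --vm-driver=virtualbox --memory 4096 --cpus 2

# Stop the cluster without deleting it...
minikube stop

# ...or tear the VM down completely to start fresh.
minikube delete
```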

– Check out K8S

Let's check that we can see the MiniKube node. Execute the following command:

kubectl get nodes

You should see output similar to:

NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     <none>    40s       v1.7.5
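A couple of other standard kubectl sub-commands are handy for poking around the fresh cluster:

```shell
# Where is the API server running?
kubectl cluster-info

# Details of the single minikube node (capacity, conditions, etc.)
kubectl describe node minikube

# The system pods that make up the cluster itself
kubectl get pods --all-namespaces
```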

– Ensure Docker daemon from MiniKube is used

You can start executing docker commands in the terminal now, as Docker itself was installed by brew. However, something that might be confusing at first (especially if you were to deploy a pre-built container into MiniKube straight away) is that when you execute a command such as 'docker ps' to view current containers, you will see the host machine's context – i.e. containers outside of the MiniKube context, which resides inside the VirtualBox VM.

We need to switch to the MiniKube context to view and manipulate containers running inside MiniKube. Execute the following command:

eval $(minikube docker-env)

Now docker commands will use the correct context. Try it out before moving on; you should see some of the MiniKube-related containers running.
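To flip back to your Mac's own Docker daemon later, minikube can emit the unset commands too:

```shell
# Point the docker CLI at the daemon inside the MiniKube VM...
eval $(minikube docker-env)
docker ps    # shows the MiniKube system containers

# ...and point it back at the host daemon when you are done.
eval $(minikube docker-env -u)
docker ps    # shows containers on your Mac again
```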

– Create a local image registry

Kubernetes needs to pull container images from a registry when instructed to do so by your definition files. If your files reference images on Docker Hub or a private registry (and you are publishing there too), this step isn't needed – but for local development we may as well have a local registry, so execute the following command:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
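With the registry container running (inside the MiniKube Docker context), you can tag and push images to it over localhost:5000. A quick sketch – the image name myapp is a hypothetical placeholder:

```shell
# Build an image from the current directory's Dockerfile,
# then tag it for the local registry.
docker build -t myapp .
docker tag myapp localhost:5000/myapp

# Push it; K8S definitions can then reference localhost:5000/myapp.
docker push localhost:5000/myapp
```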


That’s it – if the above has all gone well you should be able to run the following command and load up the MiniKube dashboard:

minikube dashboard

This should open the Kubernetes dashboard in your default browser.


We are now ready to start creating a local multi-container application. I will cover this in the next article in the series.

Have fun =]

Using Docker and Kubernetes to Build a Multi-Container Continuous Delivery & Test Pipeline: Part 1

I am a huge, huge fan of containers: from the isolation they give, to the multitude of open-source tools that the community (and corporations) create to provide ever-expanding stability and drive adoption.

It has taken me a while to get the time to be able to do anything meaningful with them, and I will be rapidly expanding my knowledge through 2018 as I try to embark on re-architecting some of the cloud-based platforms that I predominantly work on, as well as some personal software.

I am, admittedly, late to the party – but I genuinely believe this will end up being the de facto way that I deploy both my own personal software and that of my current organisation.


I had previously been working with the Azure Preview of App Services on Linux, and had successfully crafted a continuous delivery pipeline for a .NET Core web application using a combination of VSTS Release Management, Docker, and Azure Container Registry. The premise was pretty basic: edit code, submit a pull request, and on merge to the branch, execute a build which packages the web project into a Docker image and pushes it to an instance of Azure Container Registry, then execute a release to make the App Service re-pull the :latest tagged image.

And… this all worked pretty well. I got to learn the basics behind Docker and how Azure and VSTS like to interact with it, and we had a development site continuously deployed to Azure – which I could now retrieve and run locally on my development machine… in exactly the same way. That last part was really cool and sparked a lot of thoughts: I could now take the compiled bits and execute exactly the same container, but under my local Docker daemon.

There were some hurdles along the way (tip…. make sure you build the container image using the VSTS Hosted Linux build server, lest you wish to spend an eternity trying to figure out why your container will not run properly… thank the creator for the Bash console in Kudu..)

The benefits of this that I can see, even given my limited time working with containers, are huge.

If you have a development team working in a micro-services architecture and you want to change one service during development – a service which relies on other running services to interact with (and you don't want to virtualise your HTTP calls…) – you can orchestrate the deployment of micro-services A, B, and C via a call to docker-compose, ensure they are on the same Docker network and can communicate, and hey presto: make your changes and ensure micro-service D can talk to the container-hosted other micro-services. That is a pretty powerful development environment.
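The shared-network idea can be sketched with plain docker commands (the service and image names svc-a, svc-b, and svc-d are hypothetical placeholders; docker-compose would generate an equivalent network for you):

```shell
# Create a user-defined bridge network for the services to share.
docker network create dev-net

# Run the dependency services on the shared network...
docker run -d --name svc-a --network dev-net svc-a:latest
docker run -d --name svc-b --network dev-net svc-b:latest

# ...then run the service under development on the same network.
# It can reach the others by container name, e.g. http://svc-a/.
docker run -d --name svc-d --network dev-net svc-d:latest
```

User-defined networks give you DNS resolution by container name, which is what makes this inter-service communication work without hard-coding IPs.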

There is a caveat I found along the way, however. Linux App Services are great at hosting a single image – a single .NET Core web application, or a single service – and you get auto-scaling via Azure out of the box.

Multi-container Deployments

But… what happens when you want to run a multi-container deployment? A .NET Core web application, with a data API, maybe a ServiceBus queue consumer component too? And what's more, what if you wish to manage the replication, scaling, recovery, load balancing, etcetera of these individual components?

The recommended approach to splitting application components across containers is one container per concern (most likely one per process – container images can spawn multiple processes, but this isn't generally advised).

You could spin up multiple Linux App Services, and this… would work (I assume; not that I have tried, but I don't see why it wouldn't). It would be clunky, though, and you would need to use deployment slots to provide rolling updates to services, letting Azure's DNS switching take care of the production slot swapping. Again, I have used this before for macro-services hosted within Azure, and it works fine… but it feels clunky for containers.

Kubernetes Selenium Grid

So I started looking into what container orchestration technologies were available, and came across Kubernetes – one of the supported orchestration engines within Azure Container Services, and seemingly one of the best offerings out there. I started looking into how I could produce a Selenium Grid k8s cluster, containing containers for the Grid hub and [n] worker nodes, which would be capable of running my UI tests as part of a VSTS Build & Release pipeline.

What's more, I'd want to spin this cluster up as part of the build, use it, and then destroy it afterwards.

I am part way through this journey and will write it up in subsequent articles. The next one will be about setting up a MiniKube cluster on macOS, and using it to test deploying containers the Kubernetes way.

Until next time…

Hello there….

I have recently updated my blog to WordPress on Linux (Azure makes this stuff so easy…), and I am determined to post more frequently. Hopefully you will find some of the information on here interesting (maybe some of it even useful).

My name is Clint – I am a technical solutions architect, currently working in finance (previously in digital media and some public sector work, too).

I get involved in some really interesting challenges day-to-day, both technical and interpersonal, and I would like to think I can help others who run into similar issues. So I will post my experiences and, where possible, resolutions to both technical and non-technical conundrums that I have encountered!