5 Ways to Supercharge Containers in your Organisation | Software Architecture Series, Part IV
--
This time last week, in The Polyglot Toolkit: Independent Services | Software Architecture Series, Part III, we explored how introducing containers into your technology arsenal can empower a polyglot technology team.
When I first discovered Docker, quite a few years ago, I distinctly remember the feeling of excitement. Like a child with a brand-new toy, I had opened up a world of endless ways to play with it.
Many years later, the same enthusiasm about what Docker brings to my everyday work as a Software Architect remains. DevOps is an important aspect of my job and having a tool that can create perfectly reproducible environments pretty much anywhere is a game-changer for productivity.
Writing the first Docker image for a project can be a rather challenging experience, a trial-and-error process in which you attempt to describe an entire system in a single file.
Once that file builds successfully and you have a fully working Docker container, it becomes a critical asset in a company’s tech arsenal. Every other developer in the company, the CI system, and even the staging and production servers will be based on the same definition. This alone saves hundreds of hours of DevOps time and can make entire classes of problems irrelevant.
Let’s now go through a series of ways you can take containers to the next level in your company’s tech stack.
Boost #1 Multi-Stage Builds
Creating a single Dockerfile with a multi-stage build is one of the best optimisations you can make in terms of image size, reduced attack surface and, in some cases, even performance.
Take, for example, a simple Dockerfile installing a Python (Django) app and a series of dependencies. While a basic approach would be to install pip and then use it to install dependencies, this model alone will leave a large number of binaries in your final image, from the GCC compiler to the many development packages required only to perform installations.
In a multi-stage build, you can use the first image to install all dependencies, then copy the resulting environment into the final image, skipping the build tooling entirely.
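As a rough sketch of that idea (the requirements file, project name and gunicorn entry point here are assumptions, not a prescribed setup), a Django-style multi-stage Dockerfile might look like this:

```dockerfile
# Stage 1: install dependencies with the full toolchain available
FROM python:3.11 AS builder
WORKDIR /app
COPY requirements.txt .
# Build everything into an isolated virtual environment we can copy later
RUN python -m venv /opt/venv && /opt/venv/bin/pip install -r requirements.txt

# Stage 2: a slim runtime image, with no GCC or development packages
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /opt/venv /opt/venv
COPY . .
ENV PATH="/opt/venv/bin:$PATH"
# Assumes gunicorn is listed in requirements.txt and the project is "myproject"
CMD ["gunicorn", "myproject.wsgi"]
```

The final image carries only the virtual environment and your code; the compiler never leaves the builder stage.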
This approach works with most languages, but the most dramatic results come from languages capable of ahead-of-time compilation. Building a Go application as a static binary in a multi-stage Dockerfile can produce a container as small as 15MB, holding only your application code compiled for the target architecture.
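Here is a minimal sketch of that Go setup (the module layout and binary name are assumptions):

```dockerfile
# Stage 1: compile a fully static binary
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
# CGO_ENABLED=0 removes the libc dependency, so the binary stands alone
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: start from an empty image; only the compiled binary ships
FROM scratch
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```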
Boost #2 Treat Containers as Executables
Many engineers mistakenly treat Docker containers as VMs, which is not only incorrect but can also have severe security implications.
The best way to describe VMs vs Containers is to think about them from a different angle. Imagine Containers as apartments and VMs as houses.
A house has its own dedicated plumbing and wiring and generally shares no resources with any other building. An apartment, however, is a compact unit, sharing resources with a large number of neighbours and structurally linked to every other apartment in the complex.
Think about Containers the same way you would think about a binary application running inside your infrastructure. While the experience inside the container itself resembles that of a VM, the underlying isolation is not strong enough to treat containers as secure, self-standing nodes. This is especially the case when running untrusted or unverified code.
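In practice, that means invoking containers like short-lived processes and restricting what they can do. A small sketch (the image name and command are hypothetical):

```sh
# Run the container as a one-shot executable, not a long-lived VM:
# --rm discards the container on exit, while --read-only and
# --cap-drop limit the damage misbehaving code can do.
docker run --rm --read-only --cap-drop=ALL my-image:latest ./process-report
```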
Boost #3 Use a Private Registry
Building Docker images is a tedious process: the more complex the image, the longer it takes to build. Rebuilding the image on every CI run, or on every system, is a complete waste of time.
Instead, you want to rely on a Registry to store versioned final builds, generated either by an engineer making relevant changes or, ideally, by your build system.
When using a Registry, any Docker host only has to pull the relevant image and can immediately run it. There are countless options available, from the official Docker Hub to self-hosted registries, or my favourite, GitHub Packages, which sits right next to your codebase.
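The day-to-day flow looks something like this (the user, organisation and image names are placeholders; GitHub Packages serves container images from ghcr.io):

```sh
# Log in to GitHub Packages' container registry
echo "$GITHUB_TOKEN" | docker login ghcr.io -u my-user --password-stdin

# Tag a locally built image with a version and push it once...
docker tag my-app:latest ghcr.io/my-org/my-app:1.4.0
docker push ghcr.io/my-org/my-app:1.4.0

# ...after which any machine simply pulls and runs, with no build step
docker pull ghcr.io/my-org/my-app:1.4.0
docker run --rm ghcr.io/my-org/my-app:1.4.0
```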
Boost #4 Use Containers everywhere
Now that you have these containers created and working, it’s important to use them absolutely everywhere. Every developer and every system in your infrastructure should be able to pull pre-built images and run the entire application without relying on other tools.
By using Docker this way, you get reproducible environments, where your code executes exactly the same way regardless of the system it runs on.
Taking this approach allows you to create concise, simple runners for Continuous Integration, easy-to-debug staging and production environments, and identical developer workstations.
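A developer workstation, for instance, needs little more than a small Compose file pointing at the pre-built image (the names and port below are assumptions):

```yaml
# docker-compose.yml: a minimal sketch for local development
services:
  web:
    image: ghcr.io/my-org/my-app:1.4.0   # the same image CI and production use
    ports:
      - "8000:8000"
```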
Boost #5 Let the CI do the hard work for you
Once your company has a working Continuous Integration system, use it to build new versions of your images whenever they are needed.
GitHub Actions, for example, can be easily configured for automated builds and pushes of Docker Images when certain events are triggered.
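A workflow along these lines would rebuild and push an image whenever a version tag lands (a sketch, assuming GitHub Packages as the registry; adapt the names to your repository):

```yaml
# .github/workflows/docker.yml
name: Build and push image
on:
  push:
    tags: ["v*"]               # fire whenever a version tag is pushed
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write          # allow pushing to GitHub Packages
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.ref_name }}
```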
Do you want to join one of London’s fastest-growing HealthTech businesses? We are looking for brilliant engineers, fluent in Python and JavaScript. See our open jobs portal.