

At IMQS Software we constantly look for innovative ways to accelerate digital transformation.

Our ultimate goal is to improve the quality of the software we deliver to our clients.

Current software platforms are constrained by having multiple infrastructure and technology stacks to maintain. This leads to fragile and complex deployment processes, with many interdependencies, which are hard to replicate, test and support.

An unprecedented number of new technology solutions promise to bring real change to business. One of these is containerised software development.


The emergence of containers in the 1950s revolutionised cargo shipping. The result was an exponential deepening of economic globalisation across the world. But what was so special about this “soulless aluminium or steel box held together with welds and rivets, with a wooden floor and two enormous doors at one end”?

As noted by Marc Levinson, the value of the object does not lie in what it is, but how it came to be used. According to Levinson, the container came to lie at the core of a “highly automated system for moving goods from anywhere to anywhere, with a minimum of cost and complication”. The container made shipping cheap, organised and standardised.

Fast-forward to around 2003 and containerisation once again started a revolution, this time in the software industry. It was, however, only with the launch of Docker in 2013 that software containers were brought to the masses.

Application containerisation, much like shipping containerisation, overcomes the challenges of transporting software from one computing environment to another. Transport, in this sense, could include moving from:

1) A developer's laptop to a test environment

2) A staging environment to a production environment

3) A physical machine in a data centre to a virtual machine in a private or public cloud

As explained by Docker creator, Solomon Hykes:

You're going to test using Python 2.7, and then it's going to run on Python 3 in production and something weird will happen. Or you'll rely on the behavior of a certain version of an SSL library and another one will be installed. You'll run your tests on Debian and production is on Red Hat and all sorts of weird things happen ... the network topology might be different, or the security policies and storage might be different but the software has to run on it.

Containers are a lightweight approach to solving this problem by packaging the entire runtime environment of an application – libraries, code and configuration files needed for it to run – into a “standardised unit for development, shipment and deployment”. Differences in operating system (OS) distributions and underlying infrastructure are abstracted away, and only the bare minimum required for an application to run and function is included.
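This packaging is typically described declaratively in a Dockerfile. As a minimal sketch — assuming a hypothetical Python service consisting of an app.py and a requirements.txt — it might look like:

```dockerfile
# Start from a slim base image rather than a full OS distribution:
# only the bare minimum the application needs is included
FROM python:3.12-slim

WORKDIR /app

# Package the application's libraries ...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ... its code ...
COPY app.py .

# ... and its runtime configuration into one standardised unit
ENV APP_ENV=production
CMD ["python", "app.py"]
```

The same image, built once, then runs unchanged on a laptop, a test server or a cloud host.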


Containers are often compared to virtual machines (VMs).

With a VM, a guest OS such as Linux or Windows runs on top of a host OS with virtualised access to the underlying hardware. Like VMs, containers allow you to package your application together with libraries and other dependencies, providing isolated environments for running your software services.

However, instead of virtualising the hardware stack as with VMs, containers virtualise at the OS level. Multiple containers run directly on top of the OS kernel. In this sense, containers offer a far more lightweight unit for developers and IT Ops teams to work with, carrying a myriad of benefits:

1. Lightweight & Resource Efficient

Application containers suit micro-services and distributed applications because they share the host's resources rather than each requiring a full OS to underpin the app, and so consume far less per service.

Each micro-service communicates with the others through application programming interfaces (APIs). The container virtualisation layer can scale individual micro-services up to meet rising demand for an application component, and distribute the load across instances.

A container may be only tens of megabytes in size, whereas a VM with its own entire OS may be several gigabytes. Because of this, a single server can host far more containers than VMs.

2. Modularity

Rather than run an entire complex application inside a single container, the application can be split into modules – a.k.a. the micro-services approach.

Applications built in this way are easier to manage because each module is relatively simple, and changes can be made to modules without one having to rebuild the entire application.

Because containers are so lightweight, individual modules (or micro-services) can be instantiated almost immediately only when they are needed.
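Such a split is often declared in a docker-compose.yml, with one container per module. A sketch, using hypothetical service names:

```yaml
# Each module of the application runs as its own lightweight container
services:
  api:                  # hypothetical REST front-end module
    build: ./api
    ports:
      - "8080:8080"
    depends_on:
      - db
  worker:               # hypothetical background-processing module
    build: ./worker
    depends_on:
      - db
  db:
    image: postgres:16  # off-the-shelf image for the data store
```

A single module can then be rebuilt and restarted on its own (`docker compose up -d --build api`), or scaled independently of the rest (`docker compose up -d --scale worker=3`), without rebuilding the entire application.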

3. Just in Time Instantiation

As noted above, another major benefit is that containerised applications can be started almost instantly. Containers can, therefore, be instantiated when they are needed and disappear when they are no longer required. This frees up resources on their hosts.

4. Flexibility

If a developer needs a variation on the standard container, she can create a container that holds only the new library. To update an application, the developer changes the code in the container image, then redeploys that image to run on the host OS.
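In practice that update cycle is short. A sketch of the workflow, assuming Docker is installed and an image and container both named `myapp` (hypothetical):

```shell
# Rebuild the image after changing the code (uses the local Dockerfile)
docker build -t myapp:1.1 .

# Replace the running container with one based on the new image
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 8080:8080 myapp:1.1
```

Because only the changed image layers are rebuilt, the build and redeploy typically take seconds rather than minutes.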

5. Portability

Throughout the application lifecycle – from building code through testing and production – the file systems, binaries and other information stay the same.

6. Security

Containers isolate applications from one another and from the underlying infrastructure. Application issues are therefore limited to a single container instead of the entire machine.


Docker is the company driving the container movement and the preferred containerisation platform at IMQS Software. It provides an integrated, tested and certified platform for apps running on enterprise Linux or Windows, as well as on cloud providers.

Docker containers that run on a single machine share that machine's OS kernel, and images are constructed from file-system layers that share common files. This minimises disk usage, and image downloads are much faster. These containers are based on open standards and run on all major Linux distributions, Microsoft Windows, and on any infrastructure including VMs, bare-metal and in the cloud.

The Software Engineering division at IMQS Software is actively working on incorporating this exciting technology to benefit its clients in the near future. To keep up to date with IMQS developments, why not sign up for our newsletter or check out our YouTube channel?