Docker provides easy-to-use, lightweight containers that run processes mostly isolated from the host operating system.

Pieces of the underlying technology have been around for quite a while, and may sound more familiar as virtual private servers (VPS), jails, Linux Containers (LXC), or chroots.

Unlike virtual machines, Docker does not require a separate operating system instance. Instead, it relies on Linux kernel support for namespaces and cgroups, so that the processes running within a container get an isolated view of the operating system.

Docker specifically facilitates the creation and distribution of images, the deployment of containers based on those images, and various networking and storage aspects.
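As a sketch of that image-and-container workflow, a minimal Dockerfile might look like the following. All names here (the base image tag, the script name, the `hello` image tag) are illustrative assumptions, not taken from this document:

```dockerfile
# Start from a small, illustrative base image.
FROM alpine:3.19

# Copy a hypothetical script into the image and make it executable.
COPY hello.sh /usr/local/bin/hello.sh
RUN chmod +x /usr/local/bin/hello.sh

# Default command a container runs when started from this image.
CMD ["/usr/local/bin/hello.sh"]
```

The image would then be built and run with commands along the lines of `docker build -t hello .` followed by `docker run --rm hello`; the `--rm` flag discards the container after it exits, matching the short-lived container model described below.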

Containers started from these distributed images tend to be relatively short-lived, which brings a new dimension to system life-cycle management, and great opportunities for fast delivery and deployment of turn-key solutions.

What does this mean?

It has an effect on several aspects of software delivery and operations:

  • Micro-services (“lab mice”) rather than pets or cattle, with:
    • scalability per unit in a role,
    • looser coupling between components,
    • rolling deployment methodologies.
  • Avoiding “works on my system” syndrome, since images carry their dependencies with them.
  • Agility and speed of delivery, including:
    • Delivery within a single sprint, by retrospective time, in the Scrum agile methodology.
    • Continuous Integration.

Continue with our Getting Started chapter.