
Introduction to Containers

Introduction

Containers, along with containerization technologies such as Docker and Kubernetes, have become increasingly common components in many developers’ toolkits. The goal of containerization, at its core, is to offer a better way to create, package, and deploy software across different environments in a predictable and easy-to-manage way.

In this guide, we’ll take a look at what containers are, how they are different from other kinds of virtualization technologies, and what advantages they can offer for your development and operations processes. If you just want a quick overview of some of the core terms associated with containers, feel free to skip ahead to the terminology section.

What Are Containers?

Containers are an operating system virtualization technology used to package applications and their dependencies and run them in isolated environments. They provide a lightweight method of packaging and deploying applications in a standardized way across many different types of infrastructure.

These goals make containers an attractive option for both developers and operations professionals. Containers run consistently on any container-capable host, so developers can test the same software locally that they will later deploy to full production environments. The container format also ensures that the application dependencies are baked into the image itself, simplifying the handoff and release processes. Because the hosts and platforms that run containers are generic, infrastructure management for container-based systems can be standardized.

Containers are created from container images: bundles that represent the system, applications, and environment of the container. Container images act like templates for creating specific containers, and the same image can be used to spawn any number of running containers.

This is similar to how classes and instances work in object-oriented programming: a single class can be used to create any number of instances, just as a single container image can be used to create any number of containers. The analogy also holds for inheritance, since container images can act as the parent for other, more customized container images. Users can download pre-built container images from external sources or build their own images customized to their needs.
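
As a rough sketch of this relationship (assuming Docker is installed and using the public ubuntu image from Docker Hub), the same image can be used to start any number of containers, and a Dockerfile's FROM line declares an existing image as the parent of a new, more customized image:

    # One image can be used to start any number of independent containers
    docker pull ubuntu:22.04
    docker run --rm ubuntu:22.04 echo "first container"
    docker run --rm ubuntu:22.04 echo "second container"

    # Dockerfile for a more customized image that uses ubuntu:22.04 as its parent
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y curl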

What is Docker?

While Linux containers are a somewhat generic technology that can be implemented and managed in a number of different ways, Docker is by far the most common way of building and running containers. Docker is a set of tools that allow users to create container images, push or pull images from external registries, and run and manage containers in many different environments. The surge in the popularity of containers on Linux can be directly attributed to Docker’s efforts following its release in 2013.

The docker command-line tool plays many roles. It runs and manages containers, acting as a process manager for container workloads. It can create new container images by reading and executing commands from a Dockerfile or by taking snapshots of containers that are already running. The command can also interact with Docker Hub, a container image registry, to pull down new container images or to push up local images to save or publish them.
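
To make those roles concrete, a typical workflow with the docker command might look something like the following sketch; the image name my-account/hello-api is a placeholder, and the commands assume a Dockerfile in the current directory:

    # Build an image from the Dockerfile in the current directory
    docker build -t my-account/hello-api:1.0 .

    # Run a container from that image in the background
    docker run -d --name hello-api my-account/hello-api:1.0

    # Pull an image from Docker Hub, or push a local image to it
    docker pull nginx:latest
    docker push my-account/hello-api:1.0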

While Docker provides only one of many implementations of containers on Linux, it has the distinction of being the most common entry point into the world of containers and the most commonly deployed solution. While open standards have been developed for containers to ensure interoperability, most container-related platforms and tools treat Docker as their main target when testing and releasing software. Docker may not always be the most performant solution for a given environment, but it’s likely to be one of the most well-tested options.

Practically speaking, while there are alternatives for containers on Linux, it usually makes sense to learn Docker first because of its ubiquity and its influence on the terminology, standards, and tooling of the ecosystem.

Virtual Machines vs Containers

Virtual machines, or VMs, are a hardware virtualization technology that allows you to fully virtualize the hardware and resources of a computer. A separate guest operating system manages the virtual machine, completely separate from the OS running on the host system. On the host system, a piece of software called a hypervisor is responsible for starting, stopping, and managing the virtual machines.

Because VMs are operated as completely distinct computers that, under normal operating conditions, cannot affect the host system or other VMs, virtual machines offer great isolation and security. However, they do have their drawbacks. For instance, virtualizing an entire computer requires VMs to use a significant amount of resources. Since the virtual machine is operated by a complete guest operating system, the virtual machine provisioning and boot times can be fairly slow. Likewise, since the VM operates as an independent machine, administrators often need to adopt infrastructure-like management tools and processes to update and run the individual environments.

In general, virtual machines let you subdivide a machine’s resources into smaller, individual computers, but the end result doesn’t differ significantly from managing a fleet of physical computers. The fleet membership expands and the responsibility of each host might become more focused, but the tools, strategies, and processes you employ and the capabilities of your system probably won’t noticeably change.

Containers take a different approach. Rather than virtualizing the entire computer, containers virtualize the operating system directly. They run as specialized processes managed by the host operating system’s kernel, but with a constrained and heavily manipulated view of the system’s processes, resources, and environment. Containers are unaware that they exist on a shared system and operate as if they were in full control of the computer.
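
One quick way to see this constrained view, assuming Docker and the public alpine image, is to list the processes visible inside a container; typically only the container’s own process shows up, not everything running on the host:

    # The container sees only its own processes, not the host's
    docker run --rm alpine ps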

Rather than treating containers as full computers, as is the case with virtual machines, it is more common to manage containers like individual applications. For instance, while you can bundle an SSH server into a container, this isn’t a recommended pattern. Instead, debugging is generally performed through a logging interface, updates are applied by rolling out new images, and service management is de-emphasized in favor of managing the entire container.
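
As a small sketch of that application-style workflow (the hello-api container and image names are hypothetical, carried over from the earlier example), debugging and updates go through the container as a whole rather than through tools running inside it:

    # Read output through the logging interface instead of logging in over SSH
    docker logs hello-api

    # "Update" by building a new image version and replacing the container
    docker build -t my-account/hello-api:1.1 .
    docker stop hello-api && docker rm hello-api
    docker run -d --name hello-api my-account/hello-api:1.1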

These characteristics mean that containers occupy a space that sits somewhere in between the strong isolation of virtual machines and the native management of conventional processes. Containers offer compartmentalization and process-focused virtualization, which provide a good balance of confinement, flexibility, and speed.

Conclusion

Containers are not a magic bullet, but they do offer some attractive advantages over running software on bare metal or using other virtualization technologies. By providing lightweight, functional isolation and developing a rich ecosystem of tools to help manage complexity, containers offer great flexibility and control both during development and throughout their operational life cycle.

Written by Amlan-Mukherjee
