We have covered virtualization at length, both in this space and at Linux Academy, in the past. However, we have always focused on hardware virtualization through a hypervisor: full OS emulation of the entire server stack, with device drivers communicating directly within the virtualization layer. Along comes Docker to push virtualization up to the application level. Although similar technologies have existed for some time (since the early 2000s), Docker appears to have arrived at the right time and captured all the buzz. Let’s take a look at some of the Docker basics that we will be covering in this exciting new course! Ready to jump straight to the course?
A Brief Introduction
So let’s talk for a few minutes about what exactly Docker is. Let’s start with an explanation drawn largely from Wikipedia and from the Docker website itself.
Docker is an open-source project that automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating system-level virtualization on Linux.
So, having said that, it’s a collection of tools that package an application and everything it depends on into a “container”, virtualizing the application so it can run on any compatible OS (and by compatible, we mean Docker compatible, which is pretty much anything).
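To make that packaging idea concrete, here is a minimal sketch of a Dockerfile, the recipe Docker uses to build a container image. The base image, file names, and script here are hypothetical examples for illustration, not something from the course:

```dockerfile
# Start from an official Python base image (hypothetical app; any base works)
FROM python:3.12-slim

# Copy the application and its dependency list into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# The command the container runs when started
CMD ["python", "app.py"]
```

You would then build and run it with `docker build -t myapp .` followed by `docker run myapp` — everything the application needs travels inside the image, which is exactly the “package it all up” idea described above.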
What Can I Use It For?
In short, lots of stuff! Here are just a few Use Cases that Docker is a perfect candidate for:
- Configuration Simplification
- Enhance Developer Productivity
- Server Consolidation and Management
- Application Isolation
- Rapid Deployment
- Build Management
Keep in mind that this is just the tip of the iceberg. In our Linux Academy course (if you haven’t already, visit Linux Academy’s website and sign up for an account today), we cover a wealth of other use cases and demonstrate the advantages Docker offers in a typical DevOps corporate structure.
How Is A Container Different From A Virtual Machine?
In short, a virtual machine encompasses the virtualization of the entire server stack – hardware, device drivers and operating system – through something called a ‘hypervisor’. This hypervisor sits directly on top of the hardware and facilitates the communication with one or more operating systems that run on it.
A container sits higher up the technology food chain. You will often hear the word “container” in the same sentence as “application virtualization”. It is effectively a box in which one or more applications and all their dependencies live, completely segregated from the underlying operating system they run on. All files, applications, libraries, and processes targeted to a specific distribution and version are “emulated” within that container regardless of the underlying operating system of the server itself. It’s not a full emulation stack; it’s “just enough for my application to run correctly” emulation. Pretty cool, huh?
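You can see this distinction for yourself if you have Docker installed. A VM boots its own kernel, but every container shares the host’s kernel while keeping its own userland. A quick session sketch using the official `alpine` image:

```
# A container reports the HOST's kernel version, because containers share it
$ docker run --rm alpine uname -r

# ...yet the userland inside is pure Alpine, regardless of the host distribution
$ docker run --rm alpine cat /etc/os-release
```

The first command prints your host’s kernel release, while the second shows Alpine Linux release information — the container brought its own files and libraries, but no kernel of its own.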
A final encouraging word beyond “go sign up and take this course now”: Docker is one of the coolest technological advances to come out in recent memory. It’s not that it is a revolutionary way of solving a problem; it’s more that it is the right solution arriving at a time when the performance of hardware virtualization is almost indistinguishable from bare-metal performance under most circumstances. We are ripe for wringing even more performance out of our technology stack, and so we have moved up the chain to application virtualization. Let’s go along for the ride! Anyone up for process emulation next?
Check out the LinuxAcademy.com docker course!