More Cloud Providers Are Replacing Virtual Machines with Containers - Learn Why
Cloud providers follow the direction of the market, like many other businesses. These days, more and more companies are looking to run their cloud-based applications on containers instead of virtual machines. As a result, cloud service providers have started offering the tools and environments their customers need to run container-based applications on their cloud infrastructure.
All the major cloud providers offer some form of container support. Some examples include:
- Amazon Elastic Container Service (Amazon ECS) on AWS
- Azure Kubernetes Service (AKS) on Microsoft Azure
- Google Kubernetes Engine (GKE) on Google Cloud
- Container Engine for Kubernetes (OKE) on Oracle Cloud
In case you’re wondering, Kubernetes is a popular open-source platform designed for orchestrating containers. There are other similar platforms too, like Mesos and Docker Swarm, but Kubernetes is the de facto standard. Basically, Kubernetes allows you to deploy, scale, and monitor large numbers of containers. We won’t go into the details of Kubernetes here, but we just might talk about it (as well as Mesos and Swarm) in more detail in another post, so stay tuned.
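Just to make ‘deploy and scale’ a little more concrete before we move on, here’s a minimal sketch using the official Kubernetes Python client. The Deployment name (web), the namespace, and the replica count are all placeholders, and it assumes you already have a running cluster and a local kubeconfig.

```python
# Minimal sketch: scaling a hypothetical Deployment named "web" with the
# official Kubernetes Python client. Assumes a reachable cluster and a
# kubeconfig on the local machine.
from kubernetes import client, config

config.load_kube_config()   # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

# Scale the (hypothetical) "web" Deployment to five replicas; Kubernetes then
# schedules the extra container instances across the cluster's nodes.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# List the Deployments in the namespace to confirm the change.
for d in apps.list_namespaced_deployment(namespace="default").items:
    print(d.metadata.name, d.spec.replicas)
```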
Let’s now proceed with our original objective — understanding why VMs are being replaced by containers.
Containers are faster
One of the reasons organisations have migrated to cloud infrastructure is the agility it offers. If they want to run tests, roll out applications to production, or simply scale, they can easily and quickly spin up virtual machines. Containerisation affords them these same capabilities. In fact, containers are even better in this regard because they perform faster both during startup and at runtime.
Containers are faster during startup because, unlike virtual machines, they don’t have to load a kernel. The underlying OS kernel is shared among containers, so it doesn’t have to load each time a containerised application starts up. The time saved becomes even more noticeable when you compare booting up multiple VMs on a single bare metal machine with starting the same number of containers on that same machine.
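If you want to see this startup speed for yourself, here’s a rough sketch using the Docker SDK for Python. It assumes a local Docker engine, the docker Python package, and that the small alpine image (used here purely as an example) has already been pulled.

```python
# Rough sketch: timing how long a throwaway container takes to start, run a
# command, and exit. Assumes a local Docker engine and `pip install docker`.
import time
import docker

client = docker.from_env()

start = time.perf_counter()
# No guest kernel has to boot here; the container is just an isolated process,
# so this typically completes in well under a second (ignoring any image pull).
client.containers.run("alpine", "true", remove=True)
elapsed = time.perf_counter() - start

print(f"Container started, ran, and exited in {elapsed:.2f} seconds")
```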
The absence of a ‘guest OS’ and a hypervisor underneath it also means that containerised applications don’t have additional layers in their execution path at runtime, the way virtualised applications do. Consequently, they don’t suffer the performance penalty that virtualised applications incur compared to applications running on bare metal.
Containers are more efficient
We often talk about how virtual servers (and, consequently, server consolidation) help companies maximise computing resources in cloud infrastructures. Although these capabilities have undoubtedly enabled businesses to achieve exceptionally high levels of efficiency, containerisation has highlighted just how much room for improvement remains in this area.
You can actually squeeze more containers onto a physical server than virtual machines. One reason is that containers, unlike VMs, don’t need to pre-allocate resources. If a container is consuming a certain amount of memory, it’s because it actually needs that memory at that moment. A VM, by comparison, can hold on to a large amount of memory without ever making full use of it.
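As a rough illustration, the sketch below (again using the Docker SDK for Python, with a placeholder 256 MB limit) sets a memory limit on a container and then reads back its actual usage. The limit acts as a ceiling rather than an up-front reservation, which is precisely why containers pack more densely.

```python
# Sketch: a container's memory limit is a ceiling, not a reservation.
# Assumes a local Docker engine and the `docker` Python package.
import docker

client = docker.from_env()

# Cap the container at 256 MB. Unlike the RAM assigned to a VM, this isn't
# carved out up front; the container only consumes what it actually uses.
container = client.containers.run(
    "alpine", "sleep 30", mem_limit="256m", detach=True
)

stats = container.stats(stream=False)  # one-off stats snapshot
usage_mb = stats["memory_stats"]["usage"] / (1024 * 1024)
limit_mb = stats["memory_stats"]["limit"] / (1024 * 1024)
print(f"Using {usage_mb:.1f} MB of a {limit_mb:.0f} MB limit")

container.stop(timeout=1)
container.remove()
```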
Another reason is that, unlike VMs, containers don’t each come with their own kernel occupying a substantial amount of storage space. As we mentioned earlier, containers running on the same physical host typically share the same kernel. So, in a containerised environment, you would typically have only a single kernel per physical host. In a virtualised environment, however, you could have multiple kernels on a physical host, with each one occupying its own space on the underlying storage.
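A quick way to convince yourself of that kernel sharing (on a Linux host, at least) is the sketch below, once more with the Docker SDK for Python: the container reports the same kernel release as the host, because there is no separate guest kernel to boot or store.

```python
# Sketch: containers share the host's kernel. Assumes a Linux host with a
# local Docker engine and the `docker` Python package installed.
import platform
import docker

client = docker.from_env()

# `uname -r` inside an Alpine container reports the host's kernel release,
# because the container has no kernel of its own.
container_kernel = client.containers.run(
    "alpine", "uname -r", remove=True
).decode().strip()

print("Host kernel:     ", platform.release())
print("Container kernel:", container_kernel)
```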
Should you really ditch VMs in favour of containers?
In spite of the advantages of containers over virtual machines, it’s not advisable to replace all your VMs with containers — at least for now. There are still several use cases where a VM would serve as the better solution — or perhaps the only solution.
For example, let’s say you’re developing an application on Docker, but you need to test a feature that requires certain settings on a Windows Server 2012 R2 firewall, because the application is designed to interact with those specific settings and that OS once it’s deployed to your production environment.
Unfortunately, it just so happens that your development environment is a MacBook Pro running macOS High Sierra, so you can’t test that functionality through your container platform alone. Well, one solution would be to find another laptop, desktop, or server, install the specific operating system, install Docker, and then test your application on it. But that’s just too impractical, especially because this exercise might only be useful for this particular feature. Furthermore, what if you don’t have an extra physical machine lying around?
A more practical solution would be to just spin up a Windows Server 2012 R2 VM, install Docker on it, and then run your application off that. If you need to run tests on another operating system, you just spin up another VM for that task (presumably on the same physical host) and once again install Docker and your container-based application. You can even take advantage of snapshots and other VM features that would simplify your development process.
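To sketch how that might look from the macOS side, here’s a hedged example using the Docker SDK for Python. The VM’s address, port, and image tag are placeholders, and it assumes the Docker engine inside the Windows Server VM has been configured to accept remote connections (ideally over TLS rather than the plain TCP shown here).

```python
# Sketch: driving the Docker engine inside the Windows Server VM from the
# macOS host. The address, port, and image tag below are placeholders.
import docker

# Hypothetical address of the Windows Server 2012 R2 VM on the local network.
vm_docker = docker.DockerClient(base_url="tcp://192.168.56.10:2375")

print(vm_docker.version().get("Os"))  # should report the VM's OS, not macOS

# Build the application image inside the VM and run it there, so the
# firewall-dependent feature is exercised against the real Windows settings.
image, _ = vm_docker.images.build(path=".", tag="myapp:test")
output = vm_docker.containers.run("myapp:test", remove=True)
print(output.decode())
```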
This particular example illustrates not only that VMs can be better suited than containers for certain use cases, but also that containers can be used alongside virtual machines to achieve a particular goal.
The moral of the story here is that the shift to containers from VMs is not a foregone conclusion. You need to be flexible enough to consider using a virtual machine or container, or even both, if the situation calls for it.