Disclaimer: This is not a technical tutorial on Docker or vSphere Integrated Containers, but rather my views on the philosophy and gradual integration of containers into our existing VMware ecosystem.
Containers will replace VMs… Eventually… Maybe…
I recently presented a session at the Melbourne VMUG called “Can VM’s and Containers happily coexist?”. Though somewhat rhetorical, the title was born out of the protracted argument that containers will somehow surpass VMs in the near future. To condense our session into a single sentence: we covered a brief history of containers, Docker’s rise to fame, and the inherent issues with that rise. Despite the age of container technology, the fresh-faced vendors have yet to prove their worth as a wholesale replacement for virtualisation.
In my first post in this series I described the basic tenets behind Cloud Native Applications, one of which is the 12-Factor App. This framework has arguably become the unofficial guideline for creating applications suitable for a microservices architecture, but it also lends itself perfectly to illustrating why the vast majority of existing monolithic and layered applications are not suitable for decomposition.
It’s also worth bearing in mind that it may be more efficient to build new functionality and services around an existing monolith, a concept Martin Fowler refers to as the “Strangler Application”, aka strangling the monolith. Simply: if it ain’t broke, don’t fix it… just gradually improve it!
Taking both these factors into consideration, it becomes clear that VMs will play their part in existing organisations for some time yet, albeit sharing the limelight with their slimmer, more popular counterparts.
Evolution of the workload…
We’ve all heard the pets vs livestock analogy many times, but a recent focus on microservices and the art of driving economy through mass autoscaling has introduced the ‘Organism’: a computing entity that is minuscule in both footprint and lifespan and has little impact as an individual, but when combined with other organisms, the grouping becomes highly dynamic and resilient. Where can we find these organism-type workloads being used to great effect? Think Google, Facebook et al.
Allow me to digress to make a point. A large part of my role is identifying how to modernise datacenter practices to incorporate cloud technology in whatever format aligns best with a strategic outcome. I only mention this because I believe the marketing is well ahead of the point at which container technology becomes genuinely useful for a business that has come from traditional mode 1 operations.
In my opinion most organisations are still finding the balance between pets and livestock through evolving lifecycle practices, but that doesn’t mean they can’t incorporate the numerous benefits of containers beyond those more commonly associated with environments of large-scale workload churn.
Note: As an aside, I recently watched “Containerization for the Virtualisation Admin”, posted on the Docker blog, which dismissed the misconception that containers only support microservices, something I have long argued against as well. Nice one, guys.
Sneaking containers to operations…
For ops, the most common first encounter with containers is likely to occur when devs request a large Linux/Windows VM that, unbeknownst to the ops team, will eventually become a container host. In almost all cases this means that operations lose visibility of what’s running within the container host(s): not ideal for monitoring, security, performance troubleshooting and so forth.
In this scenario, our devs’ interaction with the rogue Docker hosts may look something like below:
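To make the scenario concrete, here is a hedged sketch of that dev-side workflow. The hostname, port and image names are all hypothetical, and the actual Docker commands are left commented since they require a live daemon:

```shell
# Hypothetical rogue Docker host: a large Linux VM requested from ops,
# then quietly turned into a container host by the dev team.
ROGUE_HOST="bigvm01.example.com"                  # hostname is an assumption
export DOCKER_HOST="tcp://${ROGUE_HOST}:2375"     # devs point their client here

# Everything below would run inside the single VM that ops can see;
# the individual containers are invisible to vCenter-level monitoring:
# docker run -d --name api   myorg/api:1.2
# docker run -d --name cache redis:6
# docker ps -a

echo "Ops sees one VM (${ROGUE_HOST}); devs see many containers inside it."
```

The point is the asymmetry: ops tooling stops at the VM boundary, while everything interesting happens one layer down.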
This approach leaves a lot to be desired from an operational perspective and therefore rules out almost all production scenarios. A better approach is to try to evolve the compute construct to suit existing practices within a business. In VMware’s world, this means treating VMs and containers as one and the same (at least from an evolutionary perspective).
A box within a box!?
As an atomic unit, do developers care if a container runs in a VM, natively on a public cloud or on bare metal? Well… it depends on the developer, but generally the answer is no. As long as they have free range access to a build/hosting environment with the required characteristics to support the application, all is good. For operations this is an entirely different story. So like in any good relationship, we need to compromise…
Arguably, there is an easy way around this issue: create a 1:1 mapping of containers to VMs (I can already hear the container fanboys groaning). Yes, we lose some of the rapid provisioning benefits (going from milliseconds back to seconds) and some of the broader Docker ecosystem features, but we don’t have to forklift an environment that we have spent years (and lots of $$$) refining. Anecdotally, it seems we have spent so long trying to create monster VMs that we have forgotten the power of the hypervisor’s ability to balance and isolate numerous tiny VMs.
Admittedly, for some organisations the bottleneck of provisioning individual VMs is still a very real headache for the development team…
*Fanfare… Enter vSphere Integrated Containers!
vSphere Integrated Containers (aka VIC) provides our devs with a way to work transparently with containers in a vSphere environment, reducing a lot of the friction traditionally caused by operations having to create VMs.
The premise behind VIC is to overlay the container construct onto existing vSphere functionality, while retaining all the characteristics of a container (isolated, lightweight, portable, etc.). This has numerous benefits for operations around resource control/distribution, monitoring and security, using mechanisms that are already well established (and, more importantly, well understood) by network and security teams.
So we can visualise the above using a familiar interface like vCenter: if I run a basic command like “docker run …” from my Docker client against the Docker daemon running on my Virtual Container Host, vCenter launches an Instant Clone forked VM with a single container running inside. From a vCenter perspective we can see the container running in the same vApp where the VCH and Instant Clone template exist.
Note: The version of VIC I used for this screenshot is based on Project Bonneville (detailed here) to show the use of the command “docker ps -a”, which displays both running and exited containers. At the time of writing (0.3.0), the VIC beta (available here) did not support certain Docker commands, including ps. Based on user feedback there have been some changes to the overall architecture to better align with real-world requirements. More to follow soon…
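A minimal sketch of that client-side interaction, assuming a VCH endpoint address (the address is purely illustrative, and TLS setup is omitted; the docker commands are commented as they need a live VCH):

```shell
# Point the standard Docker client at the Virtual Container Host (VCH)
# instead of a local daemon. The endpoint address is an assumption.
VCH_ENDPOINT="vch01.lab.local:2376"
export DOCKER_HOST="tcp://${VCH_ENDPOINT}"

# Each 'docker run' against the VCH results in vCenter spinning up an
# Instant Clone forked VM containing exactly one container:
# docker run -d --name web nginx
# docker ps -a    # lists both running and exited containers

echo "Docker client targeting VCH at ${DOCKER_HOST}"
```

From the dev’s chair this is just Docker; the fact that each container lands as its own VM is invisible to the workflow.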
The result: vSphere admins can enforce control directly from the parent resource pool. We can route, monitor, shape and secure network traffic on the port group assigned to the VCH as the Docker bridge network. We can set CPU/memory shares, reservations and limits to ensure we don’t compromise other workloads… and our devs get access to a Docker environment that operations fully comprehend, with existing operational policies and procedures that can be adapted.
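As one possible sketch of those ops-side controls, using VMware’s govc CLI (the vCenter URL, pool path and limit values are all hypothetical, and the govc calls are commented since they require a live vCenter):

```shell
# Ops-side resource control at the VCH's parent resource pool boundary,
# sketched with VMware's govc CLI. All paths and values are assumptions.
export GOVC_URL="https://vcenter.example.com/sdk"   # hypothetical vCenter

# Cap the whole container environment in one place, without touching
# individual containers:
# govc pool.change -mem.limit 8192 -cpu.limit 4000 /dc1/host/cluster1/Resources/VCH-pool
# govc pool.change -mem.shares high /dc1/host/cluster1/Resources/VCH-pool

echo "Limits applied at the resource pool protect neighbouring workloads."
```

The design point is that the control plane stays where ops already lives: shares, reservations and limits on the pool, rather than per-container knobs.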
Before the container/microservices fanboys get up in arms: this post was not intended to show the use of containers for isolated applications or greenfield projects, but rather the integration of a new construct into an existing VMware enterprise. IMO, traditional organisations value the portability, ubiquity and flexibility of Docker across disparate operational platforms more than rapid provisioning and scaling… and us ops folk need to learn to walk before we can sprint…
In the next post of this series we will start to tackle the challenge of scaling using the same philosophies detailed in this post. See you next time.