Intro: OK. So you may have noticed that I have been absent from the blogosphere for a few months (probably not)… In any case, where have I been?
Simply put, I took a break so I could step back, take a glance at the big picture, assess ‘who’s who’ in my areas of interest and return with renewed focus. Now you might think this sounds a little conceited, but I had written one too many blogs that never made it to published status because I wasn’t entirely certain of their content or, more importantly, their relevance. But part of stepping away is stepping back in again… and with that said, let the blogging begin!
Note: This is not a technical tutorial on Docker or vSphere Integrated Containers, rather my views on the philosophy and gradual integration of containers into our existing VMware ecosystem.
Containers will replace VMs… Eventually… Maybe…
I recently presented a session at the Melbourne VMUG called “Can VMs and Containers happily coexist?”. Though somewhat rhetorical, the title was born out of the regular bashing that VMs take from container fanboys. To condense the session into a single sentence: we covered a brief history of containers, Docker’s rise to fame and the inherent issues that came with that rise. Despite the age of container technology, the fresh-faced vendors have yet to prove their worth as a full replacement for virtualisation.
In my first post in this series I described the basic tenets behind Cloud Native Applications, one of which is the 12 Factor App. This framework has arguably become the unofficial guideline for creating applications suitable for a microservices architecture, but it also lends itself perfectly to illustrating why the vast majority of existing monolithic & layered applications are not suitable. With this in mind it’s clear that VMs still have a large part to play in the evolution of application architecture.
Taking the above into account, the rush to try and scale / orchestrate containerised services becomes less relevant than current marketing would have us believe. It’s far more important to have a product that has enough demand to require scaling, than to worry about scale before you have the demand. Of course this doesn’t mean that scaling should be an afterthought, but it shouldn’t be the primary focus.
Evolution of the workload…
We’ve all heard the pets vs livestock analogy many times (we’ve stopped picking on cattle :)), but a recent focus on serverless computing and the economics of mass autoscaling has introduced the ‘Organism’: a computing entity that is minuscule in both footprint and lifespan, has little impact as an individual, but when combined with other organisms forms a grouping that is highly dynamic and resilient. Where can we find these organism-type workloads being used to great effect? Think Google, Facebook et al.
Allow me to digress to make a point. A large part of my role is identifying how to modernise datacenter practices to incorporate cloud technology in whatever format aligns best with a strategic outcome. I only mention this because I believe the marketing is well ahead of the point at which container technology becomes genuinely useful for a business coming from traditional mode 1 operations.
In my opinion most organisations are still finding the balance between pets and livestock through evolved lifecycle practices, but that doesn’t mean they can’t incorporate the numerous benefits of containers other than those more commonly found in a microservices architecture.
Note: As an aside, I recently watched “Containerization for the Virtualisation Admin” posted on the Docker blog where the misconception of containers only supporting microservices was dismissed, something I have long been arguing against. Nice one guys.
Sneaking containers to operations…
For ops, the most common first encounter with containers is likely to occur when Devs request a large Linux/Windows VM that, unbeknownst to the ops team, will eventually become a container host. In almost all cases this means operations lose visibility of what’s running within the container host(s). Not ideal for monitoring, security, performance troubleshooting and so forth.
In this scenario our Devs’ interaction with the rogue Docker hosts may look something like below:
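As a rough sketch of that workflow (hostnames, image names and the exposed daemon port are all hypothetical here), the devs simply point their local Docker client at the VM and go to work:

```shell
# Hypothetical 'rogue host' workflow: devbox01 is a large Linux VM
# requested from ops, on which the devs have installed Docker themselves
# and exposed the daemon on the (insecure) default TCP port.

# Point the local Docker client at the remote VM instead of a local daemon:
export DOCKER_HOST=tcp://devbox01:2375

# Spin up workloads that never appear as objects in vCenter:
docker run -d --name api -p 8080:8080 my-registry/api:latest
docker run -d --name cache redis

# Only the devs can see what is actually running; from vSphere,
# all of this is just one opaque Linux VM:
docker ps
```

Everything inside the VM is invisible to the ops tooling, which is exactly the loss of visibility described above.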
This approach leaves a lot to be desired from an operational perspective and therefore rules out almost all production scenarios. A better approach is to evolve the compute construct to suit existing practices within a business. In VMware’s world, this means treating VMs and containers as one and the same (at least from an evolutionary perspective).
A box within a box!?
As an atomic unit, do developers care if a container runs in a VM, natively on a public cloud or on bare metal? Well… it depends on the developer, but generally the answer is no. As long as they have free range access to a build/hosting environment with the required characteristics to support the application, all is good. For operations this is an entirely different story. So like in any good relationship, we need to compromise…
Arguably, there is an easy way to get around this issue: create a 1:1 mapping of containers to VMs (I can already hear the container fanboys groaning). Yes, we lose some of the rapid provisioning benefits (sub-second container starts become seconds) and some of the broader Docker ecosystem features, but we don’t have to forklift an environment that we have spent years (and lots of $$$) refining. Anecdotally, it seems we have spent so long trying to create monster VMs that we have forgotten the power of the hypervisor’s ability to balance and isolate numerous tiny VMs.
Admittedly, for some organisations having the bottleneck of provisioning individual VM’s is still a very real headache for the development team…
*Fanfare… Enter vSphere Integrated Containers!
vSphere Integrated Containers (aka VIC) provides our Devs with a way to work transparently with containers in a vSphere environment, removing much of the friction traditionally caused by waiting on operations to create VMs.
The premise behind VIC is a single container per VM (aka pico VM, micro VM, just-enough VM), but with all the characteristics of a container (isolated, lightweight, portable, etc.). This has numerous benefits for operations around resource control/distribution, monitoring and security, using mechanisms that are already well established (and, more importantly, well understood) by network and security teams.
So we can visualise the above using a familiar interface: when I run a basic command like <docker run hello-world> from my Docker client against the VCH, vCenter launches a PicoVM with our container running inside. From a vCenter perspective we can see the container running in the same vApp where the VCH and Instant Clone template exist.
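The client-side workflow looks almost identical to the rogue-host scenario, which is the whole point. A sketch of that interaction (the VCH hostname and port here are hypothetical, not values from any particular deployment):

```shell
# Hypothetical VCH endpoint: the Docker client talks to the Virtual
# Container Host exactly as it would to a normal Docker daemon.
export DOCKER_HOST=tcp://vch01.corp.local:2376

# Each container launched this way becomes its own PicoVM,
# visible in vCenter inside the VCH's vApp:
docker run hello-world

# List running and exited containers (supported in the Project
# Bonneville build; not yet in the 0.3.0 VIC beta):
docker ps -a
```

From the dev’s side nothing has changed; from the ops side every container is now a first-class vSphere object.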
Note: The version of VIC I used for this screen shot is based on Project Bonneville (detailed here) to show the use of the command <docker ps -a> which displays both running and exited containers. At the time of writing (0.3.0), the VIC beta (available here) did not support certain docker commands, including ps. Based on user feedback there have been some changes to the overall architecture to better align with real world requirements. More to follow soon…
The result is that vSphere admins can enforce control directly from the parent resource pool. We can route, monitor, shape and secure network traffic on the port group assigned to the VCH as the Docker bridge network. We can set CPU/memory shares, reservations and limits to ensure we don’t compromise other workloads… and our Devs get access to a Docker environment that operations fully comprehend, with existing operational policies and procedures that can be adapted.
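To illustrate the ops side, here is a sketch of applying those resource controls with govc (the vSphere CLI from the govmomi project). The resource pool path and the numbers are entirely hypothetical, and the exact flag names may vary between govc versions:

```shell
# Hypothetical resource pool backing the VCH. Limits are in MHz (CPU)
# and MB (memory) respectively.
POOL=/dc1/host/cluster1/Resources/vch01-pool

# Cap the pool so container sprawl can't starve neighbouring workloads:
govc pool.change -cpu.limit=8000 -mem.limit=16384 "$POOL"

# Guarantee a baseline and weight the pool against its siblings:
govc pool.change -cpu.reservation=2000 -cpu.shares=high "$POOL"
```

Because every container is a VM under this pool, these controls apply to the Docker workloads with no container-specific tooling at all, which is exactly why the existing policies and procedures carry over.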
Before the container/microservices fanboys get up in arms, this post was not intended to show the use of containers for isolated projects or startups, but rather the integration of a new construct into an existing VMware enterprise. IMO, traditional organisations value the portability, ubiquity and flexibility of Docker across disparate operational platforms more than rapid provisioning and scaling… and us Ops folk need to learn to walk before we can sprint…
In the next post of this series we will start to tackle the challenge of scaling using the same philosophies detailed in this post. See you next time.