VMWorld 2017 Recap pt2: Cloud Native, DevOps & Automation Session(s)

Aside from my focus on VMware Cloud on AWS I’ve also been spending my time getting to grips with all the various technologies supporting containerisation and DevOps. If you’ve followed my blog in the past you’ll know that I’ve been delving into modern software engineering practices, albeit from an operational perspective.

Luckily for me VMware has come a long way in creating technology that supports this theoretical understanding, which was highly evident at VMWorld 2017. There was a shedload of great sessions on Pivotal Cloud Foundry, Pivotal Container Service, vSphere Integrated Containers, vRealize Automation, Wavefront (and on and on and on…)

As I received some great feedback from my last post which consolidated some of the VMC focused sessions from VMWorld, I decided to repeat the process for VMware Cloud Native and DevOps.

Please note, I’m also going to use this as a starting point to cover Pivotal Container Service (PKS) and vSphere Integrated Containers (VIC) in more detail, before breaking it down into easily digestible chunks. In the meantime enjoy the extensive selection of CNA, DevOps & automation sessions from VMWorld 2017.


Cloud Native

DEV1369BU - A Tale of CI/CD, Infrastructure as Code, and How Containers/IaaS Fit into the Story

CNA3429BU - Basics of Kubernetes on BOSH: Run Production-grade Kubernetes on the SDDC

CNA2006BU - Deep Dive: Architecting Container Services with VMware and Pivotal Developer Ready Infrastructure

CNA2080BU - Deep Dive: How to Deploy and Operationalize Kubernetes

DEV2704BU - Delivering Infrastructure as Code: Practical Tips and Advice

CNA1509BU - Developer-Ready Infrastructure from VMware & Pivotal

DEV1517BU - DevOps Deep Dive

CNA2563BU - Navigating Through the Container Ecosystem

CNA1091BU - One-Stop Container Networking: Cloud Foundry, Kubernetes, Docker, and More

CNA2150BU - Optimizing Critical Banking Workloads Using vSphere Integrated Containers

CNA1699BU - Running Docker on Your Existing Infrastructure with vSphere Integrated Containers

CNA1612BU - Use Cases: Deploying real-world workloads on Kubernetes and Pivotal Cloud Foundry

CNA2547BU - vSphere Integrated Containers Deep Dive: Cool Hacks, Debugging, and Demos

CNA3045BU - What's New with Containers on SDDC

CNA3430BU - Your Enterprise Cloud-Native App Platform: An Introduction to Pivotal Cloud Foundry

FUT3076BU - Simplifying Your Open-Source Cloud With VMware

FUT1226BU - VMware and Open Source: Compliance, Quality, and Viability
 


DevOps & Automation

DEV2858BU - The Shift to the Left: The Changing Role of Operations as Developers in a DevOps World

VIRT2211BU - Automating NSX for Virtual Machines and Containerized Applications

MGT1307BU - vRealize Automation and Puppet: Enabling DevOps-Ready IT

MGT2716BU - vRealize Automation for the Developer Cloud

MGT1776BU - vRealize Automation Solves the Container Onboarding Conundrum

 

Thanks as always to @lamw for his annual session scraping. URLs for all uploaded sessions can be found here:

https://github.com/lamw/vmworld2017-session-urls/blob/master/vmworld-us-playback-urls.md


VMWorld 2017 Recap – VMware Cloud on AWS Session(s)

Another VMWorld done… and now for the weeks of catching up on recorded deep-dive goodness. There is plenty of detailed information to finally clear up the speculation and FUD that has been circulating on VMware Cloud on AWS over the last 12 months.

For your convenience I have collected available VMC sessions for easy consumption. I’ll follow up this post with my top 10 once I have trawled through all the below.

Enjoy!


LHC3376BUS - AWS Native Services Integration with VMware Cloud on AWS: Technical Deep Dive

LHC1547BU - Creating Your VMware Cloud on AWS Data Center: VMware Cloud on AWS Fundamentals

LHC3345BUS - Enabling the dynamic hybrid cloud environment Powered by VMware Software Defined Data Center and VMware Cloud on AWS

MMC3066BU - How Do You Use Network Insights' SaaS to Secure Multitier Hybrid Apps Running on vSphere, VMware Cloud on AWS, and AWS Native?

LHC2281BU - Intriguing Integrations with VMware Cloud on AWS, EC2, S3, Lambda, and More

MMC2820BU - Live Demo: 3 Best Practices for Deploying, Managing and Securing AWS EC2 Apps with VMware Cloud Services

LHC2103BU - NSX and VMware Cloud on AWS: Deep Dive

LHC2105BU - NSX and VMware Cloud on AWS: The Path to Hybrid Cloud

MMC2455BU - On-Demand Disaster Recovery for Enterprise Applications with the VMware Cloud on AWS

LHC1539BU - Paving the Way to the Hybrid Cloud with VMware Cloud Service Providers and vCloud Availability

LHC1882BU - Service Overview for VMware Cloud on AWS

LHC2386BU - True Costs Savings - Modeling and Costing A Migration to VMware Cloud on AWS

LHC1910BU - Using vRealize with VMware Cloud on AWS

LHC1748BU - VMware Cloud for AWS and the Art of Software-Defined Data Centers: API, CLI, and PowerShell

LHC1755BU - VMware Cloud for AWS Storage and Availability: Keeping Your Bits Safe for Humanity

LHC3174BU - VMware Cloud on AWS: An Architectural and Operational Deep Dive


LHC2384BU - VMware Cloud on AWS: A Technical Deep Dive

LHC3375BUS - VMware Cloud on AWS Hybrid Cloud Architectural Deep Dive: Networking and Storage Best Practices

LHC3175BU - VMware Cloud on AWS Partner Solutions Showcase

LHC3371BUS - VMware Cloud on AWS: The Painless Path to Hybrid Cloud

LHC2651BUS - Work Load Mobility & Resiliency for the New VMware Cloud on AWS

MGT2875BU - Manage, Govern, and Extend VMware Cloud on AWS with vRealize Automation

STO3194BU - Protecting Virtual Machines in VMware Cloud on AWS

STO1498BU - Tech Preview: Disaster Recovery with VMware Cloud on AWS

STO1890BU - VMware Cloud on AWS: Storage Deep Dive

LHC3016PU - VMware Cloud on AWS: A View of the World from Our Customers

 

Thanks as always to @lamw for his annual session scraping. URLs for all uploaded sessions can be found here:

https://github.com/lamw/vmworld2017-session-urls/blob/master/vmworld-us-playback-urls.md

Cloud Native Apps for the Ops Guy – 3 Containerised Tools for the VMware Engineer…

Over the last year (give or take a few months), VMware has been diligently tweaking a variety of its products to integrate container functionality as it becomes more prevalent in the enterprise. With this in mind, I thought I’d put together a quick post detailing three VMware tools which can be used in a simple containerised format.

Update: The below tools are intended to be run as single Docker commands rather than launching a terminal session on the container as you would with William Lam’s much more comprehensive vmware-utils Docker appliance. If @lamw‘s approach is more your bag, you can read about it here.

OVFTool v4.2

For me, OVFTool is a great CLI utility for migrating VM templates and ISOs to & from vCloud Air, although its functionality extends way beyond VCA. As a little side project I thought it would be a great idea to containerise the most recent release (v4.2), instead of installing it on my Mac and dealing with potential conflicts. To my delight this was a relatively easy task and took less than 10 mins to build, commit and push to my public repo.

Disclaimer: This image is hosted on my public Docker Hub registry, however it is not officially endorsed or supported by VMware in any way. That said, please feel free to use (but at your own risk).

To use, simply enter the below Docker command which will allow you to interactively (-i) run the skinnypin/ovftool image and execute an OVFTool command, in this case ovftool --help.

~> docker run -i skinnypin/ovftool ovftool --help
[Screenshot: example output]
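
Beyond --help, the same image can run real OVFTool operations. The sketch below is purely illustrative: the vCenter address and inventory path are hypothetical, OVFTool will prompt for the password, and the current directory is mounted into the container at /work so the exported OVF package lands on the host rather than inside the throwaway container.

# Export a VM to an OVF package written to the mounted host directory
~> docker run -i -v $(pwd):/work skinnypin/ovftool \
     ovftool vi://administrator@vcenter.example.com/DC01/vm/MyVM /work/MyVM.ovf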

PowerCLI Core

PowerCLI Core builds upon the open source Microsoft PowerShell Core and .NET Core, enabling the use of PowerCLI on non-Windows operating systems. As a Mac user, having to open up a Windows VM in VMware Fusion just to use PowerShell has been a little inconvenient. But no more…

In addition to availability on OSX and Linux, the awesomeness of PowerCLI Core can also be accessed via an official VMware Docker image. For more info on PowerCLI Core see here.

To use, enter the below Docker command which will give you interactive (-i) access to the PowerCLI prompt.

~> docker run -i vmware/vmwarepowercli
[Screenshot: example output]
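
Once you’re at the PowerCLI prompt inside the container, cmdlets behave just as they do on Windows. A quick sketch against a hypothetical vCenter (the address is a placeholder and Connect-VIServer will prompt for credentials):

PS> Connect-VIServer -Server vcenter.example.com
PS> Get-VM | Select-Object Name, PowerState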

Project Platypus

Project Platypus is a very nice tool built by my good friend Grant Orchard (and other VMware folks) which details supported VMware product APIs and their usage. If you’ve ever tried to utilise VMware APIs by referencing official documentation, you understand why this tool is absolutely necessary. To the best of my knowledge Platypus is only available in this containerised format, so if you want the goodness you’re going to have to get familiar with Docker…

Details available on Github here.

To use, enter the below Docker command which will run a detached (-d) container which is accessible from your web browser on port 8080 (-p 8080:80) using the IP address of your container host.

~> docker run -d -p 8080:80 vmware/platypus
[Screenshots: example output and the VMware Platypus web UI]

So there you have it. Three easily accessible VMware tools that can be distributed without having to read any installation documentation (as long as you have access to a Docker environment). As always, feedback is appreciated, especially if this is useful and you want to see other tools available in this format.

Enjoy!

Update 2 > BONUS TOOL: I also spent some time Dockerizing VIC Machine (v0.8.0-rc3), the container host provisioning utility used with vSphere Integrated Containers. Details on VIC here.

[Screenshot: example output]

To use, simply enter the below Docker command which will allow you to interactively (-i) run the skinnypin/vic image and execute a VIC Machine command, in this case vic-machine-linux --help.

~> docker run -i skinnypin/vic /vic-machine-linux --help
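
The same pattern works for any other vic-machine subcommand. For example, to see the options for creating a Virtual Container Host (exact flags vary between early VIC builds, so check the help output of the version baked into the image):

~> docker run -i skinnypin/vic /vic-machine-linux create --help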

 

 

Author: @Kev_McCloud

VMware Cloud(s) Dissected – A VMware Public Cloud Platform Comparison…

Intro: So you may have been following my VCA Dissected series, but in line with the recent expansion of VMware Cloud Services my role as a Cloud Specialist has diversified to include all things VMware & Cloud. With that in mind, a series name change is in order… So VCA Dissected becomes VMware Cloud(s) Dissected.

All of the (VMware) Clouds…

Holy moly, it’s been a crazy few months on the road with VMWorld! So many game changing announcements delivered through keynotes, breakout sessions and group discussions. In addition to general announcements on vSphere 6.5, EUC and Cloud Native Apps, we were also introduced to several new VMware public cloud offerings and associated services. For the purposes of clarity, I’m going to give a high-level break down of each platform within VMware’s Cross Cloud Architecture (not including Cross Cloud Services) to try and illustrate where each will be most effective.

First things first. If you haven’t watched the day one keynote from VMWorld Europe, I highly recommend you do so… (click on the image to view the recording; if you’re not interested in the reasoning behind the vision, skip to about 30 minutes in).

[Image: link to the VMWorld Europe day one keynote recording]

To summarize, P.G. talked through his predictions for cloud consumption trends in the near (and not so near) future, which set the stage to announce VMware Cross-Cloud Architecture: a set of converged software services incorporating major partnerships with leaders in hyper-scale cloud. So let’s dig a little deeper.

Note: There were plenty of disclaimers and forward-looking statements on tech previews in the VMWorld presentations, public FAQs, demos and press releases, so please understand that anything I mention here is subject to change as more information is released.

VMware Cloud (VMC) on Amazon Web Services

Boom, the cat’s finally out of the bag. As many of the talking heads have pointed out this is about as significant as any cloud partnership could be. Here are some of the highlights I have chosen from the recent VMWorld VMC sessions.


The big stuff…

  • The VMware SDDC stack (vSphere 6.5, VSAN & NSX) available within AWS data centers, on AWS infrastructure dedicated to this service.
  • VMC procurement, provisioning and lifecycle is via the VMC customer portal.
  • VMC upgrades, maintenance and billing are exclusively managed by VMware.
  • Non-VMC services are still billed and managed by AWS directly.
  • VMC can be consumed as a standalone platform on AWS, as a hybrid cloud through vCenter Enhanced Linked Mode, or (in the future) cloud-to-cloud between AWS regions/availability zones through the same mechanism.
  • Continuous upgrades of the SDDC components (including vCenter) on AWS will be scheduled and executed by VMware.
  • Billed by the hour, or procured for a reduced price over 12 or 36 months in a similar commercial model to AWS reserved instances. Customers will also be able to leverage their existing investments in VMware licenses through VMware customer loyalty programs.
  • Availability mid-2017.

The technical stuff…

  • Initial deployment of between 4 and 64 hosts, which can be scaled through a manual process or by;
  • Elastic Distributed Resource Scheduler (Elastic DRS) which dynamically adds and removes physical hosts based on predefined EDRS rules.
  • Enhanced Linked Mode enables inventory management, content library synchronization, etc. of AWS VMC hosts from on-prem vCenter.
  • Each tenancy uses the AWS VPC construct for logical isolation.
  • Edge/perimeter services are provided by NSX Edge Services Gateway, not AWS VPC network services.
  • Full VMC integration with AWS Direct Connect.
  • VMC and AWS user accounts are linked, but separate interfaces and authentication is required for services unique to each vendor.
  • Administrators have direct access to vCenter UI and REST APIs.
  • VMware-defined RBAC limits the installation of untested third-party software with custom VIBs.

The value…

  • Simply put, industry leading SDDC platform on an industry leading hyper-scale public cloud. Truly the best of both worlds.
  • The ability to easily integrate and extend our VMware IaaS platform to incorporate AWS storage, data, application and automation specific services.
  • Intra region/availability zone efficiency through low latency connectivity to AWS services, avoiding costs incurred when data and network traffic leaves the AWS region.
  • Zero downtime workload migration to VMC-on-AWS through Cross vCenter Server vMotion.
  • Maintenance and upgrade of SDDC platform components managed entirely by VMware.

There’s not a whole bunch of detailed information on VMC right now as it’s early days, but Frank Denneman’s blog and the AWS blog are good places to start. Note, during the ‘Closer Look’ VMWorld breakout session it was also acknowledged that a number of announcements are still to be revealed at AWS re:Invent at the end of November.

VMware Cloud Foundation (VCF) on IBM Softlayer

VMware Cloud Foundation is the same SDDC stack (vSphere, VSAN, NSX) as VMC, but with VMware SDDC Manager as the overlay software which handles platform deployment, configuration and ongoing SDDC lifecycle tasks for specific use cases. What makes VCF different from VMC (other than the obvious partnerships) is that Cloud Foundation can be deployed privately within our own datacenters in addition to public cloud.

The global partnership with IBM was announced at VMWorld Las Vegas, and IBM will be the first global cloud service provider to offer Cloud Foundation. vCloud Air will also join IBM in the near future, in addition to numerous other VCAN providers throughout 2017.

Note, I’m not really going into any detail about VCF as this is a public cloud breakdown. I would recommend a read of Ray Heffer’s fantastic official VMware blog digging deeper into VCF’s underlying architecture.


In addition to the numerous benefits of VCF architecture here are some of the notes I have taken around the IBM partnership.

The big stuff…

  • Fully automated deployment of the VCF stack (vSphere 6.5, VSAN & NSX) on IBM Softlayer dedicated infrastructure.
  • All services are billed directly by IBM.
  • VCF can be consumed as a standalone platform on IBM, as a hybrid cloud through vCenter Enhanced Linked Mode, or cloud-to-cloud between IBM regions through the same mechanism.
  • vCenter-as-a-Service can also be procured as a subscription through IBM, but customers also have the option to procure perpetual licensing if non-VCF license ownership is desired.
  • Availability: before the end of 2017 for IBM, early 2017 for VCA. Other VCAN partners TBA.

The technical stuff…

  • SDDC Manager will not be directly accessible as it is abstracted through the Softlayer Customer Portal. Provisioning, lifecycle tasks, patch management and upgrades are delivered through this portal.
  • NSX completely removes the constraint of IBM Softlayer internal networking (3-4 VLANs).
  • Integrated snapshot based backups of management layer components.
  • VCF best practice: a single management layer governing multiple IBM Softlayer regions.
  • Linking Cloud Foundation environments is achieved through vCenter Enhanced Linked Mode, not via SDDC Manager.
  • Minimum deployment of four hosts (converged management and workload domains).

The value stuff…

  • BYO-Cloud and consume the full VCF stack on a monthly basis.
  • Low latency access to IBM Cloud services (Object Storage, Bluemix, Watson, etc.)
  • Zero cost private datacenter interconnects between IBM Softlayer Regions.
  • True BYO public cloud for those who require full access to all SDDC functions, including the upgrade and patching of individual SDDC components, which are maintained by the customer rather than VMware (or IBM, unless additional services are purchased).
  • Ability to build and manage identical SDDC components both on-prem and in public cloud.

Note that VCF is not the only way to consume VMware on IBM Softlayer as IBM customers have previously been able to select individual VMware technologies and deploy them on IBM Softlayer bare metal. This also allows customers to bring their existing licensing to IBM Cloud, which can be a real bonus when migrating from, or replacing an existing datacenter. Note, as an example of how much complexity is actually involved with deploying an entire SDDC platform independently on IBM Softlayer I would suggest a read of the extremely comprehensive reference architecture here.

vCloud Air (non-VCF services)

Contrary to a number of blogs and articles I have read recently, vCloud Air is here to stay, albeit with a renewed focus to address specific VMware hybrid-cloud challenges. I’m not going to cover the existing vCloud Air service here as it has been available for a while now and we should all know it back to front, right? 🙂

In addition to VCF on vCloud Air, there were numerous announcements including:

  • Enhancements to Hybrid Cloud Manager with the full release of version 2.0, including;
    • Zero downtime Cross-Cloud vMotion utilizing fully integrated WAN optimization and proximity routing. Note: This has no dependency on vSphere 6.x and can be used with vSphere 5.5 today.
    • NSX policy migration.
    • Proxy support.
  • New services for Enterprise DR, Hybrid DMZ and DMZ lite.
  • Enhanced Integrated Identity & Access Management.
  • Increased DPC host memory capacity (up to 1TB per host)

Today, vCloud Air is still the only way to subscribe to a fully managed VMware cloud service and take full advantage of Hybrid Cloud Manager. As an added benefit, the entry point for Dedicated Private Cloud (as a direct comparison) is only a single N+1 host meaning the overall initial commitment is not as significant as the other services.

Summing up…

Although these individual cloud offerings may seem to overlap, they each address a different set of challenges by integrating with key partners who are market leaders in specific hybrid/public cloud capabilities. This puts VMware customers in a unique position of having a choice of multiple clouds depending on individual requirements.

In addition to the above, VMware also has 4000+ vCloud Air Network partners who all offer unique services with VMware software at the core. If I even began to try and break down the breadth of services covered through these partners this blog would turn into War & Peace…

I have only covered a very small amount of high-level info here as I hope to flesh out each service as more information is released. Comments, opinions and feedback in general are always welcome. If you’re attending vForum Australia 2016 I will also be presenting a couple of sessions on VMware Cross-Cloud Architecture and demoing VMC on AWS, so come and say hello and give me your take on this new world…

 

Author: @Kev_McCloud

VCA Dissected – Docker Machine Driver for vCloud Air

If you’ve followed my blog or seen me presenting in the last six months you may have noticed I have developed a keen interest in Cloud Native Apps and DevOps in general. I was lucky enough to present a combined CNA/vCloud Air session at VMWorld this year which was a little different from the hybrid cloud talks I usually give.

In addition to the ‘what-why-how’, I also ran a live demo showing the provisioning and decommissioning of a remotely accessible VCA Docker host, complete with NAT and firewall configuration using two simple commands. Since Las Vegas I have been meaning to post how I constructed the demo, so here it is.

Note: some prior knowledge of basic vCloud Air administration and Docker functionality is assumed…

Docker Machine Driver for vCloud Air

In my previous post I talked about VMs and containers living side by side, as decomposing (or building alongside) monolithic apps can take an extended period of time, or may not be possible at all. To support this notion, VMware has made great strides in the containers space to provide technology that allows organisations to run containers natively on vSphere (through VIC) or Photon Platform, depending on operational requirements and overall maturity with cloud native apps.

However there is one aspect of the VMware CNA vision that is often overlooked, namely vCloud Air. This may be because vCloud Air does not have a native container offering (at the time of writing this post), but it does have support for Docker Machine which is an essential part of the Docker workflow if using Docker Toolbox for administration.

What do we need?

In order to use the Docker Machine Driver for vCloud Air we will need to have a VCA subscription (either Virtual or Dedicated Private Cloud) and a user account with network & compute administrator permissions assigned. With this we can go ahead and create a private template which Docker Machine will use to create our container host. Note, if not specified in our docker-machine create command, Docker Machine will use Ubuntu Server 12.04 LTS from the VCA Public Catalogue by default.

Quick Tip: To create a quick template I used the Ubuntu Server 12.04 LTS image from the VCA Public Catalogue as it already has VMware Tools installed. After I ran my usual VCA Linux template prep (root pw change, network config, ssh config, apt-get update, apt-get upgrade, etc.) I renamed vchs.list to vchs.list.old, found in /etc/apt/sources.list.d/. I did this because when Docker Machine runs through the provisioner process it uses apt-get to retrieve VCA packages from packages.vmware.com, which can sometimes be a little slow to respond. This occasionally results in the provisioner process timing out (as it did in my demo at VMWorld… grrr). Note, after initial template creation it is not necessary to have the packages.vmware.com repo available for the Docker provisioning process.
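
For reference, the repo rename from the Quick Tip above is a one-liner on the template (path as described, assuming sudo access):

~> sudo mv /etc/apt/sources.list.d/vchs.list /etc/apt/sources.list.d/vchs.list.old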

Provided we have access to a VCA routable network and an available public IP address, we can go ahead and run a relatively simple shell script to execute the entire provisioning process. It should be noted that I created this script to be easily distributed to anyone needing quick access to a Docker environment, provided they had the correct VCA permissions. It also avoids storing your VCA password in clear text.

_____________________________________________________________________________________________________

#!/bin/bash
# Simple docker-machine VCA Docker host creation script

read -p "Enter VCA user name: " USER
echo Enter VCA Password:
read -s PASSWORD
echo

docker-machine create --driver vmwarevcloudair \
--vmwarevcloudair-username="$USER" \
--vmwarevcloudair-password="$PASSWORD" \
--vmwarevcloudair-vdcid="M123456789-12345" \
--vmwarevcloudair-catalog="KGLAB" \
--vmwarevcloudair-catalogitem="DMTemplate01" \
--vmwarevcloudair-orgvdcnetwork="KGRTN01" \
--vmwarevcloudair-edgegateway="M123456789-12345" \
--vmwarevcloudair-publicip="x.x.x.x" \
--vmwarevcloudair-cpu-count="1" \
--vmwarevcloudair-memory-size="2048" \
DockerHost01

_____________________________________________________________________________________________________

The expected output is as follows…

[Screenshot: sample docker-machine output]

Note, this is a minimal subset of commands for basic VCA container host provisioning. I have also changed the VDC ID, Edge Gateway ID and public IP in the example script for obvious reasons. A full list of Docker Machine Driver for vCloud Air commands can be found on the Docker website here.

Once the provisioner process is complete, we should have an internet accessible container host configured with 1 vCPU, 2GB of memory with Docker installed, running and listening for client commands on the configured public IP we specified earlier.

To natively connect to this environment from our Docker client we simply enter the following…

_____________________________________________________________________________________________________

~> eval (docker-machine env DockerHost01)

_____________________________________________________________________________________________________
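
Before declaring victory, it’s worth confirming the client really is talking to the remote VCA host. A quick sanity check might look like the below (the nginx image and port mapping are purely illustrative):

_____________________________________________________________________________________________________

#Confirm the client is now pointed at the remote daemon
~> docker info

#Run a throwaway container to prove end-to-end connectivity
~> docker run -d -p 80:80 --name web01 nginx

_____________________________________________________________________________________________________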

That was easy, right? Well… it’s not quite that simple.

The above will create a relatively insecure Docker environment as the edge firewall rules are not locked down at all (as shown below).

[Screenshots: default docker-machine VCA firewall and SNAT/DNAT configuration]

This can be handy for testing internet facing containers quickly as we do not need to explicitly define and lock down the ports needed for external access. However, if this Docker host is intended to become even a little more permanent, we can use VCA-CLI or the VCA web/VCD user interface to alter the rules (at a minimum, port 2376 needs to be open from a trusted source address for client-server communications, plus whatever ports are needed to access containers directly from the internet).

Assuming our environment is temporary, we can also tear it down quickly using:

_____________________________________________________________________________________________________

~> docker-machine rm DockerHost01

_____________________________________________________________________________________________________

So there you have it. The entire provisioning process takes less than 5 mins (once you have set up a template) and decommissioning takes less than 2 mins! In addition to the simple tasks I’ve outlined here we can also use a similar process to create a Docker Swarm cluster, which I will cover in my next post.

As always, if you have any questions or feedback feel free to leave a comment or hit me up on Twitter.

 

Author: @Kev_McCloud

Cloud Native Apps for the Ops Guy – VM’s and Containers Living Together in Harmony

Disclaimer: This is not a technical tutorial on Docker or vSphere Integrated Containers, rather my views on the philosophy and gradual integration of containers into our existing VMware ecosystem.

Containers will replace VMs… Eventually… Maybe…

I recently presented a session at the Melbourne VMUG called "Can VM's and Containers happily coexist?". Though somewhat rhetorical, the title was born out of the protracted argument that containers will somehow surpass VMs in the near future. To condense our session overview into a single sentence: we tackled a brief history of containers, Docker’s rise to fame and the inherent issues that came with that rise. Despite the age of containers, the fresh-faced vendors have yet to prove their worth as a wholesale replacement for virtualisation.

In my first post in this series I described the basic tenets behind Cloud Native Applications, one of which is the 12 Factor App. This framework has arguably become the unofficial guideline to creating applications suitable for a microservices architecture, but it also lends itself perfectly to illustrating why a vast majority of existing monolithic & layered applications are not suitable for decomposition.

It’s also worth bearing in mind that it may be more efficient to build new functionality & services around an existing monolith, a concept Martin Fowler refers to as the “Strangler Application” aka Strangling the Monolith. Simply, if it ain’t broke, don’t fix it… just gradually improve it!

Taking both these factors into consideration it becomes clear that VMs will play their part for existing organisations for some time yet, albeit sharing the limelight with their slimmer, more popular counterparts.

Evolution of the workload…

We’ve all heard the pets vs livestock analogy many times, but a recent focus on microservices and the art of driving economy through mass autoscaling has introduced the ‘Organism’: a computing entity that is minuscule from both a footprint and lifespan perspective and has little impact as an individual, but when combined with other organisms the ‘grouping’ becomes highly dynamic and resilient. Where can we find these organism-type workloads being used to great effect? Think Google, Facebook et al.


Allow me to digress to make a point. A large part of my role is identifying how to modernise datacenter practices to incorporate cloud technology in whatever format aligns best with a strategic outcome. I only mention this because I believe the marketing is well beyond the point where container technology can be useful for a business that has come from traditional mode 1 operations.

In my opinion most organisations are still finding the balance between pets and livestock through evolved lifecycle practices, but that doesn’t mean they can’t take advantage of the numerous benefits of containers beyond those more commonly found in environments with large-scale workload churn.

Note: As an aside, I recently watched “Containerization for the Virtualisation Admin” posted on the Docker blog where the misconception of containers only supporting microservices was dismissed, something I have long been arguing against. Nice one guys.

Sneaking containers to operations…

For ops, the most common first encounter with containers is likely to occur when devs request a large Linux/Windows VM that will eventually become a container host, unbeknownst to the ops team. In almost all cases this means that operations lose visibility of what’s running within the container host(s). Not ideal for monitoring, security, performance troubleshooting and so forth.

In this scenario our devs’ interaction with the rogue Docker hosts may look something like the below:

[Diagram: dev interaction with the rogue Docker hosts]

This approach leaves a lot to be desired from an operational perspective and therefore rules out almost all production scenarios. A better approach is to try to evolve the compute construct to suit existing practices within a business. In VMware’s world, this means treating VM’s and containers as one and the same (at least from an evolutionary perspective).

A box within a box!?

As an atomic unit, do developers care if a container runs in a VM, natively on a public cloud or on bare metal? Well… it depends on the developer, but generally the answer is no. As long as they have free range access to a build/hosting environment with the required characteristics to support the application, all is good. For operations this is an entirely different story. So like in any good relationship, we need to compromise…

Arguably, there is an easy way to get around this issue: create a 1:1 mapping of containers to VMs (I can already hear the container fanboys groaning). Yes, we do lose some of the rapid provisioning benefits (from milliseconds to seconds) and some of the broader Docker ecosystem features, but we don’t have to forklift an environment that we have spent years (and lots of $$$) refining. Anecdotally, it seems we have spent so long trying to create monster VMs that we have forgotten the power of the hypervisor’s ability to balance and isolate numerous tiny VMs.

Admittedly, for some organisations having the bottleneck of provisioning individual VM’s is still a very real headache for the development team…

*Fanfare… Enter vSphere Integrated Containers!

vSphere Integrated Containers (aka VIC) provides our devs with a way to transparently work with containers in a vSphere environment, reducing a lot of the friction traditionally found when operations have to create VMs.

The premise behind VIC is to overlay the container construct onto existing vSphere functionality, but with all the characteristics of a container (isolated, lightweight, portable, etc). This has numerous benefits for operations around resource control/distribution, monitoring and security, using mechanisms that are already well established (and more importantly, well understood) by network and security teams.

[Diagram: overlaying the container construct onto existing vSphere functionality]

To visualise the above using a familiar interface like vCenter: if I run a basic command like <docker run …> from my Docker client against the Docker daemon running on my Virtual Container Host (VCH), vCenter launches an Instant Clone forked VM with a single container running inside. From a vCenter perspective we can see the container running in the same vApp where the VCH and Instant Clone template exist.

[Screenshot: the container VM running in the same vApp as the VCH in vCenter]
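
From the client side, that interaction is just standard Docker commands pointed at the VCH endpoint. A rough sketch, assuming a hypothetical VCH address, that the daemon is exposed on the usual TLS port, and that your build supports the commands in question (see the note below):

~> docker -H vch01.example.com:2376 --tls run -d nginx
~> docker -H vch01.example.com:2376 --tls ps -a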

Note: The version of VIC I used for this screen shot is based on Project Bonneville (detailed here) to show the use of the command <docker ps -a> which displays both running and exited containers. At the time of writing (0.3.0), the VIC beta (available here) did not support certain docker commands, including ps. Based on user feedback there have been some changes to the overall architecture to better align with real world requirements. More to follow soon…

The result is that vSphere admins can enforce control directly from the parent resource pool. We can route, monitor, shape and secure network traffic on the port group assigned to the VCH as the Docker bridge network. We can set CPU/memory shares, reservations and limits to ensure we don’t compromise other workloads… and our devs get access to a Docker environment that operations fully comprehend, with existing operational policies and procedures that can be adapted.

Conclusion.

Before the container/microservices fanboys get up in arms, this post was not intended to show the use of containers for isolated applications or greenfield projects, but rather the integration of a new construct into an existing VMware enterprise. IMO, traditional organisations value the portability, ubiquity and flexibility of Docker across disparate operational platforms more than rapid provisioning and scaling… and we Ops folk need to learn to walk before we can sprint…

In the next post of this series we will start to tackle the challenge of scaling using the same philosophies detailed in this post. See you next time.

Author: @Kev_McCloud

Cloud Native Apps for the Ops Guy – Container Basics

Welcome back, all. In part 1 we covered the basic understanding of Cloud Native Applications (CNA) and, more importantly, its relevance to today’s IT Operations Teams. Let’s start with a quick recap of what we’ve already covered:

  • CNA is an evolution in software development focused on speed and agility, born out of removing the challenges of the traditional SDLC
  • Ops Teams are struggling to fully operationalise modern web-based applications, i.e. build and/or maintain operational practices for hosting cloud native applications
  • There is a vast array of CNA architectures, platforms and tools, all of which are in their relative infancy and require a degree of rationalisation to be useful in the enterprise

I also covered breaking down my understanding of CNA into two areas of research: Foundational Concepts and CNA Enablers, the latter of which we’ll cover in this post.

How can CNA fit into existing IT Operations?..

To see where CNA Enablers might fit, I took a look at the responsibilities of a modern IT Team for application delivery. At a high-level our priorities might cover:

Development Team: Application architecture / resiliency / stability / performance, deployment, version control, data management, UX/UI.

Operations Team: Platform automation / orchestration / availability, scaling, security, authentication, network access, monitoring.

Note, this is a very generic view of the average IT Team dichotomy, but it does illustrate that there is virtually no crossover. More importantly, this shows that the core of operational tasks is still aligned with keeping hosting platform(s) alive, secure and running efficiently. So with this in mind, how do we go about bringing operations and development closer together? Where will we start to see some overlap in responsibilities?

Introducing Containers…

There has been a lot of commotion around containers (and by association, microservices) as the genesis of everything cloud native, however Linux containers have existed for a long time. If we filter the noise a little, it’s clear to see that containers have become essential because they address the lack of standardisation and consistency across development and operations environments, which has become more prevalent with the growing adoption of public clouds like AWS.

So what is all the fuss about? To begin to describe the simple beauty of containers, I like to think of them as a physical box where our developers take care of what’s inside the box, whilst operations ensure that the box is available, wherever it needs to be available. The box becomes the only component that both teams need to manipulate.


To overlay this onto the real world, our devs have to deal with multiple programming languages and frameworks, whilst we (as ops) have numerous platforms to maintain, which often have drastically different performance and security characteristics. If we introduce a container based architecture, the “box” reduces friction by providing a layer of consistency between both teams.
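
To make the “box” hand-off slightly more concrete, it might look something like the below (the image name and registry are hypothetical): the dev side decides what goes into the box, and the ops side simply moves and runs it.

# Dev side: package the app and its dependencies into the box
~> docker build -t registry.example.com/myapp:1.0 .
~> docker push registry.example.com/myapp:1.0

# Ops side: run the exact same box on whichever platform hosts it
~> docker pull registry.example.com/myapp:1.0
~> docker run -d registry.example.com/myapp:1.0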

Note: There are plenty of awesome blogs and articles which describe the technical construct of a container in minute detail. If this is your area of interest, get Googling…

Architecture basics…

Now for me it was also important to understand that containers are not the only way to deploy a cloud native architecture (please refer to this excellent post from my VMware colleague @mreferre), but also to acknowledge that they are important for a number of reasons, namely:

  • They provide a portable, consistent runtime across multiple platforms (desktop, bare metal, private & public cloud)
  • They have a much smaller, more dynamic resource footprint
  • They can be manipulated entirely via API
  • They start in milliseconds, not seconds or minutes
  • They strip away some layers which could be considered to add significant ‘bloat’ to a traditional deployment
  • Taking into account all of the above they provide a better platform for stateless services

If we compare a “traditional” VM deployment to a containerised construct (diagram below), it’s evident that gen 2 (i.e. monolithic / single code base) apps often have a larger resource overhead because of their reliance on vertical scaling and the tight coupling of their constituent parts. If we have to move (or redeploy) a gen 2 app, we need to move (or redeploy) everything northbound of the VM layer, which can be considerable if we are moving app data as well.

[Diagram: a traditional VM deployment compared with a containerised construct]

Note: The above diagram is not intended to show a refactoring from gen 2 to gen 3, but instead how the same applications might look if architected differently from scratch.

From an operational perspective, gen 3 (i.e. cloud native) apps leverage containers and have a far greater focus on horizontal scaling, whilst greatly increasing consolidation of supporting platform resources.

As a comparison, when moving gen 3 apps between environments we only have to push the updated app code and supporting binaries/libraries not included in the base OS. This means we have a much smaller package to move (or redeploy) as the VM, guest OS and other supporting components already exist at the destination. Deployment therefore becomes far more rapid with far less dependency.

Now this is all very exciting, but in reality gen 2 and gen 3 will have to coexist for some time yet, therefore it’s probably best to have a strategy that supports both worlds. For this reason, I am researching the synergies between the two constructs which is where I believe many IT shops will thrive in the near term.

Where do we begin?..

If we start with a minimal platform, all we really need to be able to build a containerised application is: a host, an OS which supports a container runtime and a client for access. It’s entirely possible to build containerised applications in this way, but obviously we are severely limited in scalability. Once we go beyond a single host platform, management becomes far more complex and therefore requires greater sophistication in our control plane. But I guess we should try to walk before we run…
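
At that minimal end of the scale, the whole loop can be exercised with a single command from the client (assuming Docker is already installed and running on the host):

~> docker run hello-world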

Let’s take a closer look at some of the layers of abstraction we will be working with. Note: So as not to confuse myself with too many technologies, I’ve focused my research on VMware’s Photon (for obvious reasons) and Docker, which I believe has firmly established itself as the leader in container and container management software.

Container Engine / Runtime – This is the software layer responsible for running multiple, isolated systems (i.e. containers) on a single host, using kernel features such as cgroups and namespaces to give each container its own view of CPU, memory, block I/O and network. It is also responsible for scheduling critical container functions (create, start, stop, destroy) in much the same way a hypervisor does.

In the case of Docker, it’s also the runtime that manages tasks from the Docker Daemon which is the interface that exposes the Docker API for client-server interaction (through socket or REST API).
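
As a concrete illustration of that client-server interaction, the daemon’s API can be queried directly over its local Unix socket without using the Docker client at all (a quick sketch from a host where Docker is running; requires curl 7.40 or later and appropriate permissions on the socket):

# Ask the local Docker daemon for its version via the Engine API
~> curl --unix-socket /var/run/docker.sock http://localhost/version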

Container OS – A container OS (as the name would suggest) is an operating system which provides all the binaries and libraries needed to run our code. It also enables the container engine to interact with the underlying host by providing the hardware interfacing operations and other critical OS services.

Photon is VMware’s open source Linux operating system, optimised for containers. In addition to Docker, Photon also supports rkt and Garden, meaning we are not limited to a single container engine. It’s also fully supported on vSphere (and therefore vCloud Air) and it has no problems running on AWS, Azure and Google Compute Engine (though it may not be fully supported by these service providers at the time of writing).

Note: If you feel like having a play around with Photon (ISO), it can be downloaded from here, deployed directly from the public catalogue in vCloud Air, or if you want to build your own Photon image you can also fork it directly from GitHub.

Host – Our operating system still needs somewhere to run. I believe that for most of us, virtual machines are still best used here because of the sophistication in security, management and monitoring capabilities. In the short term it means we can run our containers and VM’s side by side, but it should be noted that we can also run our container OS on bare metal and schedule container operations through the control plane.

Platform – A platform in the context of operations is simply a hosting environment. This could be a laptop with AppCatalyst or Fusion, vSphere and / or private and public cloud, really any environment that is capable of hosting a container OS and the ecosystem of tools needed to manage our containers.

Basic Container Usage…

In order to make this an effective approach for our devs, they need self-service access to deploy code and consume resources as they see fit. The simplest approach for our devs is to deploy in an environment where they have full control over the resources, like their laptop.

Once we go beyond the dev laptop, our platforms might include on-premises virtual infrastructure, bare metal and public cloud. The platform itself is not really that important to our dev’s provided it has the capabilities needed to support the application. So ops really need to concentrate on transparently supporting our dev’s ability to operate at scale. With that comes operational changes, which might include:

  • Secure access to the container runtime (via a container scheduling interface, which we’ll cover in the next post)
  • Internal network communications to support containerised services to function at scale, including virtual routing/switching, distributed firewall, load balancing, message queuing, etc
  • Secure internet and/or production network communications for application front end network traffic
  • Support for auto-scaling and infrastructure lifecycle management, including configuration management, asset management, service discovery, etc
  • Authentication across the entire stack defined through identity management and role based access controls (RBAC)
  • Monitoring throughout the entire infrastructure stack (including the containers!)
  • Patching container OS / runtime and all supporting platforms

Now I realise this is only scratching the surface, but if we listed all of the operational changes needed to incorporate this mode of delivery we would be here all day. For this reason I’m ignoring CI/CD and automation tools for the time being. Don’t get me wrong, they are absolutely critical to building a reliable self-service capability for our dev’s, but for now they are just adding a layer of complexity which is not going to aid our understanding. We’ll break it down in a later post.

So there you have it. In looking at the simple benefits that containers provide, we quickly begin to realise why so many organisations are developing cloud native capability. In the next post we’ll start to look at some of the realities of introducing a cloud native capability to our operations when working at scale.


 

Author: @Kev_McCloud