VMware Cloud(s) Dissected – A VMware Public Cloud Platform Comparison…

Intro: So you may have been following my VCA Dissected series, but in line with the recent expansion of VMware Cloud Services my role as a Cloud Specialist has diversified to include all things VMware & Cloud. With that in mind, a series name change is in order… So VCA Dissected becomes VMware Cloud(s) Dissected.

All of the (VMware) Clouds…

Holy moly, it’s been a crazy few months on the road with VMWorld! So many game-changing announcements delivered through keynotes, breakout sessions and group discussions. In addition to general announcements on vSphere 6.5, EUC and Cloud Native Apps, we were also introduced to several new VMware public cloud offerings and associated services. For the purposes of clarity, I’m going to give a high-level breakdown of each platform within VMware’s Cross-Cloud Architecture (not including Cross-Cloud Services) to try and illustrate where each will be most effective.

First things first. If you haven’t watched the day one keynote from VMWorld Europe, I highly recommend you do so… (click on the image to view the recording. If you’re not interested in the reasoning behind the vision, skip to about 30 mins in).


To summarize, P.G. talked through his predictions for cloud consumption trends in the near (and not so near) future, which set the stage to announce VMware Cross-Cloud Architecture: a set of converged software services incorporating major partnerships with leaders in hyper-scale cloud. So let’s dig a little deeper.

Note: There were plenty of disclaimers and forward-looking statements on tech previews in the VMWorld presentations, public FAQs, demos and press releases, so please understand that anything I mention here is subject to change as more information is released.

VMware Cloud (VMC) on Amazon Web Services

Boom, the cat’s finally out of the bag. As many of the talking heads have pointed out this is about as significant as any cloud partnership could be. Here are some of the highlights I have chosen from the recent VMWorld VMC sessions.


The big stuff…

  • The VMware SDDC stack (vSphere 6.5, VSAN & NSX) available within AWS data centers, on AWS infrastructure dedicated to this service.
  • VMC procurement, provisioning and lifecycle is via the VMC customer portal.
  • VMC upgrades, maintenance and billing are exclusively managed by VMware.
  • Non-VMC services are still billed and managed by AWS directly.
  • VMC can be consumed as a standalone platform on AWS, as a hybrid cloud through vCenter Enhanced Linked Mode, or (in the future) cloud-to-cloud between AWS regions/availability zones through the same mechanism.
  • Continuous upgrades of the SDDC components (including vCenter) on AWS will be scheduled and executed by VMware.
  • Billed by the hour, or procured for a reduced price over 12 or 36 months in a similar commercial model to AWS reserved instances. Customers will also be able to leverage their existing investments in VMware licenses through VMware customer loyalty programs.
  • Availability mid-2017.

The technical stuff…

  • Initial deployment of between 4 and 64 hosts, which can be scaled manually or by;
  • Elastic Distributed Resource Scheduler (Elastic DRS), which dynamically adds and removes physical hosts based on predefined EDRS rules.
  • Enhanced Linked Mode enables inventory management, content library synchronization, etc. of AWS VMC hosts from on-prem vCenter.
  • Each tenancy uses the AWS VPC construct for logical isolation.
  • Edge/perimeter services are provided by NSX Edge Services Gateway, not AWS VPC network services.
  • Full VMC integration with AWS Direct Connect.
  • VMC and AWS user accounts are linked, but separate interfaces and authentication is required for services unique to each vendor.
  • Administrators have direct access to vCenter UI and REST APIs.
  • VMware-defined RBAC limits the installation of untested third-party software with custom VIBs.
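VMware hasn’t published how the EDRS policy engine actually works, but threshold-based scale-out/scale-in rules of this kind generally reduce to something like the sketch below. All function names, thresholds and the evaluation model here are purely illustrative assumptions of mine, not VMware’s API:

```python
# Conceptual sketch of a threshold-based elastic scaling rule, in the
# spirit of Elastic DRS. Names and numbers are illustrative only.

def edrs_decision(cpu_util, mem_util, host_count,
                  min_hosts=4, max_hosts=64,
                  scale_out_at=0.80, scale_in_at=0.40):
    """Return 'add-host', 'remove-host' or 'no-op' for one evaluation cycle."""
    hot = max(cpu_util, mem_util)  # scale on the most constrained resource
    if hot >= scale_out_at and host_count < max_hosts:
        return "add-host"
    if hot <= scale_in_at and host_count > min_hosts:
        return "remove-host"
    return "no-op"
```

Note how the 4-host minimum and 64-host maximum from the deployment bullet above act as hard floors and ceilings on any rule-driven scaling.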

The value…

  • Simply put, industry leading SDDC platform on an industry leading hyper-scale public cloud. Truly the best of both worlds.
  • The ability to easily integrate and extend our VMware IaaS platform to incorporate AWS storage, data, application and automation specific services.
  • Intra-region/availability-zone efficiency through low-latency connectivity to AWS services, avoiding costs incurred when data and network traffic leaves the AWS region.
  • Zero downtime workload migration to VMC-on-AWS through Cross vCenter Server vMotion.
  • Maintenance and upgrade of SDDC platform components managed entirely by VMware.

There’s not a whole bunch of detailed information on VMC right now as it’s early days, but Frank Denneman’s blog and the AWS blog are good places to start. Note, during the ‘Closer Look’ VMWorld breakout session it was also acknowledged that a number of announcements are still to be revealed at AWS re:Invent at the end of November.

VMware Cloud Foundation (VCF) on IBM Softlayer

VMware Cloud Foundation is the same SDDC stack (vSphere, VSAN, NSX) as VMC, but with VMware SDDC Manager as the overlay software which handles platform deployment, configuration and ongoing SDDC lifecycle tasks for specific use cases. What makes VCF different from VMC (other than the obvious partnerships) is that Cloud Foundation can be deployed privately within our own datacenters in addition to public cloud.

The global partnership with IBM was announced at VMWorld Las Vegas, and IBM will be the first global cloud service provider to offer Cloud Foundation. vCloud Air will also join IBM in the near future, in addition to numerous other VCAN providers throughout 2017.

Note, I’m not really going into any detail about VCF as this is a public cloud breakdown. I would recommend a read of Ray Heffer’s fantastic official VMware blog digging deeper into VCF’s underlying architecture.


In addition to the numerous benefits of the VCF architecture, here are some of the notes I have taken around the IBM partnership.

The big stuff…

  • Fully automated deployment of the VCF stack (vSphere 6.5, VSAN & NSX) on IBM Softlayer dedicated infrastructure.
  • All services are billed directly by IBM.
  • VCF can be consumed as a standalone platform on IBM, as a hybrid cloud through vCenter Enhanced Linked Mode, or cloud-to-cloud between IBM regions through the same mechanism.
  • vCenter-as-a-Service can also be procured as a subscription through IBM, but customers have the option to procure perpetual licensing if non-VCF license ownership is desired.
  • Availability: before the end of 2017 for IBM, early 2017 for VCA. Other VCAN partners TBA.

The technical stuff…

  • SDDC Manager will not be directly accessible, as it is abstracted through the Softlayer Customer Portal. Provisioning, lifecycle tasks, patch management and upgrades are delivered through this portal.
  • NSX completely removes the constraint of IBM Softlayer internal networking (3-4 VLANs).
  • Integrated snapshot based backups of management layer components.
  • VCF best practice: a single management layer governing multiple IBM Softlayer regions.
  • Linking Cloud Foundation environments is achieved through vCenter Enhanced Linked Mode, not via SDDC Manager.
  • Minimum deployment of four hosts (converged management and workload domains).

The value stuff…

  • BYO-Cloud and consume the full VCF stack on a monthly basis.
  • Low latency access to IBM Cloud services (Object Storage, Bluemix, Watson, etc.)
  • Zero cost private datacenter interconnects between IBM Softlayer Regions.
  • True BYO public cloud for those who require full access to all SDDC functions, including the upgrade and patching of individual SDDC components which is maintained by the customer, not VMware (or IBM without additional services).
  • Ability to build and manage identical SDDC components both on-prem and in public cloud.

Note that VCF is not the only way to consume VMware on IBM Softlayer, as IBM customers have previously been able to select individual VMware technologies and deploy them on IBM Softlayer bare metal. This also allows customers to bring their existing licensing to IBM Cloud, which can be a real bonus when migrating from, or replacing, an existing datacenter. As an example of how much complexity is actually involved in deploying an entire SDDC platform independently on IBM Softlayer, I would suggest a read of the extremely comprehensive reference architecture here.

vCloud Air (non-VCF services)

Contrary to a number of blogs and articles I have read recently, vCloud Air is here to stay, albeit with a renewed focus to address specific VMware hybrid-cloud challenges. I’m not going to cover the existing vCloud Air service here as it has been available for a while now and we should all know it back to front, right? 🙂

In addition to VCF on vCloud Air, there were numerous announcements, including:

  • Enhancements to Hybrid Cloud Manager with the full release of version 2.0, including:
    • Zero downtime Cross-Cloud vMotion utilizing fully integrated WAN optimization and proximity routing. Note: This has no dependency on vSphere 6.x and can be used with vSphere 5.5 today.
    • NSX policy migration.
    • Proxy support.
  • New services for Enterprise DR, Hybrid DMZ and DMZ lite.
  • Enhanced Integrated Identity & Access Management.
  • Increased DPC host memory capacity (up to 1TB per host).

Today, vCloud Air is still the only way to subscribe to a fully managed VMware cloud service and take full advantage of Hybrid Cloud Manager. As an added benefit, the entry point for Dedicated Private Cloud (as a direct comparison) is only a single N+1 host, meaning the overall initial commitment is not as significant as for the other services.

Summing up…

Although these individual cloud offerings may seem to overlap, they each address a different set of challenges by integrating with key partners who are market leaders in specific hybrid/public cloud capabilities. This puts VMware customers in a unique position of having a choice of multiple clouds depending on individual requirements.

In addition to the above, VMware also has 4000+ vCloud Air Network partners who all offer unique services with VMware software at the core. If I even began to try and break down the breadth of services covered through these partners this blog would turn into War & Peace…

I have only covered a very small amount of high-level info here as I hope to flesh out each service as more information is released. Comments, opinions and feedback in general are always welcome. If you’re attending vForum Australia 2016 I will also be presenting a couple of sessions on VMware Cross-Cloud Architecture and demoing VMC on AWS, so come and say hello and give me your take on this new world…


Author: @Kev_McCloud

VCA Dissected – Docker Machine Driver for vCloud Air

If you’ve followed my blog or seen me presenting in the last six months you may have noticed I have developed a keen interest in Cloud Native Apps and DevOps in general. I was lucky enough to present a combined CNA/vCloud Air session at VMWorld this year, which was a little different from the hybrid cloud talks I usually give.

In addition to the ‘what-why-how’, I also ran a live demo showing the provisioning and decommissioning of a remotely accessible VCA Docker host, complete with NAT and firewall configuration, using two simple commands. Since Las Vegas I have been meaning to post how I constructed the demo, so here it is.

Note: some prior knowledge of basic vCloud Air administration and Docker functionality is assumed…

Docker Machine Driver for vCloud Air

In my previous post I talked about VMs and containers living side by side, as decomposing (or building alongside) monolithic apps can take an extended period of time, or may not be possible at all. To support this notion, VMware has made great strides in the containers space to provide technology that allows organisations to run containers natively on vSphere (through VIC) or Photon Platform, depending on operational requirements and overall maturity with cloud native apps.

However, there is one aspect of the VMware CNA vision that is often overlooked, namely vCloud Air. This may be because vCloud Air does not have a native container offering (at the time of writing this post), but it does have support for Docker Machine, which is an essential part of the Docker workflow if using Docker Toolbox for administration.

What do we need?

In order to use the Docker Machine Driver for vCloud Air we will need to have a VCA subscription (either Virtual or Dedicated Private Cloud) and a user account with network & compute administrator permissions assigned. With this we can go ahead and create a private template which Docker Machine will use to create our container host. Note, if not specified in our docker-machine create command, Docker Machine will use Ubuntu Server 12.04 LTS from the VCA Public Catalogue by default.

Quick Tip: To create a quick template I used the Ubuntu Server 12.04 LTS image from the VCA Public Catalogue as it already has VMware Tools installed. After I ran my usual VCA Linux template prep (root pw change, network config, ssh config, apt-get update, apt-get upgrade, etc.), I renamed vchs.list to vchs.list.old in /etc/apt/sources.list.d/. I did this because when Docker Machine runs through the provisioner process it uses apt-get to retrieve VCA packages from packages.vmware.com, which can sometimes be a little slow to respond. This occasionally results in the provisioner process timing out (as it did in my demo at VMWorld… grrr). Note, post initial template creation it is not necessary to have the packages.vmware.com repo available for the Docker provisioning process.

Provided we have access to a VCA routable network and an available public IP address, we can go ahead and run a relatively simple shell script to execute the entire provisioning process. It should be noted that I created this script to be easily distributed to anyone needing quick access to a Docker environment, provided they had the correct VCA permissions. It also avoids storing your VCA password in clear text.


#Simple docker-machine VCA docker host creation script

read -p "Enter VCA user name: " USER
echo Enter VCA Password:
read -s PASSWORD

docker-machine create --driver vmwarevcloudair \
--vmwarevcloudair-username="$USER" \
--vmwarevcloudair-password="$PASSWORD" \
--vmwarevcloudair-vdcid="M123456789-12345" \
--vmwarevcloudair-catalog="KGLAB" \
--vmwarevcloudair-catalogitem="DMTemplate01" \
--vmwarevcloudair-orgvdcnetwork="KGRTN01" \
--vmwarevcloudair-edgegateway="M123456789-12345" \
--vmwarevcloudair-publicip="x.x.x.x" \
--vmwarevcloudair-cpu-count="1" \
--vmwarevcloudair-memory-size="2048"


The expected output is as follows…

Sample docker-machine output

Note, this is a minimal subset of options for basic VCA container host provisioning. I have also changed the VDC ID, Edge Gateway ID and public IP in the example script for obvious reasons. A full list of Docker Machine Driver for vCloud Air options can be found on the Docker website here.

Once the provisioner process is complete, we should have an internet-accessible container host configured with 1 vCPU and 2GB of memory, with Docker installed, running and listening for client commands on the public IP we specified earlier.

To natively connect to this environment from our Docker client we simply enter the following…


~> eval (docker-machine env DockerHost01)


That was easy, right? Well… it’s not quite that simple.

The above will create a relatively insecure Docker environment as the edge firewall rules are not locked down at all (as shown below).

Default docker-machine VCA Firewall configuration
Default docker-machine VCA SNAT/DNAT configuration

This can be handy for quickly testing internet-facing containers as we do not need to explicitly define and lock down the ports needed for external access. However, if this Docker host is intended to become even a little more permanent, we can use VCA-CLI or the VCA web/vCD user interface to alter the rules (at a minimum, port 2376 needs to be open from a trusted source address for client-server communications, plus whatever ports are needed to access containers directly from the internet).

Assuming our environment is temporary, we can also tear it down quickly using:


~> docker-machine rm DockerHost01


So there you have it. The entire provisioning process takes less than 5 mins (once you have set up a template) and decommissioning takes less than 2 mins! In addition to the simple tasks I’ve outlined here, we can also use a similar process to create a Docker Swarm cluster, which I will cover in my next post.

As always, if you have any questions or feedback feel free to leave a comment or hit me up on Twitter.


Author: @Kev_McCloud

vCloud Air @VMWorld 2015 pt.2

Wow. It’s been a crazy couple of months for VMware with the talking heads spewing opinion on the Dell/EMC acquisition and vCloud Air nearly spinning out into VirtuStream (sic). It’s been a real shame that all this shareholder “news” was incredibly distracting during the fantastic events that are VMWorld Barcelona and vForum Australia.

For those of you that missed the announcements here is a breakdown of the recent vCloud Air futures in addition to those from my first post.

VMware vSphere Integrated Containers (VIC) – This is not specifically a vCloud Air capability; however, it will be critical to the future of the platform. VIC builds upon the existing support for Photon OS, enabling support for containerised applications on vCloud Air. This is probably the most interesting innovation in VMware’s portfolio and one that I am spending a lot of time delving into to get a better understanding of what’s to come.

Have a gander at the below clip for an introduction to VIC (aka Project Bonneville)

VMware vCloud Air Monitoring Insight – A stripped-down, SaaS/subscription delivery model of vROps-like monitoring. Expect operational metrics, event logs, and user-defined alarms that provide analytics into cloud service operations. These metrics provide information on infrastructure, application health and platform performance and are also exposed via API. To be honest, this is an essential cloud operations requirement rather than an innovation, but definitely one of the most requested platform enhancements and a most welcome addition.

Enhanced Identity Access Management – Finally we can federate our Active Directory and other SAML 2.0-compliant identity services with vCloud Air. Again, table stakes in the public cloud game, but critical to automated application deployment and cloud security.

Project Michigan – This extends the capabilities of Advanced Networking Services and Hybrid Cloud Manager to all vCloud Air services. Currently HCM and ANS are only available on Dedicated Private Cloud due to the fact that this was the first platform which supported integrated NSX. Having the tools available across all platforms will be a complete game changer for VMware and cannot come soon enough…

On a side note, I had taken a brief hiatus from blogging whilst I figured out what all the recent noise meant for VMware and my current role as a Cloud Specialist within this awesome company. Now the proverbial cat is out of the bag, I am hoping just to get on with what I do best: soapboxing on the technology I am passionate about and ranting about the state of IT…


Author: @Kev_McCloud

vCloud Air @VMWorld 2015 pt.1

VMWorld was bigger than ever this year with 23,000+ attendees. Announcements came in droves and vCloud Air certainly didn’t miss out on its fair share. I will be covering all of these in a lot more detail in coming posts, but for now here’s a high-level summary.

Some of these announcements have been doing the rounds for a while, whilst others came out of the blue. Here is a breakdown of what got me childishly clapping at the screen whilst I watched remotely.

Hybrid Cloud Manager: The replacement for the vCloud Air vSphere Web Client Plugin and vCloud Connector. It also adds advanced hybrid network capabilities based on NSX technology.

Enhanced Hybrid Cloud Management: Greatly improved administration and management of vCloud Air workloads from vCenter.

Enhanced Virtual Machine Migration: Replication-based migration of VMs over Direct Connect and VPN with automated cutover (ASAP, scheduled and optimised) and finally an exposed API. Oh, and the all important addition of WAN optimisation!

Infrastructure extension: Stretch multiple L2 segments (up to 200!) into vCloud Air over Direct Connect or VPN. This replaces vCloud Connector Stretch Deploy, which is a very welcome change if you have ever tried to use Stretch Deploy in anger…

Project Skyscraper was also announced during the keynote on Day One of VMWorld and got all the ‘ooohs’ and ‘aahhs’ of a world-class fireworks display. It has been a long time coming but unfortunately is still in technology preview (*sad face). It includes:

Cross-Cloud vMotion: This had to be one of the coolest demos of the entire event. vSphere vMotion into vCloud Air (and back again). Zero downtime for workload migration to public cloud. That is an absolute first! You can guarantee I will be covering this in great detail!

Cross-Cloud Content Sync: Synchronise VM templates, vApps, ISOs, and scripts from vSphere to vCloud Air via the content catalog.

Advanced Networking Services: The long-awaited NSX integration into vCloud Air. There’s way too much here to go into any detail. High-level enhancements include:

Enhanced Edge Services: Current vCA Edge capabilities + support for dynamic routing (eBGP/OSPF), SSL-VPN, enhanced load balancer, multi-interface trunking (True L2 Stretch), enhanced stateful edge firewall + more.

Trust groups: Stateful in-kernel L2/3 firewall (micro-segmentation), Object based rules, 5-tuples mapped to apps + more.

Compliance and operations: Enhanced API, Distributed firewall logs, Load balancer health checks + more.

Site Recovery Manager Air (SRM Air): In terms of revision, DR2C (or Disaster Recovery to Cloud) is in its second incarnation. v2.0 gave us reverse replication, fail-back and multi point-in-time recovery, in addition to other small improvements. Enhanced Disaster Recovery, aka SRM Air (or DR2C 3.0 if you will), adds SRM capabilities in a SaaS delivery model. Expect application-centric automated workflows that orchestrate testing, failover and failback of VMs to and from vCloud Air.

DR2C will also move away from a subscription model to an OnDemand, metered-by-the-minute pricing model where you only pay for the storage you consume plus a per-VM monthly subscription. Watch this space!

EMC & Google Cloud Object Storage: Object storage has been a cornerstone of public cloud for many years now. VMware has partnered with EMC and Google Cloud to provide the following services integrated directly into vCloud Air.

Google Cloud Object Storage: Available in three flavours. Standard, Durable Reduced Availability and Nearline. I think these services are all pretty self explanatory but for more info see here.

EMC Object Storage: Geo-protected distribution, located in the same data centers as vCloud Air, S3 compatible API. Currently only available in the US, but coming to other global vCA data centers in the near future.

vCloud Air SQL (DBaaS): Again relatively self explanatory. vCA SQL is a managed database service initially based on, you guessed it, Microsoft SQL. This was announced last year but has made it into the fold this year.

Backend-as-a-Service: vCloud Air’s first foray into the PaaS world outside of the partnership with federation brethren, Pivotal. Most impressive in my opinion is Kinvey, a vCloud Air-integrated mobile development platform with complete backend services for end-to-end delivery of mobile applications: identity management, data services, business logic, analytics + more.

Apparently you can build and deploy a complete mobile application without writing a single line of code! I find this difficult to believe but I look forward to giving it a go.

There were a number of other smaller announcements which weren’t as exciting (to me at least). If you want any more info about any of the above feel free to give me a shout.

Author: @Kev_McCloud

Fusion 8 – vCloud Air Integration

True story… I spend a lot of my daytime hours working with VMware Fusion Pro for BAU and local lab tasks. Fusion has been a mainstay for me since switching to a MacBook Pro as my main workhorse a number of years ago. I test a lot of beta software and build & break stuff for training. Fusion has always had me covered.

I’ve also gotten used to working with a variety of interfaces when testing various cloud platforms, but recently most of my time has been spent in vCloud Air (obvz). So when I fired up Fusion 8 Pro this morning and found the <VMWARE VCLOUD AIR> header in my navigation pane, I was understandably excited.

vCA Login

A simple click onto the header reveals login fields and a couple of contextual links to vCloud Air (bonus points to the product team for adding <Remember Password> checkbox).

Screen Shot 2015-08-26 at 7.13.32 am

I enter my login credentials and *BAAM*. Within a couple of seconds, full access to all of my Subscriptions and VPC OnDemand. I will admit that I was surprised by how rapidly Fusion was able to display and allow interaction with my full vCA inventory across multiple SIDs/VDCs.

I’m hoping to see a more advanced federated ID type service integrated into Fusion in the near future, but this will do for now.

VM Remote Console (VMRC)

Hands down, one of the best features of this Fusion release is VMRC access to vCA workloads. No messing with firewall and NAT rules. Just plain VMRC…

Screen Shot 2015-08-26 at 10.33.50 am

The result is that Operations can continue to administer the vCA platform and assign access to VDCs based on login. Developers (or users) who have no delegated administrative control in vCA can log in via Fusion and get access to the resources they need. No additional URLs to send out. No training users on multiple interfaces. They just continue to use Fusion in the same way they always have…

Transport to vCA

As for workload transport, we can still export to OVF/OVA and upload to our Private Catalogue in vCA…

Screen Shot 2015-08-26 at 10.35.16 am

…but why would we when we can now drag ‘n’ drop our local VMs into vCloud Air! Select the VM, drag it to our VDC of choice, rename if required and click <Upload> to confirm. Simple.

Screen Shot 2015-08-26 at 11.37.01 am

Note: One small gotcha (why is there always a gotcha…). In order to use this method of migration we need to update Fusion tools to the latest version, and then downgrade the hardware version to a maximum of v10. The VM can be upgraded again post migration (edit: when supported), which is a small hassle, but in general this method rocks!

Screen Shot 2015-08-26 at 11.48.44 am

And that’s it… A ~5GB VM took just under 10 minutes to move from Fusion to vCA (AU South). Of course, the network still needs to be set up on the vCA side to support intra-VDC and external network communication, but if the VM is standalone then nothing else is required.
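As a rough sanity check on that transfer (assuming the ~5GB and just-under-10-minute figures above, and decimal units), the sustained throughput works out to roughly 67 Mbps:

```python
# Back-of-the-envelope throughput for the Fusion-to-vCA upload.
size_bytes = 5 * 10**9          # ~5GB, decimal
seconds = 10 * 60               # just under 10 minutes
mbps = size_bytes * 8 / seconds / 10**6
print(f"{mbps:.0f} Mbps sustained")   # roughly 67 Mbps
```

So the bottleneck here is almost certainly my uplink, not the vCA ingest side.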

Screen Shot 2015-08-26 at 12.14.23 pm

More detail to come in the near future. Happy Fusion’ing 😉


Author: @Kev_McCloud

vCloud Air Dissected – pt.3 Understanding DR2C Compute

vCloud Air Disaster Recovery (aka Disaster Recovery to Cloud, aka DR2C) is by far the easiest entry point and the most in demand offering from vCloud Air. However, it also seems to be the most misunderstood service from a compute scoping perspective. Let’s break it down!

I won’t be covering DR2C in its entirety as there are numerous solution briefs, videos, white boards etc, most of which can be found in the links I featured here. My colleague @davehill99 also has a great blog covering everything from DR2C basics to advanced architecture.

DR2C Basics

DR2C is a subscription service built around VPC compute (covered in part 1 of this series), so we know that the same characteristics apply. What’s important to note is that workloads replicated via on-premises vSphere Replication consume storage in VCA, but do not reserve any compute. So what does this mean for the consumer?

First off, let’s cover the two scenarios which require access to active compute in DR2C. Unlike a VPC Virtual Data Centre (VDC), DR2C is not intended to run active VMs outside of Testing or Recovery, defined as follows:

Testing: Launch protected VMs in an isolated VDC for a period of 7 days. VM data continues to be asynchronously replicated for the entire duration of the test.

Recovery (or planned migration): Launch protected VMs, accessible in a predefined production topology, for 30 days (and beyond if required). VM replication (on-premises > vCA) is ceased for the entire duration that recovery workloads are active. Note: reverse replication and failback are available for DR2C 2.0 customers (vR & vCenter 6.0 and above).

Screen Shot 2015-08-19 at 2.27.01 pm

Scoping DR2C Compute…

Scoping compute for DR2C is a little different from other cloud services as we really need to analyse our procedures for testing and recovery. Let me explain…

DR2C compute is procured in the same blocks as VPC (10GHz/20GB) and scales in exactly the same way. In order to keep the cost of the service low (for the consumer) it is recommended that we only procure as much compute as we need to guarantee. I’ll come back to this in a second.

Screen Shot 2015-08-20 at 7.38.46 am
Note: This figure is conceptual and not intended to show any relationship between compute & storage

The above example shows a VDC capable of supporting 10GHz CPU / 20GB memory / 4TB standard storage. Conceptually, let’s say that we need 40GHz CPU / 80GB memory to fully stand up our replicated VMs for a test. Using this allocation only allows us to test (or recover) a subset of our total footprint within the core subscription.

This is where temporary add-ons come in. We can procure additional compute resources on a temporary basis (1 week for testing, 1 month for recovery) from My VMware to support any additional compute we will need for either scenario. This gives us the flexibility to test individual application stacks within the confines of the core subscription, or temporarily extend to full capacity to test (or recover) everything at once.
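Working the numbers from the example above (10GHz/20GB per block, a 40GHz/80GB recovery footprint, and one guaranteed core block), the add-on count falls out of a simple ceiling calculation. This is my own arithmetic sketch, not a VMware sizing tool:

```python
import math

# DR2C/VPC compute is sold in blocks of 10GHz CPU and 20GB RAM.
BLOCK_GHZ, BLOCK_GB = 10, 20

def blocks_needed(ghz, gb):
    """Subscription blocks required to stand up a given recovery footprint."""
    return max(math.ceil(ghz / BLOCK_GHZ), math.ceil(gb / BLOCK_GB))

# Full failover of the example workload set: 40GHz CPU / 80GB memory.
total = blocks_needed(40, 80)       # 4 blocks in total
core = 1                            # the guaranteed core subscription
temporary_addons = total - core     # 3 temporary add-ons at test/recovery time
```

The same function tells you how far a partial test of one application stack fits inside the core subscription before any add-ons are needed.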

Screen Shot 2015-08-19 at 2.36.02 pm
Temporary subscription compute add-ons

How much to guarantee?..

The quantity of compute we choose to guarantee is completely dependent on our risk profile and overall Recovery Time Objective (RTO). The more compute we guarantee, the higher our monthly cost, much like an insurance policy.

Procuring add-ons will add a small amount of time to our overall recovery, but in reality this is a small inconvenience considering the potential cost savings. A simple rule of thumb would be to protect critical workloads (Tier 1) with guaranteed compute to reduce downtime, but scale up with add-ons for everything else (Tier 2 & 3).

It’s also important to remember that we can utilise active capacity in VPC, VPC OnDemand and DPC to augment our DR strategy. The VMware solution brief on Advanced Architecture for DR2C really helps to understand how different vCA services hang together to form a complete solution.

…and there you have it. Pretty simple once we know the limitations. That said, most of the complexity in DR strategy lies in the network. I’ll save that for the next post.



vCloud Air Dissected – pt.2 Understanding DPC Compute

So we’ve already covered some of the basic elements of scoping vCA compute in part 1. Let’s delve into Dedicated Private Cloud (DPC) to show how the compute differs from the services built on Virtual Private Cloud (VPC, DR2C and OnDemand).

Again, if any of this seems unfamiliar please check out pt.1 Understanding VPC Compute or have a fish around Cloud Academy for some vCA goodness.

DPC compute basics

Unlike the multi-tenanted Virtual Private Cloud service, Dedicated Private Cloud (DPC) assigns physically dedicated hosts to each tenant (as the name might suggest). This has numerous benefits for compute, specifically:

  • Physical isolation for the purpose of security and licensing compliance
  • The ability to split resources across multiple VDCs
  • The ability to control oversubscription at a VDC/VM level

Note: There are other benefits for network, which I will cover in another post.

From a consumer perspective the services are almost identical; however, there are a few additional parameters within the DPC service which can be configured from both the vCA and vCD web portals.

Physical allocation

A great place to start is understanding what we have access to when working with DPC. The physical specification of the hosts is identical to VPC (2x 8 core x86 Intel processors @2.6GHz and 256GB RAM in AU South at the time of writing). This gives us access to a core subscription of 35GHz CPU and 240GB RAM (the remaining resources are reserved for the hypervisor).

From a compute standpoint our workloads have access to a single host's resources, but in reality we actually have two hosts (1x Active / 1x HA) within our DPC environment. High Availability has been a critical component of vSphere for many years now and by no means is it excluded from vCA.

[Image: DPC core subscription = 1x Active + 1x HA hosts]

DPC scales in this manner until we reach 7 hosts (1x core & 6x add-ons), at which point a second HA host is added to our DPC environment. As a rule, every 7 hosts in our allocation adds an HA host, capped at 28 active hosts, which maxes out the underlying 32-host cluster (28x Active + 4x HA).

[Image: DPC core + 6x compute add-ons]

So why is this important? Probably the most obvious reason (other than curiosity) is licensing compliance. For the more restrictive software vendors (*cough*Oracle) it is critical to know how many hosts are in our environment as a whole.

Correction: Since I wrote this post the model has changed to horizontally scale HA hosts from 16 hosts onwards (i.e. one HA host until you reach 14x active hosts, at which point a second is added).

DPC compute consumption

As DPC compute is dedicated to a single (master) tenant, all compute resources are 100% guaranteed. It is completely up to the account administrator(s) how resources are split over a single VDC or multiple VDCs within the organisation.

The ability to create multiple VDCs is a powerful tool as it gives us the choice to run heavily oversubscribed (think test/dev) or with room to stretch (think mission critical) within the same subscription. It's also useful as a subtenancy construct for service providers (to be covered in a later post).
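As a hypothetical illustration, here's how one dedicated host's usable resources might be carved into two VDCs with different oversubscription policies. The VDC names, splits and ratios are mine, not a prescribed layout:

```python
# Usable resources of a single dedicated host (AU South spec).
HOST_CPU_GHZ, HOST_RAM_GB = 35, 240

# name: (physical CPU GHz, physical RAM GB, oversubscription ratio)
# Figures are illustrative only.
vdcs = {
    "prod":    (25, 180, 1.0),  # room to stretch: no oversubscription
    "testdev": (10, 60, 4.0),   # heavily oversubscribed
}

# Sanity check: physical carve-up cannot exceed the host.
assert sum(cpu for cpu, _, _ in vdcs.values()) <= HOST_CPU_GHZ
assert sum(ram for _, ram, _ in vdcs.values()) <= HOST_RAM_GB

for name, (cpu, ram, ratio) in vdcs.items():
    # The ratio governs how much virtual allocation sits on the
    # physical slice -- e.g. 4:1 lets test/dev pack in 4x the vCPU.
    print(f"{name}: up to {cpu * ratio:.0f} GHz vCPU / {ram * ratio:.0f} GB vRAM")
```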

[Image: VDC resource config]

Once we dive into the configuration for an individual VM we can set shares, reservations and limits as a mechanism to balance workloads according to priority when our VDC is in contention. By default the reservation is set to ‘0’ (as pictured below) for both CPU and memory (unlike VPC, which automatically reserves 100% of the memory allocation for powered on VMs).

[Image: VM compute resource config]

Sticking with the default configuration (NORMAL, 0, Unlimited) will allow us to run heavily oversubscribed within our VDC with all workloads having equal access to resources. We get far greater control over the individual performance of each VM once we start pulling levers (read: configuring shares, reservations and limits). We’re not going to cover the specifics of resource management in this post, but I recommend reading this blog from @FrankDenneman as a start.
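To build some intuition for how those levers interact, here's a toy allocation model in Python. It is emphatically not the real ESXi scheduler, just a simplified approximation, and the VM names, share values and capacity figures are hypothetical:

```python
def divvy(vms, capacity):
    """Toy shares/reservation/limit allocator under contention.
    vms maps name -> (shares, reservation, limit). An illustrative
    approximation, not the actual ESXi scheduler."""
    # Every VM gets its reservation off the top.
    alloc = {name: res for name, (_, res, _) in vms.items()}
    remaining = capacity - sum(alloc.values())
    active = {name for name, (_, _, lim) in vms.items() if alloc[name] < lim}
    while remaining > 1e-9 and active:
        total_shares = sum(vms[n][0] for n in active)
        # Hand out spare capacity in proportion to shares...
        grants = {n: remaining * vms[n][0] / total_shares for n in active}
        remaining = 0.0
        for n, grant in grants.items():
            _, _, lim = vms[n]
            take = min(grant, lim - alloc[n])
            alloc[n] += take
            remaining += grant - take  # ...recycling anything a limit refused
            if alloc[n] >= lim - 1e-9:
                active.discard(n)
    return alloc

# Hypothetical VMs competing for 10 GHz of VDC capacity.
alloc = divvy(
    {"db":    (2000, 2.0, float("inf")),   # HIGH shares + 2 GHz reservation
     "web":   (1000, 0.0, float("inf")),   # NORMAL shares, defaults
     "batch": (1000, 0.0, 1.5)},           # NORMAL shares, 1.5 GHz limit
    capacity=10.0,
)
print({n: round(v, 2) for n, v in alloc.items()})
# → {'db': 6.33, 'web': 2.17, 'batch': 1.5}
```

Even in this crude model you can see the behaviour described above: the reserved VM is guaranteed its floor, the limited VM is clamped, and the leftover is split by shares.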


… is not as straightforward as VPC, where we can use memory as a measure for sizing the environment (assuming memory is the most common resource for contention). If you are familiar with scoping and building a vSphere environment from scratch then this shouldn’t be too much of a chore.

When moving VMs from an existing vSphere environment to vCA, there are a number of tools that can be used to predictively analyse virtual resource utilisation, like vRealize Operations Manager or VMware Infrastructure Planner. If we're not concerned with peak utilisation, we can also use a point-in-time capture tool like RVTools. The key is to understand CPU and memory utilisation across a known resource quantity. As long as we know what is currently being consumed, we know how to size the compute for DPC.

For example, if we take a typical vSphere environment of 3 hosts (100GHz CPU / 384GB RAM total) running at 15% CPU and 50% memory utilisation, we need 15GHz CPU and 192GB RAM, which is within the thresholds of a core subscription (35GHz / 240GB). However, if memory utilisation is higher, say 75% (288GB), then we need a minimum of a core subscription plus a single compute add-on (70GHz / 480GB).

Before anyone gets fired up, I realise that the above is a drastic oversimplification for anything but the simplest migrations. It’s only intended to show that we don’t need to go to ridiculous lengths to get an indicative scope.
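For what it's worth, that back-of-the-envelope calculation reduces to a few lines of Python (the 35GHz / 240GB unit capacity is the AU South spec quoted earlier):

```python
from math import ceil

# Usable capacity per subscription unit (core or add-on), AU South spec.
UNIT_CPU_GHZ, UNIT_RAM_GB = 35, 240

def dpc_units(total_cpu_ghz, cpu_util, total_ram_gb, ram_util):
    """Subscription units (core + add-ons) needed to absorb the
    observed utilisation of an existing vSphere environment."""
    need_cpu = total_cpu_ghz * cpu_util
    need_ram = total_ram_gb * ram_util
    # Whichever resource demands more units wins.
    return max(ceil(need_cpu / UNIT_CPU_GHZ), ceil(need_ram / UNIT_RAM_GB))

print(dpc_units(100, 0.15, 384, 0.50))  # 15GHz / 192GB -> 1 (core only)
print(dpc_units(100, 0.15, 384, 0.75))  # 15GHz / 288GB -> 2 (core + 1 add-on)
```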

Note: We can use the same methodology for VPC, however we would need to right-size the workloads before they are migrated, as vRAM is 100% reserved for powered on VMs.

Simple right?


Author: @Kev_McCloud