vCloud Air Dissected – pt.1 Understanding VPC Compute

Introduction

In this series of posts I will be dissecting areas of vCloud Air (vCA) that I get grilled on regularly. Before reading on, please note that I am writing these blogs from the vCA consumer perspective. For the vSphere Admin, interacting purely at the consumption layer can seem unnatural. I’m hoping this series will reveal enough detail to demonstrate VMware’s objectives in designing a global public cloud platform with the vSphere Admin in mind.

I’m going to assume some basic reader understanding of the key differences between Virtual Private Cloud (VPC), VPC OnDemand and Dedicated Private Cloud. There is a wealth of 101-style vCloud Air info covered in Cloud Academy to get you started if you are not already armed with this knowledge. In addition, I’ll point you to a great blog from @mreferre, who has the perfect vehicular analogy for the basic Virtual Data Centre (VDC) construct. With that out of the way, let’s get started.


VPC compute basics…

One of the areas I get questioned on regularly is how CPU is consumed and scoped for vCloud Air. In this post we are going to focus on CPU consumption within a VPC VDC.

VPC compute is made up of securely shared memory and CPU, procured in blocks of 10GHz vCPU / 20GB vRAM. vRAM is relatively easy to understand: whatever has been allocated (read: configured) to your powered-on VMs equates to what is consumed within your VDC. The upside to this characteristic is that vRAM within your VDC is 100% guaranteed, which is key to vCA’s multi-tenancy performance. Note: The logical vRAM maximum for a single VM is limited to what is available on a single host, minus the hypervisor overhead (~240GB at the time of writing).
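To make that concrete, here is a trivial sketch of the vRAM math (the VM names and sizes below are made up purely for illustration):

```python
# Illustrative only: vRAM consumed in a VPC VDC is simply the sum of the
# configured vRAM of powered-on VMs, and that allocation is 100% guaranteed.
BLOCK_VRAM_GB = 20  # vRAM included with each purchased block

# Hypothetical inventory: (name, configured vRAM in GB, powered on?)
vms = [("web01", 8, True), ("db01", 16, True), ("test01", 4, False)]

consumed_gb = sum(ram for _, ram, powered_on in vms if powered_on)
blocks_needed = -(-consumed_gb // BLOCK_VRAM_GB)  # ceiling division

print(f"Consumed vRAM: {consumed_gb}GB -> {blocks_needed} block(s) of vRAM")
```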


Core VPC Allocation

vCPU is a little trickier to explain as consumption is not static like vRAM. vCA CPU is procured in aggregated GHz, i.e. pooled clock cycles across a number of physical hosts. This does not, however, mean that a single workload’s processing is distributed across hosts. Regardless of how many vCPUs have been configured, a VM can only execute on a single host. It’s important to remember that vCA’s underlying hypervisor is vSphere, and therefore the same rules apply in vCloud Air.

Going back to my introduction briefly, those who are familiar with vSphere will want details on guarantees, oversubscription etc. The key to working with a multi-tenanted cloud is accepting that low-level functions (like the ability to set reservations, limits and shares) are often abstracted from the consumer. In principle, this is how vCA ensures equal resources for all tenants. It wouldn’t be fair to have some customers more equal than others, right?

Note: DPC does allow the consumer to configure compute shares and reservations at the VM level; however, I will cover this in part 2 of this series.

How VPC compute is consumed…

VPC’s current architecture utilises hosts with 2x 8-core Intel x86 processors rated at 2.60GHz (at the time of writing). Contrary to popular belief, vCPU is not throttled within vCA, so you can run at a theoretical maximum of ~2.6GHz per vCPU. Conceptually, within a 10GHz allocation you could run 3x single-vCPU VMs at 100% utilisation before you would potentially run into resource contention, i.e. 3x 2.6GHz = 7.8GHz, leaving 2.2GHz of headroom.
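If you prefer to see that as code, here is a minimal sketch of the same arithmetic (figures as per the current architecture above; nothing here is an official formula):

```python
# Illustrative only: how many vCPUs can run flat out inside one VPC block.
BLOCK_GHZ = 10.0   # CPU included with each purchased VPC block
CORE_GHZ = 2.6     # rated clock of the underlying cores (at the time of writing)

full_speed_vcpus = int(BLOCK_GHZ // CORE_GHZ)           # 3
headroom_ghz = BLOCK_GHZ - full_speed_vcpus * CORE_GHZ  # 2.2

print(f"{full_speed_vcpus} vCPUs at {CORE_GHZ}GHz with {headroom_ghz:.1f}GHz to spare")
```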

If we were to launch a fourth VM, the expected behaviour is for the scheduler to balance all VMs within the VDC, throttling each by the percentage of overcommitment.


Note: In the above example this could theoretically be a ~100MHz penalty per VM; however, this would only be a point-in-time snapshot of the potential impact to workloads. The scheduler executes thousands of transactions per second, so any impact would not be uniform across all workloads. This scenario is pretty unrealistic, but it helps me articulate the limits in this explanation.
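Here is that point-in-time maths as a sketch, assuming every VM is demanding a full 2.6GHz at the same instant (which, as noted above, never really happens):

```python
# Illustrative only: rough per-VM throttle when demand exceeds the pooled GHz.
# The real scheduler rebalances constantly, so this is a point-in-time view.
POOL_GHZ = 10.0
DEMAND_PER_VM_GHZ = 2.6  # each single-vCPU VM trying to run flat out
vm_count = 4

total_demand_ghz = vm_count * DEMAND_PER_VM_GHZ               # 10.4GHz
fair_share_ghz = min(DEMAND_PER_VM_GHZ, POOL_GHZ / vm_count)  # 2.5GHz
penalty_mhz = (DEMAND_PER_VM_GHZ - fair_share_ghz) * 1000     # ~100MHz

print(f"Demand {total_demand_ghz:.1f}GHz vs pool {POOL_GHZ}GHz "
      f"-> ~{penalty_mhz:.0f}MHz penalty per VM")
```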

If we were using a larger compute allocation (50GHz CPU, 100GB RAM), we could potentially run a single VM with 16 vCPUs at a maximum clock speed of ~2.4GHz on a single host. The remaining host compute is reserved for hypervisor overhead and is not included as part of our overall consumption.


Using this same example, you can assign up to 32 vCPUs to a single VM* (when deployed from the vCloud Director UI); however, the vCPUs would be capped at ~1.2GHz when executing simultaneously, due to hyper-threading on the underlying host processors. It’s important to note that multithreading characteristics also depend on the OS and application, and should be taken into consideration when sizing a vCA environment.

*Interestingly, when deploying a new VM from the vCloud Director web interface you have the ability to assign up to 64 vCPUs to a single VM, but you will be swiftly issued with an error message upon creation.
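As a rough mental model (and nothing more), per-vCPU clock for a single large VM falls off once you go past the physical core count of the host. The sketch below assumes a 2x 8-core hyper-threaded host and a notional ~0.2GHz per-core hypervisor overhead, which is my own working figure rather than a published number:

```python
# Illustrative only: approximate per-vCPU clock for one large VM on a
# 2x 8-core, hyper-threaded host (16 physical cores / 32 logical threads).
PHYSICAL_CORES = 16
LOGICAL_THREADS = 32
CORE_GHZ = 2.6
OVERHEAD_GHZ = 0.2  # assumed per-core hypervisor overhead (not a published figure)

def approx_per_vcpu_ghz(vcpus: int) -> float:
    usable = CORE_GHZ - OVERHEAD_GHZ
    if vcpus <= PHYSICAL_CORES:
        return usable        # one vCPU per physical core
    if vcpus <= LOGICAL_THREADS:
        return usable / 2    # two vCPUs share each core via hyper-threading
    raise ValueError("exceeds the logical threads available on a single host")

print(f"{approx_per_vcpu_ghz(16):.1f}GHz per vCPU for a 16 vCPU VM")  # ~2.4GHz
print(f"{approx_per_vcpu_ghz(32):.1f}GHz per vCPU for a 32 vCPU VM")  # ~1.2GHz
```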

vCA CPU Reservation

Definition: A reservation is a guarantee of the specified amount of physical resources regardless of the total number of shares in the environment.

What’s not immediately obvious (and can’t be observed from the basic vCA dashboard) is that a powered-on VM automatically reserves 260MHz per vCPU and will scale up its reservation in 260MHz blocks. Theoretically you could power on 38 single-vCPU VMs, all using less than 260MHz, before admission control stopped you from powering on subsequent VMs within your VDC.
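A quick sketch of why that happens, using the 10GHz block from earlier (illustrative numbers only):

```python
# Illustrative only: per-vCPU reservations can strand CPU that the dashboard
# still shows as available, because admission control checks reservations.
POOL_MHZ = 10_000               # one 10GHz block
RESERVATION_PER_VCPU_MHZ = 260  # automatic reservation per powered-on vCPU

max_single_vcpu_vms = POOL_MHZ // RESERVATION_PER_VCPU_MHZ     # 38
reserved_mhz = max_single_vcpu_vms * RESERVATION_PER_VCPU_MHZ  # 9,880MHz
stranded_mhz = POOL_MHZ - reserved_mhz                         # 120MHz left,
                                                               # not enough for VM #39

print(max_single_vcpu_vms, reserved_mhz, stranded_mhz)
```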


Again, this scenario is unrealistic, but it helps to explain why your dashboard may be showing available CPU that you cannot access. It is also important to note that the dashboard is not realtime (it updates every 5 minutes) and is therefore not a reliable source of data for troubleshooting. Monitoring is a broad topic on its own, so I’ll have to cover it in a later post.

When it comes to scoping, most VMware environments are typically RAM-bound, with CPU usually clocking in anywhere between 5-30% utilisation during non-peak periods. With this in mind (and the info in this post), it is pretty simple to put together some basic math to calculate the minimum requirements for a VPC environment (but don’t forget to factor in overhead for peak periods). VPC compute add-ons are also instantaneous when procured from My VMware, so if your math is a little skewed don’t worry, we’ve got you covered.
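If you want a starting point for that basic math, here is a back-of-the-envelope sketch. The inventory, utilisation figures and peak factor are all made up for the example; plug in your own numbers:

```python
# Illustrative only: rough minimum VPC sizing from an existing VM inventory.
BLOCK_GHZ, BLOCK_RAM_GB = 10.0, 20  # one VPC compute block
CORE_GHZ = 2.6
PEAK_FACTOR = 2.0  # assumed headroom multiplier for peak periods

# Hypothetical inventory: (vCPUs, average CPU utilisation, configured vRAM GB)
vms = [(2, 0.15, 8), (4, 0.30, 16), (1, 0.05, 4), (2, 0.20, 8)]

cpu_ghz = sum(v * CORE_GHZ * u for v, u, _ in vms) * PEAK_FACTOR
ram_gb = sum(r for _, _, r in vms)  # vRAM is fully guaranteed, so size on allocation

blocks = int(max(-(-cpu_ghz // BLOCK_GHZ), -(-ram_gb // BLOCK_RAM_GB)))
print(f"~{cpu_ghz:.1f}GHz CPU and {ram_gb}GB vRAM -> {blocks} compute block(s)")
```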

I hope that this post has been useful in understanding some of the nuances of vCA Compute. Feel free to message me if you want more info on VPC compute or any aspect of vCA in general. -K

 

Author: @Kev_McCLoud

 
