Prerequisites to meet before enabling the vCenter services below.
So how does it work? At a basic level, SIOC monitors the end-to-end latency of a datastore. When there is congestion (the latency is higher than the configured threshold), SIOC reduces the latency by throttling back VMs that are generating excessive I/O. Now you might say, "I need that VM to have all of those I/Os," which in many cases is true; you simply need to give the VMDK(s) of that VM a higher share value. SIOC uses the share values assigned to the VM's VMDKs to prioritize access to the datastore.
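The share-based prioritization described above can be sketched in Python. This is an illustrative model, not SIOC's actual algorithm: the threshold value, function name, and VMDK names are assumptions, and real SIOC adjusts per-host device queue depths rather than handing out explicit IOPS limits.

```python
# Illustrative sketch of SIOC-style prioritization (not the real algorithm):
# when observed datastore latency exceeds the congestion threshold, each
# VMDK's slice of the datastore's I/O capacity is proportional to its shares.

CONGESTION_THRESHOLD_MS = 30  # assumed threshold for this sketch


def allocate_iops(vmdk_shares, total_iops, observed_latency_ms):
    """Return per-VMDK IOPS limits; None means unthrottled."""
    if observed_latency_ms <= CONGESTION_THRESHOLD_MS:
        # No congestion: every VMDK may use as much I/O as it wants.
        return {name: None for name in vmdk_shares}
    total_shares = sum(vmdk_shares.values())
    # Under congestion, divide capacity in proportion to share values.
    return {name: total_iops * s / total_shares
            for name, s in vmdk_shares.items()}


limits = allocate_iops({"db.vmdk": 2000, "web.vmdk": 1000, "log.vmdk": 1000},
                       total_iops=8000, observed_latency_ms=45)
# db.vmdk is allowed 4000 IOPS; web.vmdk and log.vmdk get 2000 each
```

The key point the sketch captures is that shares only matter during contention: below the latency threshold, no VMDK is throttled regardless of its share value.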
Before using vSphere DRS, the following requirements must be met:
- vCenter Server needs to be installed.
- CPUs in ESXi hosts must be compatible.
- To use DRS for load balancing, hosts in the DRS cluster must be part of a vMotion migration network.
- All hosts should use shared storage, with volumes accessible by all hosts.
- Shared storage needs to be large enough to store all virtual disks for the VMs.
- DRS works best if the VMs meet the vSphere vMotion requirements, listed below.
- The VM must not have a connection to an internal standard switch.
- The VM must not be connected to any device physically available to only one ESXi host, such as disk storage, CD/DVD drives, floppy drives, and serial ports.
- The VM must not have a CPU affinity configured.
- The VM must have all disk, configuration, log, and NVRAM files stored on a datastore accessible from both ESXi hosts.
- If the VM uses RDM, the destination ESXi host must be able to access it.
The physical adapter shares assigned to a network resource pool determine the share of the total available bandwidth guaranteed to the traffic associated with that network resource pool. The share of transmit bandwidth available to a network resource pool is determined by the network resource pool's shares and what other network resource pools are actively transmitting. For example, if you set your FT traffic and iSCSI traffic resource pools to 100 shares each, while each of the four other resource pools is set to 50 shares, the FT traffic and iSCSI traffic resource pools each receive 25% of the available bandwidth. The remaining resource pools each receive 12.5% of the available bandwidth. These shares take effect only when the physical adapter is saturated.
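The arithmetic in the example above is simple proportional division, which a short Python sketch makes explicit. The pool names are illustrative; only the share values come from the text.

```python
# Share arithmetic for network resource pools: each actively transmitting
# pool receives bandwidth in proportion to its shares (only when the
# physical adapter is saturated).

def bandwidth_fraction(shares):
    """Map each pool name to its fraction of the saturated link's bandwidth."""
    total = sum(shares.values())
    return {pool: s / total for pool, s in shares.items()}


# FT and iSCSI at 100 shares; four other pools at 50 shares each
pools = {"ft": 100, "iscsi": 100, "mgmt": 50,
         "nfs": 50, "vmotion": 50, "vm": 50}
fractions = bandwidth_fraction(pools)
# ft and iscsi each get 0.25 (25%); the other four pools each get 0.125 (12.5%)
```

Note that the denominator only includes pools that are actively transmitting, so an idle pool's shares do not reduce anyone else's bandwidth.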
- Fault Tolerance logging and vMotion networking configured.
- vSphere HA cluster created and enabled.
- Hosts must use supported processors.
- Hosts must be licensed for Fault Tolerance.
- The configuration for each host must have Hardware Virtualization (HV) enabled in the BIOS.
- vCenter: Because vSphere HA is an enterprise-class feature, it requires vCenter Server before it can be enabled.
- Access to shared storage: All hosts in the HA cluster must have access and visibility to the same shared storage; otherwise, they would have no access to the VMs.
- Access to same network.
DirectPath I/O vs SR-IOV
SR-IOV offers performance benefits and tradeoffs similar to those of DirectPath I/O. DirectPath I/O and SR-IOV have similar functionality, but you use them to accomplish different things. SR-IOV is beneficial in workloads with very high packet rates or very low latency requirements. Like DirectPath I/O, SR-IOV is not compatible with certain core virtualization features, such as vMotion. SR-IOV does, however, allow a single physical device to be shared amongst multiple guests.
With DirectPath I/O you can map only one physical function to one virtual machine. SR-IOV lets you share a single physical device, allowing multiple virtual machines to connect directly to the physical function through its virtual functions.
This functionality allows you to virtualize low-latency (less than 50 microseconds) and high packet rate (greater than 50,000 packets per second) workloads, such as network appliances or purpose-built solutions, in a virtual machine.