capacity of VMware vSphere 4.0.
Aside from the sizable scale increase enabled in this version, vSphere 4.1's main advances are evolutionary extensions of capabilities that improve how the platform handles VM resource contention. During tests, I used the new I/O controls in networking and storage to govern resource use.
IT managers who are already
accustomed to using resource controls
in VM CPU settings will have a leg up
when it comes to using I/O controls in
both network and storage areas. Even with the
CPU control heritage,
my use of network and
storage control features
revealed a fair number
of “version 1” limitations.
Network I/O control prioritizes network traffic by type when using network resource pools and the native VMware vNetwork Distributed Switch. Network I/O control works only with the 4.1 version of the distributed switch, not with the Cisco Nexus 1000V or the standard switch from VMware.
Implementation is simple
While it takes advanced network
expertise to design and tune the policy that runs network I/O controls,
the implementation of the feature is
quite simple. Entering the parameter
changes to enable the feature and set
the specific physical network adapter
shares is simply a matter of walking
through a couple of configuration
screens that are easily accessed from
the vSphere client.
I was able to assign a low, normal, high or custom setting that designated the number of network shares (a policy designation that represents the relative importance of virtual machines that are using the same shared resources) that would be allocated to virtual machine, management and fault-tolerant traffic flows.
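To make the shares arithmetic concrete: under contention, each active traffic type receives bandwidth in proportion to its share value. The short Python sketch below illustrates the math only; the 25/50/100 preset values and the 10,000M-bps link speed are illustrative assumptions of mine, not figures from my test bed.

```python
# Illustrative sketch of proportional shares: under contention, each active
# traffic type gets bandwidth in proportion to its share value. The 25/50/100
# presets approximate vSphere's low/normal/high levels (an assumption here),
# and the 10,000M-bps link is an arbitrary example.
SHARE_PRESETS = {'low': 25, 'normal': 50, 'high': 100}

def allocate_bandwidth(link_mbps, flows):
    """Split link_mbps among active flows in proportion to their shares."""
    total = sum(flows.values())
    return {name: link_mbps * shares / total for name, shares in flows.items()}

# VM traffic set to high; management and fault-tolerant traffic left at normal.
flows = {'virtualMachine': SHARE_PRESETS['high'],
         'management': SHARE_PRESETS['normal'],
         'faultTolerance': SHARE_PRESETS['normal']}
print(allocate_bandwidth(10000, flows))
# virtualMachine gets 5,000M bps; management and faultTolerance get 2,500M bps each.
```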
Storage I/O controls were equally
easy to configure once the policy decisions and physical prerequisites were
met. In my relatively modest test
environment, it was no trouble to
run storage I/O controls on a single
vCenter Server. I tested this feature
on an iSCSI-connected storage array.
It also works on Fibre Channel-connected storage, but not on NFS or Raw Device Mapping storage.
Virtual machines can be limited
based on IOPS (I/O operations per
second) or MB per second. In either
case, I used storage I/O controls to
limit some virtual machines in order
to give others priority.
I found that the large number of considerations (for example, each virtual disk associated with each VM must be placed under control for the limit to be enforced) meant that I spent a great deal of time working out policy to get a modest amount of benefit when my systems were running.
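For administrators who would rather script these settings than click through the vSphere client, per-disk limits can also be applied through the vSphere API. The following pyVmomi sketch is a minimal illustration, assuming a vCenter connection, a VM named test-vm-01 and a 500-IOPS cap; those names and values are placeholders of mine, not details from my test environment, and the usage should be checked against the 4.1 SDK documentation.

```python
# Minimal pyVmomi sketch: cap every virtual disk on one VM at 500 IOPS.
# As noted above, the limit must be set on each disk for enforcement to be
# complete. Host, credentials and VM name are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'test-vm-01')  # placeholder name

changes = []
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        # limit is expressed in IOPS; -1 means unlimited
        dev.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
            limit=500)
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=dev))

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes))
Disconnect(si)
```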
VMware included a handy memory innovation in vSphere 4.1 called memory compression, which compresses memory pages in RAM to reduce swapping and improve performance. Since accessing this compressed memory is significantly faster than swapping memory pages to disk, the virtual machines ran much faster with this feature than they did when it was disabled and the same workloads were run.
IT managers should
expect to devote at least
several weeks of expert
analysis to determine the
most effective memory
compression configuration for each workload.
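The knobs themselves live in each host's advanced options. The sketch below, again using pyVmomi, adjusts what I understand to be the relevant ESX 4.1 settings, Mem.MemZipEnable and Mem.MemZipMaxPct (the compression cache expressed as a percentage of guest memory); the key names and values are assumptions to verify against your own build, and the host name is a placeholder.

```python
# Illustrative pyVmomi sketch: adjust memory compression on one host through
# its advanced options. Mem.MemZipEnable and Mem.MemZipMaxPct are the ESX 4.1
# setting names as I recall them; confirm before use. Names are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esx01.example.com')  # placeholder

opt_mgr = host.configManager.advancedOption
opt_mgr.UpdateOptions(changedValue=[
    # value types vary by key; some builds expect a long integer here
    vim.option.OptionValue(key='Mem.MemZipEnable', value=1),   # 1 = on, 0 = off
    vim.option.OptionValue(key='Mem.MemZipMaxPct', value=10),  # cache cap, percent
])
Disconnect(si)
```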
VMware did some housekeeping in this incremental release of
vSphere. The vSphere client is still
available in the vCenter 4.1 installation bits, but it is no longer included
in the ESX and ESXi code. There also
were some minor changes made to
various interface screens, but there
was nothing that would puzzle an
experienced IT administrator.
Technical Director Cameron Sturdevant
can be reached at firstname.lastname@example.org.