Monday, August 18, 2008

VMware VMFS vs RDM (Raw Device Mapping)

Recently I read a couple of articles with performance comparison charts from VMware, NetApp, and some of the forum communities, and I found that real-world performance is quite different from what the technical white papers I had read earlier suggested.

Today, more users are deploying mission-critical and high-I/O servers in virtualized environments, but we regularly see I/O bottlenecks caused by storage performance. VMDK files provide flexibility from a management perspective, but they can sacrifice the disk performance you may need for databases and file transfers. I ran a few tests using real-case scenarios instead of the widely used IOmeter, and here are the summarized results I would like to share.

Disk performance is usually split into two categories: sequential and random I/O. With sequential workloads you will see a huge difference when transferring files locally or across the network. My test environment runs on Fibre Channel SAN storage, with the same LUN size and RAID group created at the storage level; the only difference is VMFS versus raw. A rough sketch of this kind of test follows the environment details below.

RAID group: 7+1 RAID 5, running in a MetaLUN configuration

Each LUN: 300 GB

Performance monitoring tool: VirtualCenter performance charts

Test VM: 4 vCPUs, 8 GB memory

Guest operating systems: SLES 10 x32, x64; Windows Server 2003 x32, x64
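
For reference, here is a minimal sketch in Python of the kind of sequential and random read test I am describing. The file path, file size, and block sizes are placeholders rather than the exact values from my runs, and the test file is assumed to already exist on the datastore under comparison; in a real run you would also drop the guest page cache or use direct I/O so caching does not hide the storage behaviour.

import os
import random
import time

# Hypothetical test file on the datastore under test (VMFS or RDM backed);
# the path and sizes are placeholders, not the values used in the actual test.
# The file is assumed to have been created beforehand (e.g. with dd).
TEST_FILE = "/mnt/testlun/io_test.dat"
SEQ_BLOCK = 1024 * 1024      # 1 MB blocks for the sequential pass
RND_BLOCK = 8192             # 8 KB blocks for the random pass

def sequential_read(path, block_size=SEQ_BLOCK):
    """Read the file front to back and report throughput in MB/s."""
    total = 0
    start = time.time()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    return total / (time.time() - start) / 1024**2

def random_read(path, reads=4096, block_size=RND_BLOCK):
    """Issue small reads at random offsets and report throughput in MB/s."""
    size = os.path.getsize(path)
    total = 0
    start = time.time()
    with open(path, "rb", buffering=0) as f:
        for _ in range(reads):
            f.seek(random.randrange(0, size - block_size))
            total += len(f.read(block_size))
    return total / (time.time() - start) / 1024**2

if __name__ == "__main__":
    print("sequential: %.1f MB/s" % sequential_read(TEST_FILE))
    print("random:     %.1f MB/s" % random_read(TEST_FILE))

The same script can be pointed first at a disk on a VMFS datastore and then at an RDM-backed disk, while watching the VirtualCenter performance charts during each run.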

Sequential: RDM outperforms VMFS, achieving more than 2x higher throughput during local file transfers on the VM.

Random I/O: Raw Device Mapping still outperforms VMFS, reaching throughput similar to the sequential file transfer. The test ran multiple sessions issuing random database queries.
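
As a rough illustration of what I mean by multiple sessions issuing random queries, here is a small sketch that uses SQLite purely as a stand-in; the database path, table layout, and session count are assumptions for illustration, not the actual database or query mix used in the test.

import random
import sqlite3
import threading
import time

# Hypothetical stand-in for the multi-session random query load: several
# threads each issue random-key lookups against a database file that lives
# on the datastore being compared (VMFS vs RDM). The path, table layout,
# and session count below are assumptions, not the real test setup.
DB_PATH = "/mnt/testlun/querytest.db"
SESSIONS = 8
QUERIES_PER_SESSION = 1000
ROW_COUNT = 1000000          # assumed number of rows pre-loaded into the table

def session_worker(results, idx):
    # Each session gets its own connection, as separate clients would.
    conn = sqlite3.connect(DB_PATH)
    cur = conn.cursor()
    start = time.time()
    for _ in range(QUERIES_PER_SESSION):
        key = random.randrange(1, ROW_COUNT + 1)
        cur.execute("SELECT payload FROM rows WHERE id = ?", (key,))
        cur.fetchone()
    results[idx] = QUERIES_PER_SESSION / (time.time() - start)
    conn.close()

if __name__ == "__main__":
    results = [0.0] * SESSIONS
    threads = [threading.Thread(target=session_worker, args=(results, i))
               for i in range(SESSIONS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("per-session queries/sec:", ["%.0f" % r for r in results])
    print("aggregate queries/sec: %.0f" % sum(results))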

For NFS file transfers from VMFS to VMFS, I see the bottleneck appear much earlier than with RDM.

