Recently I read a couple of articles with performance comparison charts from VMware, NetApp, and some of the forum communities, and I found that real-world performance differs quite a bit from the technical white papers I had read before.

As of today, more users are deploying mission-critical, high-I/O servers in virtualized environments, but we frequently see I/O bottlenecks caused by storage performance. VMDK provides flexibility from a management perspective, but it can sacrifice the performance you may need for databases, file transfers, and general disk throughput. I ran a couple of tests with real-world scenarios instead of the widely used Iometer, and here are the summarized results I would like to share.
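To give an idea of what I mean by a real-case test, here is a rough Python sketch of the kind of file-transfer timing I am talking about. This is only an illustration, not my actual test harness; the paths and sizes are placeholders you would adjust for your own guest OS.

```python
import os
import time

SRC = "/tmp/testfile.bin"   # placeholder paths, adjust for your guest OS
DST = "/tmp/testcopy.bin"
SIZE = 1024**3              # 1 GiB test file, large enough to get past small caches
CHUNK = 1024 * 1024         # 1 MiB per write, a typical large sequential block

# Create a source file filled with random (incompressible) data.
with open(SRC, "wb") as f:
    for _ in range(SIZE // CHUNK):
        f.write(os.urandom(CHUNK))

# Time a plain file copy, which is closer to a real transfer than a synthetic load.
start = time.time()
with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while True:
        buf = src.read(CHUNK)
        if not buf:
            break
        dst.write(buf)
    dst.flush()
    os.fsync(dst.fileno())  # make sure the data hits disk before stopping the clock
elapsed = time.time() - start

print(f"sequential copy: {SIZE / 1024**2 / elapsed:.1f} MB/s")
```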

In disk performance, we usually split workloads into two categories: sequential and random I/O. In sequential mode, you will see a huge difference when you perform a file transfer, whether locally or over the network. My test environment runs on Fibre Channel SAN storage, with the same LUN size and RAID group created at the storage level. The only difference is VMFS versus raw device mapping (RDM).
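For reference, here is the difference between the two access patterns in code. Again, this is just an illustrative sketch, not my test tool: it reuses the hypothetical test file from the sketch above and compares in-order reads against reads at random offsets.

```python
import os
import random
import time

PATH = "/tmp/testfile.bin"  # placeholder, reuses the test file from the sketch above
CHUNK = 64 * 1024           # 64 KiB per read
OPS = 2000                  # number of read operations per run

size = os.path.getsize(PATH)
seq_offsets = [i * CHUNK for i in range(OPS)]                          # in order
rnd_offsets = [random.randrange(0, size - CHUNK) for _ in range(OPS)]  # scattered

def throughput(offsets):
    """Read CHUNK bytes at each offset and return MB/s."""
    start = time.time()
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(CHUNK)
    return OPS * CHUNK / 1024**2 / (time.time() - start)

# On spinning SAN disks the random run is dominated by seek time, so expect
# it to be far slower. (The OS page cache can mask this; a fair comparison
# drops caches or reboots the guest between runs.)
print(f"sequential: {throughput(seq_offsets):6.1f} MB/s")
print(f"random:     {throughput(rnd_offsets):6.1f} MB/s")
```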

RAID group: 7+1 RAID 5, running in a MetaLUN configuration

Each LUN is 300 GB in size.
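For anyone unfamiliar with the 7+1 notation, that is eight disks with the equivalent of one disk consumed by parity. A quick back-of-the-envelope calculation; the per-disk size below is a made-up figure, since the post only states the LUN size:

```python
disks = 8         # a 7+1 RAID 5 group: 7 data disks plus 1 parity disk
disk_gb = 500     # assumed per-disk size, for illustration only

raw_gb = disks * disk_gb
usable_gb = (disks - 1) * disk_gb  # RAID 5 gives up one disk's worth to parity

print(f"raw: {raw_gb} GB, usable: {usable_gb} GB ({usable_gb / raw_gb:.1%})")
# raw: 4000 GB, usable: 3500 GB (87.5%)
```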