Storage Planning – Virtualization

Posted on September 10th, 2008 in Data Center, Storage, Virtualization | 1 Comment »

Storage is always a critical piece of the data center, as well as of any solution we put into a production environment. It affects capacity planning, performance, and how long a given solution remains sustainable.

Here I would like to talk specifically about ESX storage planning. In many of the cases I have read about and experienced, more and more users are reaching the stage of choosing a specific storage product for a specific solution. For example, you might consider NetApp on NFS for ESX, which claims better performance, deduplication, and a different protocol. Somehow, these users forget that they have never run any NetApp storage in their existing environment. From a management perspective, the day-to-day operations for IT support are NOT simplified. Everyone knows how important it is to standardize and simplify our IT environment: the more complexity you have, the more pain you will feel.

Read more »

VMware VMFS Vs RDM ( Raw Device Mapping )

Posted on August 22nd, 2008 in Virtualization | 3 Comments »

Recently I read a couple of articles with performance comparison charts from VMware, NetApp, and some forum communities, and I found that real-world performance differs considerably from the technical white papers I had read before.

Today, more users are deploying mission-critical, high-I/O servers in virtualized environments, but we often see I/O bottlenecks caused by storage performance. VMDK provides flexibility from a management perspective, but it sacrifices the performance you may require for databases, file transfers, and general disk throughput. I ran a couple of tests with real-world scenarios instead of the widely used Iometer, and here are the summarized results I would like to share.

For disk performance, we always split the workload into two categories: sequential and random I/O. In sequential mode, you will see a huge difference when you perform a file transfer locally or over the network. My test environment runs on Fibre Channel SAN storage, with the same LUN size and RAID group created at the storage level. The only difference is VMFS vs. raw.
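As a rough sketch of the kind of sequential test described above, a simple `dd` run can measure sequential-write throughput. The temp-file path here is only illustrative; on the real hosts you would point it at a file on the VMFS datastore versus the RDM-backed disk to compare the two.

```shell
# Sequential-write check with dd (paths are illustrative placeholders).
# conv=fdatasync forces the data to disk before dd reports its rate,
# so the number reflects storage throughput, not page-cache speed.
TESTFILE=$(mktemp)
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1
# dd prints bytes copied, elapsed time, and MB/s on its stderr line.
wc -c < "$TESTFILE"
rm -f "$TESTFILE"
```

Random I/O needs a dedicated tool (e.g. a benchmark that issues small scattered reads/writes), since `dd` only drives a single sequential stream.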

The RAID group is a 7+1 RAID 5 configuration running as a MetaLUN.

Each LUN is 300GB in size.
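For readers unfamiliar with the 7+1 notation: in RAID 5, one disk's worth of capacity in the group is consumed by parity, so seven of the eight spindles carry data. A quick sketch of the arithmetic, assuming (purely for illustration, since the post only gives the LUN size) 300GB per disk:

```shell
# Usable capacity of a 7+1 RAID 5 group: parity costs one disk's capacity.
DISKS=8
DISK_GB=300                              # assumed per-disk size, illustrative only
USABLE_GB=$(( (DISKS - 1) * DISK_GB ))   # 7 data disks x 300GB
echo "$USABLE_GB"                        # prints 2100
```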
Read more »