IBM x3950 M2: A Powerful ESX Machine

Posted on November 27th, 2008 in Data Center, Hardware, Virtualization | 2 Comments »

A while ago, I spoke to my team and friends about an interesting idea: could we use the HPC concept to scale our server hardware in terms of processors, memory and I/O without increasing the number of ESX servers in the virtual environment? I remember raising the question with our VMware representative, and it was not on the roadmap for ESX. Two days ago I attended a seminar in town and came across an interesting server, the IBM x3950 M2, presented with exactly the capabilities I had been looking for. Some of my friends may think the idea is crazy or over the limit, but in a real scenario you will appreciate the benefits in terms of management and scaling over time when you run a truly large environment with a massive number of virtual machines.

Read more »

Virtual I/O Xsigo

Posted on November 20th, 2008 in Hardware, Industry News, Storage | 5 Comments »

Virtual I/O is not new to the IT market, and I was recently introduced to Xsigo by a friend.

Many enterprises that have implemented virtual environments to consolidate servers and centralize management may be facing I/O bottlenecks. No matter how many servers, switches, SAN storage arrays or operating systems you have, all traffic still has to go through the I/O devices.

Read more »

System, Storage and I/O compatibility for ESX 3i and ESX 3.5

Posted on August 22nd, 2008 in Virtualization | No Comments »

Here are the VMware system, storage and I/O compatibility and reference guides for ESX 3i and ESX 3.5:

http://www.vmware.com/pdf/vi35_systems_guide.pdf

http://www.vmware.com/pdf/vi35_io_guide.pdf

http://www.vmware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_compat_matrix.pdf

http://www.vmware.com/files/pdf/dmz_virtualization_vmware_infra_wp.pdf
Read more »

VMware VMFS vs. RDM (Raw Device Mapping)

Posted on August 22nd, 2008 in Virtualization | 3 Comments »

Recently I read a couple of articles with performance comparison charts from VMware, NetApp and some forum communities, and I found that real-world performance differs quite a bit from the technical white papers I had read before.

Today, more users are deploying mission-critical and high-I/O servers in virtualized environments, but we often see I/O bottlenecks caused by storage performance. VMDK provides flexibility from a management perspective, but it can sacrifice the performance you may need for databases, file transfers and general disk I/O. I ran a couple of tests with real-world scenarios instead of the widely used Iometer, and here is a summary of the results I would like to share.

Disk performance is usually split into two categories: sequential and random I/O. In sequential mode, you will see a big difference when you perform file transfers locally or over the network. My test environment runs on Fibre Channel SAN storage with the same LUN size and RAID group created at the storage level; the only difference is VMFS vs. raw (RDM).

RAID group design: 7+1 in a RAID 5 configuration, running on a MetaLUN

Each LUN is 300 GB
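For readers who want to try something similar, here is a minimal Python sketch of the kind of sequential vs. random read comparison I am talking about; it is not the exact tool I used. The file path, file size and block size are assumptions for illustration. On a real run you would place the test file on the VMFS datastore or RDM LUN under test and use a file larger than host memory so the filesystem cache does not mask the storage.

```python
# Rough sketch: compare sequential vs. random read throughput on a test file
# that lives on the datastore or LUN under test. Path and sizes are assumptions.
import os
import random
import time

TEST_FILE = "/vmfs/volumes/test_lun/testfile.bin"  # hypothetical path on the LUN under test
FILE_SIZE = 1 * 1024 ** 3       # 1 GB test file (use something larger than RAM in a real test)
BLOCK_SIZE = 64 * 1024          # 64 KB per read
NUM_RANDOM_READS = 4096         # number of random, block-aligned reads to sample


def prepare_file():
    """Create the test file once, as a sparse file of the target size."""
    with open(TEST_FILE, "wb") as f:
        f.truncate(FILE_SIZE)


def sequential_read_mb_s():
    """Read the whole file front to back and return throughput in MB/s."""
    start = time.time()
    total = 0
    with open(TEST_FILE, "rb") as f:
        while True:
            chunk = f.read(BLOCK_SIZE)
            if not chunk:
                break
            total += len(chunk)
    return total / (1024 ** 2) / (time.time() - start)


def random_read_mb_s():
    """Seek to random block-aligned offsets, read one block each, return MB/s."""
    offsets = [random.randrange(0, FILE_SIZE // BLOCK_SIZE) * BLOCK_SIZE
               for _ in range(NUM_RANDOM_READS)]
    start = time.time()
    total = 0
    with open(TEST_FILE, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(BLOCK_SIZE))
    return total / (1024 ** 2) / (time.time() - start)


if __name__ == "__main__":
    prepare_file()
    print("sequential: %.1f MB/s" % sequential_read_mb_s())
    print("random:     %.1f MB/s" % random_read_mb_s())
```

Running the same script against a VM disk on VMFS and against an RDM LUN gives a like-for-like comparison, since the RAID group and LUN size are identical and only the mapping layer changes.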
Read more »