Xsigo on ESX 3.5 Proof of Concept

Posted on December 18th, 2008 in Virtualization | 2 Comments »

I recently initiated a proof of concept with the local vendor to integrate Xsigo Virtual I/O into our existing network, Fibre Channel, and ESX 3.5 virtualization environment. I would like to share some of my findings, which may be useful for reference.

Before looking at Xsigo, I was working on a new design to improve the performance and capacity of our existing virtual infrastructure for large-scale virtualization. I was considering Cisco director switches for the Fibre Channel side and Catalyst 6509 network switches with 10Gb modules, but in the current economic downturn the cost of doing that would really kill us. Then a buddy introduced me to this brand new product. After setting up the POC, I found it makes a lot of sense as a way to unlock the under-utilized bandwidth in our data center, on both the network and Fibre Channel sides.


  1. An HCA is cheaper than HBAs: we need two HBAs per server for redundancy, which together provide only 8Gb of bandwidth, while a single HCA provides 20Gb, which is powerful enough for virtualization purposes.
  2. The HCA carries both network and Fibre Channel traffic, reducing cabling requirements and improving bandwidth utilization.
  3. With vNICs and vHBAs, we can carve up and use the available bandwidth while reducing the number of FC and network switches in the data center.
  4. In my environment, I can reduce cabling from 18 connections to 2 HCA connections per ESX host.
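The cabling and bandwidth numbers above can be compared with simple arithmetic. This is a rough sketch using the figures from this post; the count of 16 gigabit NICs is my assumption to account for the 18 cables mentioned:

```python
# Back-of-the-envelope comparison of legacy per-host cabling vs. Xsigo HCAs.
# The 16-NIC count is an assumption chosen to reach the 18 cables cited above.
legacy_cables = 2 + 16                  # 2 FC HBAs + 16 gigabit NICs
legacy_bandwidth_gb = 2 * 4 + 16 * 1    # 2 x 4Gb FC + 16 x 1Gb Ethernet

hca_cables = 2                          # 2 HCA connections per ESX host
hca_bandwidth_gb = 2 * 20               # each HCA provides 20Gb

print(f"legacy: {legacy_cables} cables, {legacy_bandwidth_gb}Gb aggregate")
print(f"xsigo:  {hca_cables} cables, {hca_bandwidth_gb}Gb aggregate")
```

Even with the conservative assumption above, the two HCA cables carry more aggregate bandwidth than the 18 legacy cables they replace.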

I am currently stress-testing the Xsigo chassis with our ESX hosts. Interestingly, it does not sacrifice performance: it performs the same as our previous configuration. It also gives me the opportunity to make better use of the FC ports in our data center, which are currently 95% utilized, without adding FC switches. With a 4Gb FC HBA, nothing except a high-performance database server comes close to saturating the storage bandwidth on each FC port. At the same time, the gigabit network links sit below 10% utilization most of the time, so the freed-up capacity can go to systems that need extra bandwidth in the data center. Xsigo is intelligent enough to manage the bandwidth and assign it as needed.


VMFS LUNs Report

Posted on December 4th, 2008 in Storage, Tips | 1 Comment »

Gabesvirtualworld's post Prevent your LUNs running out of space reminded me about my own VMFS LUN space. Personally, I don't agree with creating a dummy VMDK on each LUN. Why? Arnim, you would still have to wake up at 3am when the calls come in. Just a joke!

Have a look here:
VMware ESX VMFS LUNs Report.
SAN Space
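A minimal sketch of the kind of check behind such a report: flag any VMFS datastore whose free space falls below a threshold, rather than parking a dummy VMDK on it. The datastore names and sizes here are made-up sample data:

```python
# Flag VMFS datastores whose free space falls below a threshold percentage.
# Datastore names and sizes are made-up sample data for illustration.
datastores = {
    "vmfs_lun_01": {"capacity_gb": 300, "free_gb": 45},
    "vmfs_lun_02": {"capacity_gb": 300, "free_gb": 12},
}

def low_space(datastores, threshold_pct=10):
    """Return names of datastores with free space below threshold_pct."""
    return [name for name, ds in datastores.items()
            if ds["free_gb"] / ds["capacity_gb"] * 100 < threshold_pct]

print(low_space(datastores))
```

With the sample numbers, only `vmfs_lun_02` (4% free) trips the default 10% threshold.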


Dell EqualLogic vs. Fibre Channel SAN

Posted on November 14th, 2008 in Hardware, Industry News, Virtualization | 3 Comments »

This year's economic crisis has encouraged users to weigh iSCSI SAN against FC SAN. Data storage growth never stops unless the business stops. To keep the environment growing, IT architects have to provide cost-effective solutions in financially critical times like now.

Performance-wise, we know that EqualLogic iSCSI might not beat a big FC SAN box, but that box can easily double your TCO compared with iSCSI. I would like to share my findings on the features bundled with Dell EqualLogic. On an FC SAN we can achieve the same performance and functionality, but it always requires additional license costs and expensive supporting infrastructure. Today, iSCSI offers more flexibility than FC because it builds on IP technology, which we all understand and deal with every day.

With EqualLogic iSCSI, you are entitled to every feature, bundled together with the storage you purchase.


Best sizing for a single Storage LUN

Posted on September 26th, 2008 in Storage, Tips | No Comments »

A VMware ESX host needs storage available before you can create VMs. Normally the storage is a LUN created on the SAN. The question is: what is the best size for a single LUN (datastore) in ESX?

Our design for a datastore on ESX is a 300GB LUN, allowing a maximum VMFS volume of up to 256GB.

The reasons behind are:

  • Better I/O performance: we assign only 5 VMs or fewer to each datastore, since running more VMs on a single datastore hits the LUN's I/O bottleneck.
  • Better disk utilization: you save a lot of unused space by keeping LUNs small. Say you assign a 2TB datastore that can hold 40 VMs, but you only have 20 now; you are wasting 1TB that sits there doing nothing. If you keep each datastore small, you can create storage only when you need it, which is much more manageable from a SAN disk allocation point of view.


How to design storage LUNs for optimum performance

Posted on September 24th, 2008 in Storage, Tips | 2 Comments »

Previously Craig talked about Storage Planning – Virtualization; this article will cover "How to design storage LUNs for optimum performance" by striping single LUNs into a MetaLUN.

The most important way to speed up a LUN is to increase its spindle count, and this can be achieved by combining a few single LUNs into a MetaLUN.

Suppose you have 3 RAID groups in a 7 + 1 RAID 5 configuration (7 disks joined as a RAID 5 group and 1 as a hot spare). If you create a 60GB single LUN on group 1, you have 7 spindles serving that LUN. But if you create a 20GB single LUN on each of your RAID groups and meta-stripe them together into a 60GB MetaLUN, you have 3 × 7 = 21 spindles serving it (theoretically, we should call this RAID 50).

The same approach applies to RAID 10 LUNs, which then become RAID 100.
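The spindle arithmetic above can be sketched directly. This is a minimal model using the post's 7 + 1 RAID 5 example; real MetaLUN behavior depends on the array:

```python
# Spindle count for a single LUN vs. a striped MetaLUN, per the example above.
DATA_DISKS_PER_GROUP = 7   # 7 + 1 RAID 5: 7 disks in the group, 1 hot spare

def spindles(raid_groups_striped):
    """Spindles serving a LUN striped across this many RAID groups."""
    return raid_groups_striped * DATA_DISKS_PER_GROUP

single_lun = spindles(1)   # 60GB LUN on one group
meta_lun = spindles(3)     # 3 x 20GB single LUNs striped into a 60GB MetaLUN

print(single_lun, meta_lun)
```

Tripling the RAID groups in the stripe triples the spindles behind the same 60GB of capacity, which is where the MetaLUN's speed-up comes from.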