Match RDM to Actual LUN on SAN Storage with vSphere

Posted on March 23rd, 2009 in Storage, vSphere | 2 Comments »

Matching a Raw Device Mapping in VMware to its actual physical LUN on SAN storage has always been a challenge. On our current ESX 3.5 U3 hosts, we manage our RDMs by relying on the LUN name presented in the management console of our EMC storage, and on the LUN ID published in the vCenter management interface. In vCenter, each of the LUNs presented to an ESX server is given a unique LUN ID, and these LUN IDs can be matched against the Host IDs shown in the EMC Navisphere web management GUI. At the same time, we have renamed each LUN to match the virtual machine or ESX hosts connected to it, for tracking and management purposes. This allows us to keep track of every LUN assigned to our Virtual Infrastructure.
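The matching step above can be sketched in a few lines of Python. This is only an illustration of the bookkeeping, not output from any VMware or EMC API; the device names and LUN names below are made-up sample data.

```python
# Hypothetical sketch: cross-referencing the LUN IDs shown in vCenter
# with the Host IDs / renamed LUNs shown in the Navisphere GUI.
# All values here are invented sample data for illustration only.

# LUN IDs as presented to the ESX host in vCenter
vcenter_luns = {0: "vmhba1:0:0", 5: "vmhba1:0:5", 12: "vmhba1:0:12"}

# Host IDs and renamed LUNs from the Navisphere management GUI
navisphere_luns = {0: "ESX01_BOOT", 5: "SQLVM01_DATA", 12: "FILESRV_RDM"}

# Match each vCenter LUN ID to its renamed LUN on the array
for lun_id, device in sorted(vcenter_luns.items()):
    name = navisphere_luns.get(lun_id, "<unmatched>")
    print(f"LUN {lun_id}: {device} -> {name}")
```

Because the LUN ID is the common key on both sides, a rename on the array side never breaks the mapping, which is why tracking by LUN ID plus a naming convention works.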

In vSphere, the next version of ESX server, VMware has added a new feature that lets you rename the device name of each LUN presented to the ESX hosts. This provides another way to keep track of the physical LUNs presented to ESX hosts and VMs.

Read more »

Calculation of Max LUN Supported in ESX Server

Posted on February 20th, 2009 in Tips, Virtualization | 5 Comments »

I found that my ESX servers could not discover the 65th LUN I tried to present to them. I have logged a support call and am still waiting for a reply from VMware. In the meantime, I found an interesting article with the details below.

Article Copy from VMware

In Multipathing Configurations the Number of Paths Per LUN Is Inconsistent
The hpsa driver in ESX Server might reduce the number of supportable LUNs below the expected maximum limit of 256 when the controller is used in multipath configurations. In multipath configurations, if all four paths are configured, the total supportable LUNs is reduced to 64. In certain multipath configurations, because each target path consumes an available LUN slot, the total number of supportable LUNs might be reduced to 60.
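The numbers in the article are simple slot arithmetic: 256 LUN slots divided among the configured paths, less any slots consumed by target paths. A minimal sketch of that calculation (the exact slot accounting depends on the hpsa driver; the `slots_lost_to_targets` figure below is an assumed example, not a documented value):

```python
# Back-of-envelope check of the figures quoted from the VMware article.
MAX_SLOTS = 256  # expected maximum number of LUNs per host

def supportable_luns(paths_per_lun, slots_lost_to_targets=0):
    """Rough estimate: available slots divided among the paths per LUN,
    after subtracting slots consumed by the target paths themselves."""
    return (MAX_SLOTS - slots_lost_to_targets) // paths_per_lun

print(supportable_luns(4))      # 64  -> matches the 4-path figure
print(supportable_luns(4, 16))  # 60  -> if target paths consume 16 slots
```

This also explains the behaviour in the post above: with four paths configured, the 65th LUN falls outside the 64 supportable LUNs.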

Read more »

VMFS LUNs Report

Posted on December 4th, 2008 in Storage, Tips | 1 Comment »

Gabesvirtualworld's post Prevent your LUNs running out of space reminded me about my own VMFS LUN space. Personally, I don't agree with creating a dummy vmdk on each LUN. Why? Arnim, you would still have to wake up at 3am if you received any calls. Just a joke!

Have a look here:
VMware ESX VMFS LUNs Report.
San Space

Read more »

Best sizing for single Storage LUN

Posted on September 26th, 2008 in Storage, Tips | No Comments »

A VMware ESX host needs storage available before you can create VMs. Normally that storage is a LUN created on the SAN. The question is: what is the best size for a single LUN (datastore) in ESX?

Our design for a datastore on ESX is a 300GB LUN, allowing a VMFS volume of up to 256GB.

The reasons behind are:

  • Better I/O performance: we assign no more than 5 VMs to each datastore, because running more VMs on a single datastore will hit the LUN's I/O bottleneck.
  • Better disk utilization: you save a lot of unused space by keeping LUNs small. Say you assign a 2TB datastore that can hold 40 VMs but you only have 20 now; you are wasting 1TB that just sits there doing nothing. If you keep each datastore small instead, you create them only when you need them, which is much more manageable from a SAN disk-allocation point of view.
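The utilization argument above works out like this (using the sample figures from the post, and assuming roughly 50GB per VM so that 40 VMs fill 2TB):

```python
# Disk-utilisation comparison: one big up-front LUN vs small LUNs on demand.
vm_size_gb = 50          # assumed: ~50 GB per VM (2 TB / 40 VMs)
vms_now = 20

# Option 1: carve a single 2 TB datastore up front
big_lun_gb = 2048
used_gb = vms_now * vm_size_gb
print(f"idle space on the big LUN: {big_lun_gb - used_gb} GB")  # ~1 TB idle

# Option 2: 300 GB LUNs holding at most 5 VMs each, created as needed
small_lun_gb = 300
luns_needed = -(-vms_now // 5)   # ceiling division -> 4 LUNs
print(f"allocated with small LUNs: {luns_needed * small_lun_gb} GB")
```

With small LUNs you allocate roughly 1.2TB for the same 20 VMs instead of 2TB, and the remaining SAN capacity stays free for other uses.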

Read more »

How to design Storage LUNs for optimum performance

Posted on September 24th, 2008 in Storage, Tips | 2 Comments »

Previously Craig talked about Storage Planning – Virtualization; the following article shares how to design Storage LUNs for optimum performance by striping single LUNs into a MetaLUN.

The most important factor in speeding up a LUN is increasing the spindle count behind it, and this can be achieved by striping a few single LUNs together into a MetaLUN.

If you have 3 RAID Groups in a 7 + 1 RAID 5 configuration (7 disks joined as a RAID 5 group and 1 as a hot spare), and you create a 60GB single LUN on Group 1, you have 7 spindles running this LUN. But if you create a 20GB single LUN on each of your RAID Groups and Meta-stripe them together into a 60GB MetaLUN, you have 3 × 7 = 21 spindles running this LUN (theoretically we could call this RAID 50).

The same approach applies to RAID 10 LUNs too; the result is then effectively RAID 100.
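The spindle arithmetic above can be sketched as:

```python
# Spindle count for the MetaLUN example: three 7+1 RAID 5 groups,
# one component LUN per group, striped together into one MetaLUN.
data_disks_per_group = 7     # 7 disks in the RAID 5 group (1 hot spare aside)
raid_groups = 3

single_lun_spindles = data_disks_per_group               # LUN on one group
meta_lun_spindles = raid_groups * data_disks_per_group   # striped MetaLUN

print(single_lun_spindles)   # 7 spindles behind a single LUN
print(meta_lun_spindles)     # 21 spindles behind the striped MetaLUN
```

Three times the spindles means roughly three times the disks servicing each I/O stripe, which is where the MetaLUN's performance gain comes from.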
Read more »