Archive for the ‘Storage’ Category

Storage vMotion with VAAI on EMC

Posted on August 16th, 2010 in Storage, Tips, vCenter, Virtualization, vSphere | No Comments »

In one of the demo videos, I saw Storage vMotion performance improve significantly: the operation completed in as much as 25% less time, and storage processor overhead dropped by roughly 20%, which in turn offloads work from the ESX host during the Storage vMotion operation. Previously, as you may be aware, a Storage vMotion would usually take a long time and consume a lot of resources on both the host and the storage array. With the vStorage APIs for Array Integration (VAAI), the copy operation can be offloaded to the storage array directly, reducing the overhead on the ESX host itself.

Here is the option to turn VAAI on or off: a value of 0 means off, and 1 means on.
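As a sketch, the setting can be checked and toggled from the ESX service console with the advanced configuration options below (option paths assume ESX/ESXi 4.1; adjust to your build):

```shell
# Check the current VAAI full-copy setting (1 = on, 0 = off)
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove

# Turn hardware-accelerated move off (e.g. for troubleshooting)
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove

# Turn it back on
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove
```

The other VAAI primitives are controlled the same way via /DataMover/HardwareAcceleratedInit (block zeroing) and /VMFS3/HardwareAcceleratedLocking (hardware-assisted locking).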

Read more »

VSI Plug-in for vCenter with EMC storage

Posted on August 11th, 2010 in Storage, Tips, vSphere | 1 Comment »

The new VSI (Virtual Storage Integrator) plug-in from EMC allows VMware administrators to self-service and manage storage, and it provides transparency and visibility into the back-end SAN storage in the vSphere infrastructure. This transforms the traditional way VMware administrators have managed and operated the virtual infrastructure. Most of us would agree that SAN configuration details used to be invisible to us, owing to the lack of visibility in the management console; if you wanted to verify something on the SAN, you needed assistance from the SAN administrator because of restricted access to the array. Now, with the new VSI plug-in, the necessary configuration information is available to the VMware administrator directly, without having to engage the SAN administrator to verify minor details. The VSI plug-in supports the full EMC product range, including CLARiiON, Celerra and Symmetrix.


The screenshot below shows the plug-in integrated directly into the vSphere Client.

Read more »

Surprising Finding on ESX Hosts after a SAN Switch Outage

Posted on July 1st, 2010 in Server, Storage, Virtualization, vSphere | 2 Comments »

I was busy setting up the demo solution for the Cisco summit yesterday. The demo showcased VMware, Cisco UCS, Nexus 5000, MDS 9124 and NetApp storage. One surprising thing happened during the setup: the power source for our MDS 9124 tripped during the installation. In this scenario, all the SAN connections from the ESX hosts and VMs were lost. It took us 25 minutes to recover from the power failure, after which the MDS switch came back online. I expected to have to reboot all the ESX hosts, since every host we set up was booting from SAN. Here came the surprise: all the ESX hosts were still running. I ran the uptime command and checked the system uptime from vCenter, and both confirmed that the ESX hosts had not rebooted during the SAN connection drop between UCS and our NetApp FAS storage.
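The checks described above can be sketched from the ESX service console as follows (the adapter name vmhba1 is an example; substitute your own HBA):

```shell
# Confirm the host never rebooted during the outage
uptime

# List storage paths and their states; after the switch comes back,
# previously dead FC paths should return to an active state
esxcfg-mpath -l

# Rescan the HBA if any paths are still marked dead
esxcfg-rescan vmhba1
```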

I further checked the virtual machines powered on across the ESX servers, and all the VMs were still running without crashing or rebooting. I now realize that a SAN switch failure does not necessarily result in a system crash or hang; in fact, the systems may resume their state once the SAN switch is back online. Of course, this is by no means guaranteed; it is just a surprising finding from yesterday that I wanted to share here. Read more »

Virtualization Design with Gigabit vs 10G Ethernet

Posted on May 12th, 2010 in Hardware, Industry News, Server, Storage, Virtualization | No Comments »

Today, 10 Gigabit Ethernet is considered common technology, at least for enterprise customers. Many of us are looking at 10G Ethernet from server to switch from an L2 or L3 networking perspective, driven mainly by virtualization and cloud computing. A few years ago, we designed ESX hosts with multiple Gigabit NICs to support NIC teaming, load balancing and so on, which could deliver around 14 Gbps of network bandwidth and 8 Gbps of FC bandwidth per host, with each host consolidating 20 to 30 virtual machines. At the time, this was probably the best option available given the technology limitations. Is this design good enough? I would say yes for back then, but no given the technology available today.
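The aggregate bandwidth comparison works out as follows (a simple sketch; the NIC and HBA counts match the environment described above, and the two-port 10G design is an assumed converged alternative):

```shell
# Legacy design: 14 x 1 GbE NICs plus 4 x 2 Gbps FC HBA ports
legacy_lan=$((14 * 1))    # 14 Gbps of Ethernet bandwidth
legacy_fc=$((4 * 2))      # 8 Gbps of Fibre Channel bandwidth

# 10G design: 2 x 10 GbE converged adapters carry both LAN and storage
tengig_total=$((2 * 10))  # 20 Gbps over just 2 ports

echo "Legacy: ${legacy_lan} Gbps LAN + ${legacy_fc} Gbps FC across 18 ports"
echo "10G:    ${tengig_total} Gbps converged across 2 ports"
```

Beyond raw bandwidth, the 10G design cuts cabling and switch ports from 18 per host to 2, which is where much of the operational saving comes from.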

Photo of the actual environment, which has 14 Gigabit NIC ports and 4 x 2 Gbps FC ports.

Read more »

Storage vMotion with Thin Provisioning

Posted on April 20th, 2010 in Storage, Tips, vCenter, Virtualization, vSphere | No Comments »

Here is an interesting finding to share from the last migration I did. I performed a cold migration of all the virtual machines in production. Before we migrated to the new SAN storage, all the virtual machines were running with thin provisioning enabled under vSphere 4. During the storage migration process, you need to choose a disk format: same as source, thin provisioned, or thick. I chose the "same as source" option and performed the Storage vMotion. After the Storage vMotion completed, I realized that the virtual machines no longer had thin provisioning enabled.
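One quick way to check from the service console whether a disk really is thin after migration is to compare the provisioned size with the space actually consumed on the VMFS volume (the datastore and VM names below are examples; substitute your own):

```shell
# Example paths; substitute your own datastore and VM name
cd /vmfs/volumes/datastore1/myvm

# Logical (provisioned) size of the virtual disk
ls -lh myvm-flat.vmdk

# Actual space consumed on the VMFS volume; for a thin-provisioned
# disk this is smaller than the provisioned size above
du -h myvm-flat.vmdk
```

If the two figures match, the disk has been inflated to thick format during the migration.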

Read more »