VMware High Availability Behavior

Posted on March 6th, 2011 in Tips, Virtualization, vSphere | No Comments »

This post shares an incident that happened in a previous deployment. There were two types of network adapters running on the physical host, NetXen and Broadcom in this case. The Broadcom adapters come with TOE (TCP Offload Engine), so we had configured the VMkernel ports on those adapters to handle the IP-based datastore traffic for the ESX host.

At one point the ESX host became isolated and unresponsive on the network due to a NetXen driver failure, although we were still able to type commands from the server console. While this was happening, because the isolation response option was set to "Leave powered on", VMware HA did not kick in to force the virtual machines through a failover.

There are pros and cons to this setting, but according to a local VMware contact, the surviving hosts detected that the file locks on the affected virtual machines were still held on the datastore, so HA would not allow them to take over those virtual machines. That explanation matches our incident exactly, since the VMkernel traffic was still alive while the host network outage hit the NetXen NICs.
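
For reference, the isolation response that drove this behaviour can be checked from vCenter programmatically. The block below is only a minimal sketch using the pyVmomi SDK; the vCenter address, credentials and the cluster name "Prod-Cluster" are made-up placeholders, not values from the incident above.

```python
# Minimal pyVmomi sketch: print the HA isolation response configured on a
# cluster. The vCenter address, credentials and the cluster name
# 'Prod-Cluster' are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        if cluster.name != 'Prod-Cluster':
            continue
        das = cluster.configurationEx.dasConfig
        print('HA enabled:         %s' % das.enabled)
        if das.defaultVmSettings:
            # 'none' is the API value behind "Leave powered on" in the client,
            # the setting that kept HA from restarting the VMs here.
            print('Isolation response: %s' % das.defaultVmSettings.isolationResponse)
    view.Destroy()
finally:
    Disconnect(si)
```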

Read more »

VMware ESXi 4.1 HA Warning Message

Posted on September 9th, 2010 in ESXi, Tips, vSphere | 11 Comments »

I was playing around with VMware vSphere ESXi 4.1 recently and found that the warning message "HA initiated a failover action in cluster your-cluster-name in datacenter your-datacenter-name" kept showing up in my vCenter. Nothing had been changed, HA was working like a charm, no VMs had been rebooted, and the warning message was just annoying.
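
One way to confirm that the warning really is spurious is to pull the recent events recorded against the cluster and check that no actual restarts took place. This is only a rough sketch using the pyVmomi SDK, not the fix itself; the vCenter address, credentials and the cluster name "Prod-Cluster" are made-up placeholders.

```python
# Rough pyVmomi sketch: dump the last 24 hours of events recorded against a
# cluster, so the spurious HA warning can be compared with what actually
# happened. Address, credentials and cluster name are placeholders.
import ssl
from datetime import timedelta
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == 'Prod-Cluster')
    view.Destroy()

    spec = vim.event.EventFilterSpec(
        entity=vim.event.EventFilterSpec.ByEntity(entity=cluster, recursion='all'),
        time=vim.event.EventFilterSpec.ByTime(beginTime=si.CurrentTime() - timedelta(days=1)))
    for event in content.eventManager.QueryEvents(spec):
        print('%s  %s' % (event.createdTime, event.fullFormattedMessage))
finally:
    Disconnect(si)
```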

The solution is pretty simple.
Read more »

High Consolidation Ratio in Virtualization

Posted on March 21st, 2010 in Operating Systems, Server, Virtualization | 5 Comments »

Recently I have gone through a lot of posts on the internet, as well as some discussions with people I have met, and some of them are concerned about packing an increasing number of virtual machines onto a single physical host, which they see as putting too many eggs in one basket. They could be right to a certain extent, but I would not say they are absolutely correct. They are forgetting a few things about how IT used to run before virtualization came to the market with all the capabilities it has demonstrated versus traditional physical systems.

You may have 30 to 50 VMs on a single host today thanks to high-density servers with more CPU cores and more memory per system. The next moment, one of those hosts may suffer a hardware failure, around 50 VMs go down at once, and it takes another 20 minutes or so before all the virtual machines are successfully restarted on the surviving hosts. Some people consider this too high an impact, so they decide to restrict the number of virtual machines per host to around 10 to 20 VMs per ESX. What happens next is that the TCO goes up and the ROI is no longer efficient. It is a tough choice for most administrators in this scenario.

I would urge you to step back a little and look at the scenario again. Before virtualization, business systems that were only deployed on standalone servers without physical clustering had no HA at all, as far as anyone was concerned. If they wanted HA on a physical system, they had to invest extra CAPEX and OPEX to maintain an identical set of hardware and operating system just for failover purposes. And even operating system clustering does not provide 100% uptime.

Read more »

Virtualization on Blade

Posted on June 29th, 2009 in Data Center, Hardware, Tips, Virtualization | No Comments »

We see the market growing more aggressive about consolidation in the data center, for both physical and virtual servers. There are plenty of solutions that allow blades to support virtualization today, such as Virtual Connect from HP, pass-through modules, InfiniBand integration from Xsigo, Cisco UCS and so on. These have largely resolved the number of I/O interfaces required per blade to host a virtualization host. CPU and memory per blade have also increased significantly with the latest releases from all the major server vendors, so CPU, memory and disk I/O are no longer the main concerns for virtualization.

Read more »

VMotion Compatibility from ESX 3.5 to vSphere 4

Posted on June 4th, 2009 in Tips, vCenter, Virtualization, vSphere | 1 Comment »

I am currently doing some tests to simulate the real upgrade requirements for my production VMware farm, which will be upgraded from ESX 3.5 to vSphere 4. We want to do this with as little impact on our production systems as possible. Because of the virtual hardware version and VMware Tools upgrades required, it is pretty tough for us to perform the entire upgrade in one pass. Our plan is therefore to upgrade the hosts first, then upgrade each individual virtual machine at a time that suits the different business units. HA and DRS will need to be disabled temporarily during the ESX upgrade.
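
As a side note, that "disable HA and DRS temporarily" step can be scripted against vCenter. The block below is only a sketch using the pyVmomi SDK; the vCenter address, credentials and the cluster name "Prod-Cluster" are made-up placeholders, and running the same call with enabled=True restores the settings after the host upgrade.

```python
# Sketch of the "disable HA and DRS temporarily" step using pyVmomi.
# vCenter address, credentials and cluster name are placeholders; re-run
# with enabled=True after the host upgrade to put the settings back.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == 'Prod-Cluster')
    view.Destroy()

    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(enabled=False),   # HA off
        drsConfig=vim.cluster.DrsConfigInfo(enabled=False))   # DRS off
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
finally:
    Disconnect(si)
```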

I was able to get the ESX 3.5 hosts managed by the latest vCenter. A vMotion from ESX 3.5 to vSphere 4 was successful too, but virtual machines built on vSphere 4 with the latest virtual hardware version may not be compatible for vMotion back to the ESX 3.5 hosts. At the same time, if you have different processor generations in the environment and need EVC to be turned on, that can be a bit of a challenge. You may want to make sure EVC is enabled with no downtime; refer to my previous post about how to enable EVC without downtime.
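
To keep track of which virtual machines can still vMotion back to the 3.5 hosts, it helps to report each VM's virtual hardware version (vmx-04 is still compatible with ESX 3.5, while vmx-07 is vSphere 4 only). Below is a small sketch using the pyVmomi SDK; the vCenter address and credentials are placeholders for illustration.

```python
# Small pyVmomi sketch: report each VM's virtual hardware version so it is
# clear which VMs (vmx-04) can still vMotion back to the ESX 3.5 hosts and
# which (vmx-07) are now tied to vSphere 4. Address and credentials are
# placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        hw_version = vm.config.version if vm.config else 'unknown'
        tools = vm.guest.toolsVersionStatus if vm.guest else 'unknown'
        print('%-40s %-8s %s' % (vm.name, hw_version, tools))
    view.Destroy()
finally:
    Disconnect(si)
```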

Read more »