VMware DRS disabled by a single VM

Posted on June 8th, 2011 in Tips, Virtualization | No Comments »

My client recently hit a weird issue on one of their production vSphere 4.1 hosts. The Summary tab showed an error along the lines of "DRS is not able to function normally due to insufficient resources". That is not the exact wording, as I can't remember the full message, but it carries the same meaning. The alert had been sitting there for more than a week, and we expected that restarting the management agents (the mgmt-vmware service) should fix it. Since the host is in production, we suggested vMotioning all the production VMs off and putting the host into maintenance mode before issuing the command.
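
If restarting the management agents is indeed the fix, the plan looks roughly like the commands below, run from the service console of a classic ESX 4.1 host (on ESXi the agents are restarted differently, and the maintenance mode steps can also be done from the vSphere Client). Treat this as a sketch of the intended procedure rather than the exact steps we ran:

    # put the evacuated host into maintenance mode
    vmware-vim-cmd hostsvc/maintenance_mode_enter

    # restart the host management agents (hostd and the vCenter agent)
    service mgmt-vmware restart
    service vmware-vpxa restart

    # bring the host back once it reconnects to vCenter cleanly
    vmware-vim-cmd hostsvc/maintenance_mode_exit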

Read more »

High Consolidation Ratio in Virtualization

Posted on March 21st, 2010 in Operating Systems, Server, Virtualization | 5 Comments »

Recently I have gone through a lot of posts on the internet, as well as discussions with people I've met, and some of them are concerned about the growing number of virtual machines on a single physical host, which they see as putting too many eggs in one basket. They may be right to a certain extent, but I would not say they are absolutely correct. There are a few things they are forgetting about how IT used to run before virtualization came to market with all the capabilities it demonstrated versus traditional physical systems.

You may have 30 to 50 VMs on a single host today, thanks to high-density servers with more CPU cores and more memory per system. Then one day a host suffers a hardware failure, around 50 VMs go down at once, and it takes another 20 minutes or so before all of them are successfully restarted on the surviving hosts. Some people consider this too high an impact, so they decide to restrict the count to around 10 to 20 VMs per ESX host. What happens next is that the TCO goes up and the ROI suffers. It is a tough trade-off for most administrators.

I would urge you to step back and look at the scenario again. Before virtualization, business systems deployed on standalone servers without physical clustering had no HA at all. If they wanted HA on a physical system, they had to invest extra CAPEX and OPEX to maintain an identical set of hardware and operating system just for failover purposes. And even operating system clustering does not provide 100% uptime.
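
To put some very rough numbers on it (purely illustrative, assuming a host fails about once a year and HA needs around 20 minutes to restart everything): each VM on that host would suffer roughly 20 minutes of unplanned downtime per year, which works out to about 99.996% availability. Very few standalone physical servers without clustering ever came close to that.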

Read more »

Virtualization on Blade

Posted on June 29th, 2009 in Data Center, Hardware, Tips, Virtualization | No Comments »

We continue to see aggressive growth in data center consolidation, for both physical and virtual servers. There are plenty of solutions today that allow blades to support virtualization, such as Virtual Connect from HP, pass-through modules, InfiniBand integration from Xsigo, Cisco UCS, and so on. These have largely resolved the I/O interface requirements per blade for hosting a virtualization host. CPU and memory per blade have also increased significantly with the latest releases from all the major server vendors, so CPU, memory and disk I/O are no longer the main concerns for virtualization.

Read more »

VMotion compatibility from ESX 3.5 to vSphere 4

Posted on June 4th, 2009 in Tips, vCenter, Virtualization, vSphere | 1 Comment »

I am currently running some tests to simulate the real upgrade of my production VMware farm from ESX 3.5 to vSphere 4, with the goal of keeping the impact on our production systems as small as possible. Because of the virtual hardware version and VMware Tools upgrades required, it is pretty tough for us to perform the entire upgrade in one pass. Our plan is therefore to upgrade the hosts first, and then upgrade each individual virtual machine at a time that suits the different business units. HA and DRS will need to be disabled temporarily during the ESX upgrade.

I was able to get the ESX 3.5 hosts managed by the latest vCenter. A VMotion from ESX 3.5 to vSphere 4 was successful too, but virtual machines built on the newer virtual hardware version from vSphere 4 may not be compatible to VMotion back to the ESX 3.5 hosts. At the same time, if you have different processor generations in the environment and need EVC to be turned on, it can be a little challenging to do so. You will probably want to enable EVC with no downtime; you may want to refer to my previous post about how to do that.
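
One quick way to check which virtual hardware version a VM is running, and therefore whether it can still VMotion back to an ESX 3.5 host, is to look at its .vmx file from the service console. The datastore and VM names below are only placeholders:

    # ESX 3.5 virtual machines report version 4; VMs upgraded on vSphere 4 report version 7
    grep -i "virtualHW.version" /vmfs/volumes/datastore1/myvm/myvm.vmx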

Read more »

Create VMFS with High Availability and VMotion on a local hard drive

Posted on February 28th, 2009 in Virtualization | No Comments »

I just read an interesting article and watched the demo video for StorMagic SvSAN, a product that gives us the flexibility to use the local hard drives in our machines as shared storage. As we know, 1TB SAS HDDs are available on the market today, yet most of the ESX servers we run make little use of local storage, because HA, DRS and VMotion between ESX servers require shared storage. The video shows the flexibility and the opportunity to fully utilize the ESX servers we already have. I am interested in this product and the idea behind it, and I am downloading it for a try now.
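
As general background (and not specific to how SvSAN works), creating a VMFS datastore on a local disk from the ESX service console looks something like the command below; the label and device path are placeholders, so substitute your own local disk partition:

    # create a VMFS3 datastore labelled "local_vmfs" on a local disk partition
    vmkfstools -C vmfs3 -S local_vmfs /vmfs/devices/disks/vmhba1:0:0:1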

A more detailed review will be published after my testing. If this product works as advertised, we should be able to save some money in cases that do not really need a big SAN box just to get the HA, DRS and VMotion features. Stay tuned.

Read more »