Why ESX Is Not Suitable to Run on Blades

Posted on September 30th, 2008 in Data Center, Hardware | 7 Comments »

First of all, the main reasons the market adopts blade systems are server consolidation, lower power consumption, and a lower hardware TCO compared to 1U, 2U, and 4U servers. When you weigh the case for blades, the comparison is almost always against 1U and 2U x86 servers in the data center. In large-scale deployments, blades let you scale further and spend more sensibly, because you can pack more standalone machines into the limited rack space and power you have in your DC. All of this seems to make moving to blades an obvious choice, BUT it also carries some risks that can become major issues later on.

Before you can use blades, you need a much higher power budget per rack to support roughly 30 to 32 blades in a 42U rack. At the same time, the cooling design in your DC has to be customized to keep the blade chassis running in good condition. Only once you have both of these in place can you start thinking seriously about blades.
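To make the power point concrete, here is a rough back-of-envelope sketch in Python. The per-blade wattage, chassis overhead, and rack budget figures are my own illustrative assumptions, not vendor numbers, so plug in the real values from your blade and facility specifications.

```python
# Rough rack power budget sketch for a fully loaded blade rack.
# All wattage figures are illustrative assumptions; substitute the real
# numbers from your blade vendor and your data center's facility design.

BLADES_PER_RACK = 32            # ~30-32 blades in a 42U rack
WATTS_PER_BLADE = 350           # assumed average draw per blade under load
CHASSIS_OVERHEAD_WATTS = 1500   # assumed fans, management modules, switches
RACK_POWER_BUDGET_WATTS = 6000  # assumed budget of a typical legacy DC rack

blade_load = BLADES_PER_RACK * WATTS_PER_BLADE + CHASSIS_OVERHEAD_WATTS
print(f"Estimated blade rack draw : {blade_load / 1000:.1f} kW")
print(f"Existing rack power budget: {RACK_POWER_BUDGET_WATTS / 1000:.1f} kW")

if blade_load > RACK_POWER_BUDGET_WATTS:
    shortfall = blade_load - RACK_POWER_BUDGET_WATTS
    print(f"Shortfall of {shortfall / 1000:.1f} kW -- power and cooling "
          "must be upgraded before the rack can host blades")
```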

Read more »

ESX & VM Networking Concepts

Posted on September 4th, 2008 in Data Center, Virtualization | 4 Comments »

This topic is specifically about networking concepts in a VM infrastructure. Whenever we discuss virtualization and consolidation, we always think about the number of servers we remove from the data center, the power we save, and similar benefits. However, I see that many users who have already been in production for a little while start to notice performance issues from the network, storage, and server perspectives. Because of these challenges, we start to hear these groups telling customers and users that if you want to run a workload on a VM, you must accept a performance sacrifice.

I strongly disagree with these comments. Most of us know that the point of consolidation and virtualization is not to reduce performance, but to improve the efficiency and utilization of the hardware we purchased. Networking plays a big part in a VM infrastructure, and most of the time it becomes the performance bottleneck for users. Let me walk through an example below.

In one case I saw, an engineer configured ESX on a server with only 2 physical NICs attached to the VM Network interface to connect the VMs to the production network. There were more than 10 VMs on that ESX server, all sharing those two gigabit NICs. In the physical environment, each of those 10 servers had its own dedicated 1 Gb link with no sharing. Now, after virtualization, 10 VMs have to share 2 Gb. Guess what: the users started complaining about slow network file transfers and slow backups over the network, and any peak in bandwidth usage from one VM slowed down everything else sharing the two NICs. The NICs are not the only cause of the performance issues; the uplinks from your access switches to the DC core switches are another thing you need to keep an eye on. No matter how many gigabit connections your servers have, throughput still depends on the total uplink capacity your switches have to route traffic to the DC.
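Here is a minimal sketch of the arithmetic behind this bottleneck. The VM and NIC counts mirror the example above; the access-switch figures (48 gigabit ports, a single 1 Gb uplink) are assumed for illustration only.

```python
# Worst-case bandwidth arithmetic for the example above:
# 10 VMs sharing 2 x 1 Gb NICs, versus 10 physical servers
# that each had a dedicated 1 Gb link.

GBPS_PER_NIC = 1.0
PHYSICAL_NICS = 2   # uplinks on the ESX host's vSwitch
VM_COUNT = 10

dedicated_per_server = 1.0                                  # physical world
shared_per_vm = (GBPS_PER_NIC * PHYSICAL_NICS) / VM_COUNT   # all VMs busy

print(f"Physical server, dedicated link : {dedicated_per_server:.2f} Gb/s")
print(f"VM share when all 10 are busy   : {shared_per_vm:.2f} Gb/s")

# The access-switch uplink matters too: an assumed 48-port gigabit switch
# with a single 1 Gb uplink to the DC core is 48:1 oversubscribed.
ACCESS_PORTS_GBPS = 48 * 1.0   # assumed edge switch port capacity
UPLINK_GBPS = 1.0              # assumed uplink to the DC core
print(f"Switch uplink oversubscription  : {ACCESS_PORTS_GBPS / UPLINK_GBPS:.0f}:1")
```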

In this case, the team could not really pinpoint the performance issue, because the bottleneck does not show up in the performance charts in VirtualCenter. Most of the time, only the network team is able to identify it. It hits hard for the engineers who pushed hard for virtualization, because in the end it turned into a performance sacrifice for the customers. I would not want this to happen to me, given that we have invested in SAN storage and high-capacity servers, which are not a cheap solution.
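One way to surface this kind of bottleneck yourself is to add up what all the VMs on a host are pushing and compare that with the host's physical uplink capacity, instead of looking at each VM's chart in isolation. Here is a minimal sketch of that idea; the per-VM throughput numbers and VM names are made-up sample data, and in a real environment you would feed in the transmit/receive rates collected by your monitoring tool.

```python
# Sketch: detect host-level NIC saturation that per-VM charts can hide.
# The per-VM throughput numbers are made-up sample data; in practice you
# would feed in rates exported from your monitoring/performance tool.

HOST_UPLINK_GBPS = 2.0   # 2 x 1 Gb physical NICs on the ESX host

vm_throughput_gbps = {   # sample per-VM network throughput (Gb/s)
    "file-server": 0.45,
    "backup-proxy": 0.80,
    "web-01": 0.20,
    "web-02": 0.25,
    "db-01": 0.35,
}

total = sum(vm_throughput_gbps.values())
utilisation = total / HOST_UPLINK_GBPS

print(f"Aggregate VM traffic : {total:.2f} Gb/s on {HOST_UPLINK_GBPS:.0f} Gb/s of uplinks")
print(f"Uplink utilisation   : {utilisation:.0%}")

if utilisation > 0.8:
    print("Host NICs are close to saturation -- no single VM's chart will show this.")
```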

Read more »