Today, 10G Ethernet is considered common technology, at least for enterprise customers. Many of us are looking at 10G Ethernet from server to switch from an L2 or L3 networking perspective, and the main drivers behind this are virtualization and cloud computing. A few years ago, we came up with designs using multiple gigabit NICs on each ESX host to support NIC teaming, load balancing and so on, which might end up delivering around 14 Gbps of network bandwidth and 8 Gbps of FC bandwidth per host. Each ESX host consolidated 20 to 30 virtual machines. At that time, this may have been the best option available given the technology limitations. Is this design GOOD enough? I will say yes for back then, and NO for what is available today.

Photo of a real environment with 14 gigabit NIC ports and 4 x 2 Gbps FC.

If you are going to adopt virtualization technology in your environment today, you should compare gigabit and 10G Ethernet. Of course, some environments may not require 10G if the number of virtual machines and the I/O requirements are low, and I absolutely agree that we shouldn't over-provision the IT infrastructure beyond what is needed. But if you are looking at medium- and large-scale virtualization and consolidation, I will say that 10G Ethernet is the way to go from now on. 10G Ethernet provides lower latency than traditional gigabit Ethernet, and it eases both physical and virtual network management in a virtualized environment. From an operational standpoint, it simplifies the data center by reducing switch and storage management complexity.

From a VMware architecture perspective, if you are currently in a gigabit environment, you may end up creating multiple vSwitches, multiple DVswitches, or multiple uplinks for different port groups and VLANs. There is no problem with doing this, of course, but when the number of interfaces grows to 18 NICs per server, you will spend hours just plugging in and tracing all the physical cables one by one. As an example for reference, if you have 2 x 10G Ethernet per ESX host, the implementation is much simpler: a single DVswitch carries all the port groups required. You can still maintain the standard best practices and split the port groups across the two interfaces as usual, and you can apply QoS if you wish, although in most scenarios you may leave it at the default. Now you have 20 Gbps versus 18 Gbps, and the entire management of both the physical and virtual sides is much simpler. With Intel Westmere and AMD 12-core technology, you are no longer looking at a 10:1 consolidation ratio per ESX host; I generally see users virtualizing at least 18 to 25 virtual machines per ESX host today. Network I/O becomes more important as the number of virtual machines per host increases.
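To give a feel for how little configuration this design needs, here is a minimal sketch using the pyVmomi SDK that creates one DVswitch backed by two 10G uplinks and puts every port group on it, separated only by VLAN. The switch name, uplink names, port group names and VLAN IDs are my own placeholders, not anything from a specific environment; adapt them to yours.

```python
# Minimal sketch, assuming pyVmomi is installed and that the vCenter address,
# credentials, names and VLAN IDs below are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
datacenter = si.RetrieveContent().rootFolder.childEntity[0]  # first datacenter, for illustration

# One distributed switch backed by the two 10G adapters in each ESX host.
dvs_config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
dvs_config.name = "dvSwitch-10G"
dvs_config.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=["Uplink-10G-A", "Uplink-10G-B"])
dvs_spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=dvs_config)
datacenter.networkFolder.CreateDVS_Task(dvs_spec)

# ... wait for the task to complete, then look up the new switch ...
dvs = next(n for n in datacenter.networkFolder.childEntity
           if isinstance(n, vim.DistributedVirtualSwitch) and n.name == "dvSwitch-10G")

# All port groups live on the same switch; only the VLAN tag differs.
for name, vlan in [("Management", 10), ("vMotion", 20), ("VM-Network", 100)]:
    pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    pg_spec.name = name
    pg_spec.numPorts = 64
    pg_spec.type = vim.dvs.DistributedVirtualPortgroup.PortgroupType.earlyBinding
    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=vlan, inherited=False)
    pg_spec.defaultPortConfig = port_config
    dvs.AddDVPortgroup_Task([pg_spec])

Disconnect(si)
```

The point of the sketch is that the whole host networking layout is one switch object and a handful of VLAN-tagged port groups, instead of per-NIC vSwitches and cabling for 18 interfaces.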

Screenshot of a 2 x 10G Ethernet configuration with a DVswitch.

There are also plenty of storage vendors out there that support 10G Ethernet interfaces for both iSCSI and NFS, meaning you can have your virtual machine network and storage traffic both running over 10G Ethernet.
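As a small illustration of storage riding the same 10G fabric, the sketch below mounts an NFS export as a datastore on one host, again with pyVmomi. The host name, array address and export path are hypothetical; it assumes a connection `si` like the one opened in the earlier example.

```python
# Minimal sketch, assuming an existing pyVmomi connection `si` and a
# hypothetical NFS array at nfs.example.com reachable over the 10G network.
from pyVmomi import vim

host = si.RetrieveContent().searchIndex.FindByDnsName(
    dnsName="esx01.example.com", vmSearch=False)

nas_spec = vim.host.NasVolume.Specification(
    remoteHost="nfs.example.com",       # storage array on the 10G network
    remotePath="/vol/vmfs_datastore",   # NFS export presented by the array
    localPath="datastore-nfs-10g",      # datastore name as seen by vSphere
    accessMode="readWrite")

host.configManager.datastoreSystem.CreateNasDatastore(nas_spec)
```

With the VMkernel port for NFS or iSCSI living on the same DVswitch as the virtual machine port groups, both kinds of traffic share the two 10G uplinks without any extra physical cabling.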