10Gbit Ethernet: Killing Another Bottleneck?
by Johan De Gelas on March 8, 2010 12:00 PM EST - Posted in IT Computing
The Hardware
As always, we worked with the hardware we had available in our labs.
The Neterion Xframe-E is neither Neterion’s latest nor its greatest. It came out in 2008 and was one of the first PCI Express NICs to support VMware’s NetQueue feature in ESX 3.5, so we felt this pioneer in network virtualization should be included. The latest Neterion NICs are the X3100 series, but those cards were not available to us. The Xframe-E has eight transmit and eight receive queues, TCP checksum offload, and TCP segmentation send/receive offload. The card uses a PCIe 1.1 x8 connector, and typical power consumption is around 12W. More info here. Our Neterion Xframe-E used optical SR multi-mode fiber, but a CX4 version is also available.
The Supermicro AOC-STG-I2 also first became available in 2008, but we got the version sold in 2009. It is based on the Intel 82598EB chip. It supports checksum offloading and even iSCSI booting, and sixteen virtual queues are available. Power consumption should not be higher than 6.5W. More info here.
Native performance
We first tested on CentOS 5.4, a Linux distribution based on the 2.6.18 kernel. The goal was to understand whether 10Gbit cards make sense for non-virtualized platforms.
The Neterion needed 6% of our eight 2.9GHz Opteron cores; the Intel chip on our Supermicro card needed only 2.5%. Despite these low CPU percentages, neither card reaches even half of its potential bandwidth. These rather expensive cards do not look very attractive next to a simple quad-port 1Gbit/s Ethernet card.
Comments
fredsky - Tuesday, March 9, 2010 - link
We do use 10GbE at work, and I spent a long time finding the right solution:
- CX4 is outdated: huge cables, short reach, power hungry
- XFP is also outdated, and fiber only
- SFP+ is THE thing to get: very low power, and it can be used with copper twinax AS WELL as fiber. You can get a 7m twinax cable for $150.
And the BEST cards available are Myricom: very powerful for a decent price.
DanLikesTech - Tuesday, March 29, 2011 - link
CX4 is old? Outdated? I just connected two VM host servers using CX4 at 20Gb (40Gb aggregate bandwidth), and it cost me $150: $50 for each card and $50 for the cable.
DanLikesTech - Tuesday, March 29, 2011 - link
And not to mention the low latency of InfiniBand compared to 10GbE. http://www.clustermonkey.net/content/view/222/1/
thehevy - Tuesday, March 9, 2010 - link
Great post. Here is a link to a white paper that I wrote to provide some best practice guidance when using 10G and VMware vSphere 4: "Simplify VMware vSphere* 4 Networking with Intel® Ethernet 10 Gigabit Server Adapters" -- http://download.intel.com/support/network/sb/10gbe...
More white papers and details on Intel Ethernet products can be found at www.intel.com/go/ethernet
Brian Johnson, Product Marketing Engineer, 10GbE Silicon, LAN Access Division
Intel Corporation
Linkedin: www.linkedin.com/in/thehevy
twitter: http://twitter.com/thehevy
emusln - Tuesday, March 9, 2010 - link
Be aware that VMDq is not SR-IOV. Yes, VMDq and NetQueue are methods for splitting the data stream across different interrupts and CPUs, but they still go through the hypervisor and vSwitch from the one PCI device/function. With SR-IOV, the VM is directly connected to a virtual PCI function hosted on the SR-IOV capable device. The hypervisor is needed to set up the connection, then gets out of the way. This allows the NIC device, with a little help from an IOMMU, to DMA directly into the VM's memory, rather than jumping through hypervisor buffers. Intel supports this in their 82599, the follow-on to the 82598 that you tested.
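As a side note, one way to see whether a Linux host actually exposes SR-IOV virtual functions is to look for the "virtfn" entries the kernel creates under the physical function's sysfs directory. A minimal sketch, assuming a Linux host whose driver has already enabled the VFs; the interface name eth0 is only a placeholder:

import glob
import os

def list_sriov_vfs(iface="eth0"):
    # The kernel creates one 'virtfn<N>' symlink per virtual function
    # under the physical function's device directory once SR-IOV is enabled.
    pattern = "/sys/class/net/%s/device/virtfn*" % iface
    vfs = sorted(glob.glob(pattern))
    if not vfs:
        print("No SR-IOV virtual functions found on %s" % iface)
    for link in vfs:
        # Each symlink points at the VF's PCI device, e.g. 0000:01:10.0
        print("%s -> %s" % (os.path.basename(link), os.path.basename(os.readlink(link))))

if __name__ == "__main__":
    list_sriov_vfs("eth0")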
megakilo - Tuesday, March 9, 2010 - link
Johan, regarding the 10Gb performance on native Linux: I have tested Intel 10Gb (the 82598 chipset) on RHEL 5.4 with iperf/netperf. It runs at 9.x Gb/s with a single-port NIC and about 16Gb/s with a dual-port NIC. I just have a little doubt about the Ixia IxChariot benchmark since I'm not familiar with it.
-Steven
megakilo - Tuesday, March 9, 2010 - link
BTW, in order to reach 9+ Gb/s, iperf/netperf have to run multiple threads (about 2-4) and use a large TCP window size (I used 512KB).
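For illustration, here is a rough Python sketch of the same recipe: several parallel TCP streams with enlarged socket buffers pushed at a receiver. The host address, port, and sizes are made-up placeholders, it assumes something is already listening on the other end, and real tools such as iperf or netperf remain the right choice for actual measurements:

import socket
import threading
import time

HOST, PORT = "192.168.1.10", 5001   # hypothetical receiver
STREAMS = 4                          # several parallel streams, as suggested above
BUFSIZE = 512 * 1024                 # ~512KB socket buffer, comparable to a large TCP window
DURATION = 10                        # seconds

def sender(results, idx):
    payload = b"\x00" * 65536
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Ask the kernel for a large send buffer; the effective TCP window also
    # depends on the receiver and on the tcp_rmem/tcp_wmem sysctl limits.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUFSIZE)
    s.connect((HOST, PORT))
    sent, end = 0, time.time() + DURATION
    while time.time() < end:
        sent += s.send(payload)
    s.close()
    results[idx] = sent

results = [0] * STREAMS
threads = [threading.Thread(target=sender, args=(results, i)) for i in range(STREAMS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
total_gbit = sum(results) * 8 / (DURATION * 1e9)
print("Aggregate throughput: %.2f Gbit/s over %d streams" % (total_gbit, STREAMS))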
JohanAnandtech - Tuesday, March 9, 2010 - link
Thanks, good feedback! We'll try this out ourselves.
sht - Wednesday, March 10, 2010 - link
I was surprised by the poor native Linux results as well. I too got >9 Gbit/s, with a Broadcom NetXtreme using nuttcp. I don't recall whether multiple threads were required to achieve those numbers; I don't think they were, but perhaps using a newer kernel helped, as the Linux networking stack has improved substantially since 2.6.18.
themelon - Tuesday, March 9, 2010 - link
Did I miss where you mention this, or did you completely leave it out of the article? Intel has had VMDq in Gig-E for at least 3-4 years in the 82575/82576 chips: basically, anything using the igb driver instead of the e1000g driver.
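As a quick check, the driver bound to a given interface (igb, ixgbe, e1000e, and so on) can be read from sysfs on a Linux host; a minimal sketch, with eth0 again only a placeholder interface name:

import os

def nic_driver(iface="eth0"):
    # /sys/class/net/<iface>/device/driver is a symlink into the driver's
    # directory, e.g. .../drivers/igb, so its basename names the driver.
    link = "/sys/class/net/%s/device/driver" % iface
    return os.path.basename(os.readlink(link))

if __name__ == "__main__":
    print(nic_driver("eth0"))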