CloudNetEngine vSwitch boosts performance for Cloud and NFV use cases


In technical preview update 3, CNE vSwitch boosts performance for both NFV and Cloud use cases, and we share more performance data here.

Posted by Jun Xiao on Sept. 5, 2015

Today we publish more virtual switch performance data for both NFV and Cloud use cases; the CNE vSwitch under test is technical preview update 3.

 

The NFV test configuration is as follows: two servers are connected back-to-back with two 10G link pairs. Host1 is the DUT, Host2 acts as both packet generator and receiver, and the packet flow in the DUT is pNIC -> vSwitch -> VM -> vSwitch -> pNIC.

p1.png

Throughput in Mpps at different packet sizes is shown below; the data is collected by the packet receiver on Host2.

p2.png
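As a sanity check on the Mpps numbers, the theoretical 10GbE line rate at a given frame size follows from the 20 bytes of per-frame wire overhead (preamble, start-of-frame delimiter, and inter-frame gap); a minimal sketch:

```python
def line_rate_mpps(frame_size, link_gbps=10):
    """Theoretical line rate in Mpps for a given Ethernet frame size.

    Each frame carries 20 extra bytes on the wire:
    7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap.
    """
    wire_bits = (frame_size + 20) * 8
    return link_gbps * 1e9 / wire_bits / 1e6

# 64-byte frames on 10GbE: the classic 14.88 Mpps figure
print(round(line_rate_mpps(64), 2))   # 14.88
# 256-byte frames, the size at which CNE vSwitch reaches line rate
print(round(line_rate_mpps(256), 2))  # 4.53
```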

A few new observations are:

- CNE vSwitch is over 60% better than OVS-DPDK for packet sizes 64/128/256. (In technical preview update 2, the advantage was only around 10-20%.)

- CNE vSwitch reaches line rate at packet size 256 and above. (In technical preview update 2, line rate was only reached at packet size 512 and above.)

Note: For OVS-DPDK and CNE vSwitch, the data is collected with a single PMD thread; more PMD threads can be used to scale performance further.
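For reference, OVS-DPDK controls the number and placement of PMD threads through a CPU mask in its database; a configuration along these lines adds polling threads (the core numbers here are illustrative, and the post does not show CNE vSwitch's own tuning knob):

```shell
# Pin OVS-DPDK PMD threads to CPU cores 2 and 4 (mask 0x14 = binary 10100).
# Cores should be chosen local to the NIC's NUMA node for best results.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x14
```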

p3.png

 

The TCP single host test configuration is as follows:

p4.png

p5.png

p6.png

p7.png
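The post does not name the benchmark tool, but TCP_STREAM and TCP_RR are the standard netperf test names, so the metrics above could be reproduced with runs along these lines (the peer VM's IP address is hypothetical):

```shell
# Bulk TCP throughput to the peer VM (netserver must be running there)
netperf -H 10.0.0.2 -t TCP_STREAM -l 30
# Request/response transaction rate with 1-byte requests and responses
netperf -H 10.0.0.2 -t TCP_RR -l 30
```

TCP_STREAM reports Gbps-style throughput, while TCP_RR reports transactions per second, which is why the two metrics can rank the switches differently.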

For the single host test, the key observations are the same as in previous posts, except that throughput improved by around 5% (from 12 Gbps to 12.6 Gbps):

- OVS-DPDK is not designed to optimize inter-VM communication, so its TCP throughput is even worse than native kernel OVS.

- OVS-DPDK only wins at TCP_RR, which is reasonable as it always keeps a CPU core 100% busy.

- CNE vSwitch wins on almost all metrics (except OVS-DPDK's TCP_RR): its TCP throughput/CPU efficiency is around 3x that of native kernel OVS, and its TCP_RR throughput is 40% better than native kernel OVS.

 

 

The TCP two hosts test configuration is as follows:

p8.png

p9.png

p10.png

p11.png

The two-host comparison largely mirrors what we observed in the single host test, except that throughput is limited by the physical link bandwidth.

 

In summary, CNE vSwitch delivers excellent throughput and CPU efficiency for both NFV and Cloud use cases, while sustaining TCP_RR throughput at a reasonably good level.

If you want to try CNE vSwitch technical preview update 3, don't hesitate to send an evaluation request to info@cloudnetengine.com.