More performance comparisons on virtual switches

CloudNetEngine vSwitch technical preview update 1 is released today, along with more performance comparison data for Native kernel OVS, OVS-DPDK, and CNE vSwitch.

Posted by Jun Xiao on Aug. 15, 2015

Since we announced the CloudNetEngine virtual switch technical preview, we have received quite a few very good suggestions and much feedback from evaluation customers. Thank you all!

Today we are releasing CloudNetEngine vSwitch technical preview update 1, which includes a number of performance improvements made over the last few weeks. We also share more performance comparisons between Native kernel OVS, OVS-DPDK, and CNE vSwitch technical preview update 1. The methodology is one commonly used by virtualization vendors: end-to-end TCP throughput with CPU usage, and TCP_RR transaction throughput. For more information, please contact
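The post does not spell out how the "TCP throughput/CPU usage" metric is computed; it is conventionally throughput normalized by CPU utilization, so a vSwitch that moves the same traffic with less CPU scores higher. A minimal sketch of that normalization, using purely hypothetical numbers (these are illustrative, not the measured results):

```python
# Hypothetical measurements, NOT the published benchmark results:
# bulk TCP throughput (Mbit/s) and host CPU utilization (%) per vSwitch.
results = {
    "Native kernel OVS": {"tput_mbps": 3100.0, "cpu_pct": 80.0},
    "OVS-DPDK":          {"tput_mbps": 5200.0, "cpu_pct": 100.0},  # PMD core pinned at 100%
    "CNE vSwitch":       {"tput_mbps": 9400.0, "cpu_pct": 85.0},
}

def cpu_efficiency(tput_mbps: float, cpu_pct: float) -> float:
    """Throughput delivered per percentage point of CPU consumed."""
    return tput_mbps / cpu_pct

for name, r in results.items():
    eff = cpu_efficiency(r["tput_mbps"], r["cpu_pct"])
    print(f"{name}: {eff:.2f} Mbit/s per CPU%")
```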

[Chart: single-host performance comparison across the three vSwitches]

A few key observations from the single-host test:

- OVS-DPDK is not designed to optimize intra-host VM-to-VM communication; its TCP throughput is even worse than Native kernel OVS.

- OVS-DPDK wins only at TCP_RR, which is expected, as it always keeps a CPU core 100% busy.

- CNE vSwitch wins on almost all metrics (except OVS-DPDK's TCP_RR lead): its TCP throughput/CPU efficiency is around 3X that of Native kernel OVS, and its TCP_RR throughput is 40% better than Native kernel OVS.
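OVS-DPDK's TCP_RR edge follows from its poll-mode design: a dedicated core spins on the receive queue rather than sleeping until an interrupt arrives, which minimizes per-transaction latency but burns that core even when no traffic flows. A toy sketch of the polling strategy (illustrative only, not actual DPDK code):

```python
import queue

def poll_rx(rx: queue.Queue, budget: int) -> list:
    """DPDK-style receive poll: check the queue `budget` times without ever
    blocking. Empty polls return immediately and we spin again, which is why
    a poll-mode core shows 100% CPU even when idle."""
    pkts = []
    for _ in range(budget):
        try:
            pkts.append(rx.get_nowait())
        except queue.Empty:
            pass  # nothing arrived this iteration; keep spinning
    return pkts

# Contrast: an interrupt-driven receiver would call rx.get(timeout=...) and
# let the CPU sleep, trading wakeup latency for idle CPU savings.

rxq = queue.Queue()
for i in range(3):
    rxq.put(f"pkt{i}")
print(poll_rx(rxq, budget=10))  # drains the 3 queued packets
```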


[Chart: two-host performance comparison across the three vSwitches]

The two-host performance comparison largely mirrors what we observed in the single-host test.

In summary, CNE vSwitch is significantly better than the other two vSwitches in terms of throughput and CPU efficiency, while also supporting TCP_RR throughput at a reasonably good level.

If you want to try CNE vSwitch technical preview update 1, don't hesitate to send an evaluation request to