A vSwitch has two very important use cases: typical data center applications, which mostly use TCP (we shared comparison data for that case in the previous post), and telco NFV workloads, which mostly use small packets. Here we share a simple performance comparison for the NFV use case across native kernel OVS, OVS-DPDK, and CNE vSwitch.
The test configuration is as follows: two back-to-back servers connected by two 10G link pairs, with one server acting as the DUT and the other as the packet generator/receiver. The packet flow in the DUT is pNIC -> vSwitch -> VM -> vSwitch -> pNIC.
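For reference, the OVS-DPDK side of this kind of setup is typically wired up along the following lines. This is a hedged sketch, not our exact configuration: the bridge name, port names, and PCI address are placeholders.

```shell
# Create a userspace (netdev) bridge and attach the physical 10G NIC via DPDK.
# 0000:01:00.0 is a placeholder PCI address; substitute the DUT's actual port.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:01:00.0

# Attach the VM through a vhost-user port; the VM's virtio-net device
# connects to this socket, completing pNIC -> vSwitch -> VM.
ovs-vsctl add-port br0 vhost0 -- set Interface vhost0 type=dpdkvhostuser
```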
The Mpps at different packet sizes is shown below; the data is collected on the receiver:
A few observations:
- Native kernel OVS delivers far lower Mpps than OVS-DPDK and CNE vSwitch.
- CNE vSwitch achieves 10-20% higher Mpps than OVS-DPDK for packet sizes under 512 bytes.
- CNE vSwitch reaches line rate at packet sizes of 512 bytes and above.
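For context, the theoretical line rate of a 10G link follows from the frame size plus 20 bytes of per-frame overhead (12-byte inter-frame gap, 7-byte preamble, 1-byte start delimiter). A quick sketch:

```shell
# Theoretical 10GbE line rate in Mpps:
#   Mpps = 10000 Mbit/s / ((frame_bytes + 20) * 8 bits)
for size in 64 128 256 512 1024 1518; do
  awk -v s="$size" 'BEGIN { printf "%4d bytes: %5.2f Mpps\n", s, 10000 / ((s + 20) * 8) }'
done
```

At 64 bytes this gives 14.88 Mpps per 10G port versus 2.35 Mpps at 512 bytes, which is why small-packet traffic is so much harder to sustain.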
Note: For OVS-DPDK and CNE vSwitch, the data is collected with one PMD thread. With two PMD threads, 256-byte traffic also reaches line rate, but 64/128-byte traffic remains far from line rate because significant CPU cycles are spent copying packets between the vNIC frontend and backend.
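In OVS-DPDK, the PMD thread count is controlled by `other_config:pmd-cpu-mask`. A sketch of pinning one versus two PMD threads (the mask values here are examples, not our exact core layout):

```shell
# One PMD thread on core 1 (bitmask 0x2):
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x2

# Two PMD threads on cores 1 and 2 (bitmask 0x6):
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

# Verify which cores the PMD threads landed on and how busy they are:
ovs-appctl dpif-netdev/pmd-stats-show
```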
The CPU usage is shown above:
- For native kernel OVS, almost one full core is used by ksoftirqd and two cores by the two vhost-net kernel threads; CPU usage stays consistently around 290% across all packet sizes tested.
- As mentioned above, OVS-DPDK and CNE vSwitch each use a single PMD thread, and their CPU usage is 100% until line rate is reached (at 512 bytes for CNE vSwitch); for 1024-byte traffic, CNE vSwitch uses only around 67% CPU.
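Per-thread CPU numbers like these can be observed with standard tools. A hedged sketch (exact thread names such as vhost-<pid> and pmd-c<NN> vary by kernel and OVS version):

```shell
# Take one batch-mode sample of per-thread CPU usage, keeping only the
# threads of interest: ksoftirqd/<n> (softirq packet processing),
# vhost-<pid> (vhost-net kernel threads), pmd-c<NN> (OVS-DPDK PMD threads).
top -H -b -n 1 | grep -E 'ksoftirqd|vhost|pmd'
```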
If you have any comments or questions, or want to try CNE vSwitch, don't hesitate to email email@example.com.