...

Akraino Test Group Information

SmartNIC

...

Deployed Architecture

We reuse the SmartNIC test architecture from the R3 release. The description below is the same as in the R3 release test documents.

To deploy the test architecture for the SmartNIC, we use a private Jenkins and an Intel server equipped with a BlueField v2 SmartNIC.

...

according to the ansible-playbook.
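For illustration only, a deployment driven by an Ansible playbook is typically kicked off as follows; the inventory and playbook names here are placeholders, not this project's real files:

```shell
# Hypothetical invocation; "inventory/hosts" and "deploy-smartnic.yml"
# are invented names used only to sketch the workflow.
ansible-playbook -i inventory/hosts deploy-smartnic.yml
```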

OVS-DPDK Test Architecture

...

OVS-DPDK on BlueField Test Architecture


[Diagram: OVS-DPDK on BlueField test architecture]

The testbed setup is shown in the diagram above. DUT stands for Device Under Test.

...

Type	Description
SmartNICs	BlueField v2, 25 Gbps
DPDK	version 20.11
vSwitch	OVS-DPDK 2.12 with VXLAN DECAP/ENCAP offload enabled (https://github.com/bytedance/ovs-dpdk)
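With hardware offload enabled, a standard way to confirm that datapath flows (including VXLAN encap/decap rules) were actually accepted by the NIC is OVS's offloaded-flow dump. This is generic OVS tooling, not a command taken from this document:

```shell
# List only the datapath flows that were offloaded to hardware.
# Flows still handled in software can be listed with type=ovs instead.
ovs-appctl dpctl/dump-flows type=offloaded
```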


Code Block
	Bridge phy0
        fail_mode: standalone
        datapath_type: netdev
        Port phy0
            Interface phy0
                type: internal
        Port p0
            Interface p0
                type: dpdk
                options: {dpdk-devargs="0000:03:00.0"}
    Bridge br-int0
        fail_mode: standalone
        datapath_type: netdev
        Port int0
            Interface int0
                type: internal
        Port vxlan0
            Interface vxlan0
                type: vxlan
                options: {flags="0", key="101", local_ip="1.1.1.1", remote_ip="1.1.1.3"}
        Port pf0vf0
            Interface pf0vf0
                type: dpdk
                options: {dpdk-devargs="0000:03:00.0,representor=[0]"}
    ovs_version: "2.14.1"


Code Block
root:/home/ovs-dpdk# ovs-vsctl --format=csv --data=bare --no-headings --column=other_config list open_vswitch
"dpdk-extra=-w [PCIE]  -l 70 dpdk-init=true dpdk-socket-mem=2048,2048 emc-insert-inv-prob=0 n-handler-threads=1 n-revalidator-threads=4 neigh-notifier-enable=true pmd-cpu-mask=0xc00000000000c00000 pmd-pause=false pmd-rxq-assign=roundrobin smc-enable=true tx-flush-interval=0 userspace-tso-enable=true"


Code Block
root:/home/ovs-dpdk# ovs-vsctl --format=csv --data=bare --no-headings --column=other_config list open_vswitch
dpdk-extra="-w 0000:03:00.0,representor=[0-1],dv_xmeta_en=1,sys_mem_en=1", dpdk-init="true", dpdk-socket-mem="4096", hw-offload="true", max-idle="120000"
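The other_config values listed above could be assembled key by key with ovs-vsctl. The commands below are our sketch of how such a configuration is typically applied, mirroring the listing rather than describing a verified setup:

```shell
# Sketch: apply the offload-related options shown in the listing above.
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=4096
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
ovs-vsctl set Open_vSwitch . other_config:max-idle=120000
# -w whitelists the PF and its VF representors; dv_xmeta_en/sys_mem_en are
# mlx5 PMD devargs taken from the listing above.
ovs-vsctl set Open_vSwitch . \
    other_config:dpdk-extra="-w 0000:03:00.0,representor=[0-1],dv_xmeta_en=1,sys_mem_en=1"
```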


Traffic Generator

We will use DPDK pktgen as the Traffic Generator.
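For reference, a typical pktgen-dpdk launch looks like the following; the core and port mapping is illustrative, not the exact command line used in this test:

```shell
# Illustrative pktgen-dpdk invocation. Before "--": EAL options
# (-l lcore list, -n memory channels). After "--": pktgen options
# (-P enables promiscuous mode, -m maps lcores to port queues).
./pktgen -l 0-4 -n 4 -- -P -m "[1:2].0"
```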

...

Code Block
languagebash
# br-int rules
table=0,arp,actions=normal
table=0,ip,ct_state=-trk,actions=ct(table=1)
table=1,priority=1,ip,ct_state=+trk+new,actions=ct(commit),normal
table=1,priority=1,ip,ct_state=+trk+est,actions=normal
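Flow rules like the ones above are normally loaded from a file with ovs-ofctl; the bridge name and file path below are assumptions for illustration:

```shell
# Illustrative only: write the conntrack rules to a file and load them
# into the bridge in one call. "br-int" and the path are placeholders.
cat > /tmp/br-int.flows <<'EOF'
table=0,arp,actions=normal
table=0,ip,ct_state=-trk,actions=ct(table=1)
table=1,priority=1,ip,ct_state=+trk+new,actions=ct(commit),normal
table=1,priority=1,ip,ct_state=+trk+est,actions=normal
EOF
ovs-ofctl add-flows br-int /tmp/br-int.flows
```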

Nginx configuration

Code Block
languagebash
user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
    worker_connections  2000000;
}
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
    access_log off;
    sendfile        on;
    keepalive_timeout  65;
    include /etc/nginx/conf.d/*.conf;
}

...

Code Block
./wrk -t 32 -c 640 -d30s http://10.0.1.127/ -H "Connection: Close"

Performance Results


For the optimized software-based OVS, we tested on a 48C24q VM running NGINX as a server, with CT (connection tracking) enabled on OVS-DPDK. Results are reported as (upstream version / our version).

...

Metric	upstream version / our version
pps (not closing connection after each query)	1.66 Mpps / 2 Mpps
pps (closing connection after each query)	1.66 Mpps / 2 Mpps
connection initiation rate (closing connection after each query)	140 Kcps / 200 Kcps
QPS (not closing connection after each query)	889 Kqps / 1.14 Mqps
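As a quick sanity check (our arithmetic, not a figure from the source), the relative gains of our version over upstream work out as follows:

```shell
# Relative improvement of "our version" over "upstream version",
# computed from the result table above.
awk 'BEGIN {
  printf "pps: +%.1f%%\n", (2.0  - 1.66) / 1.66 * 100   # 2 Mpps vs 1.66 Mpps
  printf "cps: +%.1f%%\n", (200  - 140 ) / 140  * 100   # 200 Kcps vs 140 Kcps
  printf "qps: +%.1f%%\n", (1140 - 889 ) / 889  * 100   # 1.14 Mqps vs 889 Kqps
}'
# prints:
# pps: +20.5%
# cps: +42.9%
# qps: +28.2%
```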

...


For OVS-DPDK running on the SmartNIC with the CT function enabled, we tested on a 17C8G VM running testpmd (1 core as the main lcore and 16 cores as forwarding cores) as the traffic forwarder and reached the performance below.
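The testpmd forwarder described above (1 main lcore plus 16 forwarding cores) could be launched roughly as follows; the PCI address, queue counts, and forward mode are our assumptions, not taken from the source:

```shell
# Sketch only: 17 lcores total (-l 0-16); core 0 becomes the main lcore,
# the remaining 16 are forwarding cores (--nb-cores=16).
# 0000:03:00.0 is a placeholder PCI address for the virtio/VF device.
dpdk-testpmd -l 0-16 -n 4 -a 0000:03:00.0 -- \
    --nb-cores=16 --rxq=16 --txq=16 --forward-mode=io
```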

Note: the result also depends on the traffic generator side.

Frame size	114 bytes
Packets per second	~23 Mpps
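The ~23 Mpps figure is close to the theoretical line rate for this frame size. As a sanity check (our arithmetic, not from the source), each 114-byte frame occupies an extra 20 bytes on the wire (7 B preamble + 1 B SFD + 12 B inter-frame gap):

```shell
# Theoretical maximum packet rate for 114-byte frames on a 25 Gbps link.
awk 'BEGIN {
  link_bps = 25e9          # 25 Gbps link
  frame    = 114           # test frame size in bytes
  overhead = 20            # preamble + SFD + inter-frame gap
  printf "theoretical max: %.1f Mpps\n", link_bps / ((frame + overhead) * 8) / 1e6
}'
# prints: theoretical max: 23.3 Mpps
```

So the measured ~23 Mpps is essentially line rate for this frame size.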

This test evaluates the performance of SmartNIC offloading.
Test API description

We currently do not provide any Test APIs.

...