[v2] test_plans/vm2vm_virtio_net_perf:update packed ring cbdma cases of qemu version

Message ID 20210702171734.839462-1-yinan.wang@intel.com (mailing list archive)
State Accepted
Series [v2] test_plans/vm2vm_virtio_net_perf:update packed ring cbdma cases of qemu version

Commit Message

Wang, Yinan July 2, 2021, 5:17 p.m. UTC
  Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 .../vm2vm_virtio_net_perf_test_plan.rst       | 114 ++++--------------
 1 file changed, 22 insertions(+), 92 deletions(-)
  

Comments

Tu, Lijuan July 12, 2021, 3:09 a.m. UTC | #1
> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Yinan Wang
> Sent: July 3, 2021 1:18
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: [dts] [PATCH v2] test_plans/vm2vm_virtio_net_perf:update packed ring
> cbdma cases of qemu version
> 
> Signed-off-by: Yinan Wang <yinan.wang@intel.com>

Applied, thanks
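The headline change in this patch is the new note that packed virtqueue virtio-net tests need a qemu newer than 4.2.0 and that split virtqueue multi-queue server mode needs a qemu newer than LTS 4.2.1 (older qemu has a reconnect issue). A minimal sketch of how a harness could gate cases on that prerequisite, using GNU coreutils `sort -V` for version comparison; the `version_gt` helper name is hypothetical and not part of DTS:

```shell
# version_gt A B: exit 0 when version string A is strictly newer than B.
# Relies on GNU sort -V (version sort) from coreutils.
version_gt() {
    [ "$1" != "$2" ] && \
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# Example gate for the packed-ring cases. In a real run the version would
# come from something like:
#   qemu-system-x86_64 --version | sed -n 's/.*version \([0-9.]*\).*/\1/p'
QEMU_VER="5.2.0"   # hypothetical value for illustration
if version_gt "$QEMU_VER" "4.2.0"; then
    echo "qemu $QEMU_VER ok for packed virtqueue cases"
else
    echo "qemu $QEMU_VER too old, skip packed virtqueue cases" >&2
fi
```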
  

Patch

diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 78418e00..3fb12f41 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -43,7 +43,9 @@  in the UDP/IP stack with vm2vm split ring and packed ring vhost-user/virtio-net
 2. Check the payload of large packet (larger than 1MB) is valid after forwarding packets with vm2vm split ring
 and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 3. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when vhost enqueue operation with multi-CBDMA channels.
-Note: For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > v5.1.
+Note:
+1. For packed virtqueue virtio-net tests, qemu version > 4.2.0 and VM kernel version > v5.1 are required.
+2. For split virtqueue virtio-net tests with multi-queue server mode, qemu version > LTS 4.2.1 is required, due to a reconnect issue in older qemu with multi-queue tests.
 
 Test flow
 =========
@@ -71,7 +73,7 @@  Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
 
    taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -82,7 +84,7 @@  Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
 3. On VM1, set virtio device IP and run arp protocal::
 
@@ -275,7 +277,7 @@  Test Case 5: VM2VM virtio-net split ring mergeable 8 queues CBDMA enable test wi
     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 
-2. Launch VM1 and VM2 using qemu3.0::
+2. Launch VM1 and VM2 using qemu 5.2.0::
 
     taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -368,7 +370,7 @@  Test Case 6: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable tes
     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 
-2. Launch VM1 and VM2 using qemu3.0::
+2. Launch VM1 and VM2 using qemu 5.2.0::
 
     taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -461,7 +463,7 @@  Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic
     --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
-2. Launch VM1 and VM2::
+2. Launch VM1 and VM2 with qemu 5.2.0::
 
     qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -516,7 +518,7 @@  Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp
     --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@00:04.1],dmathr=512'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
-2. Launch VM1 and VM2 on socket 1::
+2. Launch VM1 and VM2 on socket 1 with qemu 5.2.0::
 
     taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -573,7 +575,7 @@  Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic
     --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
-2. Launch VM1 and VM2::
+2. Launch VM1 and VM2 with qemu 5.2.0::
 
     qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 40 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -628,7 +630,7 @@  Test Case 10: Check packed ring virtio-net device capability
     --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>start
 
-2. Launch VM1 and VM2,set TSO and UFO on in qemu command::
+2. Launch VM1 and VM2 with qemu 5.2.0, set TSO and UFO on in the qemu command::
 
     qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -672,11 +674,11 @@  Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 
-2. Launch VM1 and VM2::
+2. Launch VM1 and VM2 with qemu 5.2.0::
 
     taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
@@ -685,7 +687,7 @@  Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
+    -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
 
@@ -696,7 +698,7 @@  Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1,server \
+    -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
 
@@ -721,43 +723,7 @@  Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test
     Under VM1, run: `iperf -s -i 1`
     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
-7. Quit vhost ports and relaunch vhost ports w/o CBDMA channels::
-
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>start
-
-8. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-10. Quit vhost ports and relaunch vhost ports with 1 queues::
-
-     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-     testpmd>start
-
-11. On VM1, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-12. On VM2, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
-
-     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-14. Check the iperf performance, ensure queue0 can work from vhost side::
-
-     Under VM1, run: `iperf -s -i 1`
-     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+7. Rerun steps 5-6 five times.
 
 Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check
 =========================================================================================================================
@@ -765,8 +731,8 @@  Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
 1. Launch the Vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7],dmathr=512' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7],dmathr=512'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start
 
 2. Launch VM1 and VM2::
@@ -778,7 +744,7 @@  Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
+    -chardev socket,id=char0,path=./vhost-net0 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
 
@@ -789,7 +755,7 @@  Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1,server \
+    -chardev socket,id=char0,path=./vhost-net1 \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
 
@@ -814,40 +780,4 @@  Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t
     Under VM1, run: `iperf -s -i 1`
     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
-7. Quit vhost ports and relaunch vhost ports w/o CBDMA channels::
-
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>start
-
-8. Scp 1MB file form VM1 to VM2::
-
-    Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::
-
-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
-
-10. Quit vhost ports and relaunch vhost ports with 1 queues::
-
-     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \
-     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
-     testpmd>start
-
-11. On VM1, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-12. On VM2, set virtio device::
-
-      ethtool -L ens5 combined 1
-
-13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::
-
-     Under VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name
-
-14. Check the iperf performance, ensure queue0 can work from vhost side::
-
-     Under VM1, run: `iperf -s -i 1`
-     Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+7. Rerun steps 5-6 five times.
\ No newline at end of file
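Both CBDMA cases now end with "Rerun steps 5-6 five times", i.e. repeating the scp file transfer and the iperf throughput check. A small sketch of that repetition as a shell helper; the `repeat_check` name and the VM addresses are illustrative assumptions, not part of the test plan's tooling:

```shell
# repeat_check N CMD...: run CMD N times, failing fast on the first error.
repeat_check() {
    n="$1"; shift
    i=1
    while [ "$i" -le "$n" ]; do
        "$@" || { echo "iteration $i failed" >&2; return 1; }
        i=$((i + 1))
    done
}

# Hypothetical step-7 loop: scp the 1MB file from VM1 to VM2, then run the
# iperf check, five times, matching the "Rerun steps 5-6 five times" step.
# repeat_check 5 sh -c '
#     scp /root/file_1MB root@1.1.1.8:/ &&
#     iperf -c 1.1.1.2 -i 1 -t 60
# '
```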