From patchwork Thu May 19 09:10:07 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 111408
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V3 1/2] test_plans/vm2vm_virtio_net_perf_test_plan: delete CBDMA related testcases
Date: Thu, 19 May 2022 05:10:07 -0400
Message-Id: <20220519091007.2818415-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: test suite reviews and discussions

Delete CBDMA related testcases.

Signed-off-by: Wei Ling
---
 .../vm2vm_virtio_net_perf_test_plan.rst | 606 +-----------------
 1 file changed, 5 insertions(+), 601 deletions(-)

diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
index 6e679b5b..077bbaf9 100644
--- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst
@@ -112,67 +112,7 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic
     Port 0 should have tx packets above 1522
     Port 1 should have rx packets above 1522

-Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic
-======================================================================================
-
-1. 
Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:80:04.0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:80:04.1]' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 - testpmd>vhost enable tx all - testpmd>start - -2. Launch VM1 and VM2:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 - - taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -6. Check 2VMs can receive and send big packets to each other:: - - testpmd>show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 - -7. Check throughput and compare with case1, CBDMA enable performance should larger than w/o CBDMA performance when cross socket. - -Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic +Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic ========================================================================= 1. Launch the Vhost sample by below commands:: @@ -229,7 +169,7 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 -Test Case 4: Check split ring virtio-net device capability +Test Case 3: Check split ring virtio-net device capability ========================================================== 1. 
Launch the Vhost sample by below commands:: @@ -279,241 +219,7 @@ Test Case 4: Check split ring virtio-net device capability tx-tcp-ecn-segmentation: on tx-tcp6-segmentation: on -Test Case 5: VM2VM split ring vhost-user/virtio-net mergeable 8 queues CBDMA enable test with large packet payload valid check -============================================================================================================================== - -1. Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. Launch VM1 and VM2 using qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. 
Quit and relaunch vhost w/ diff CBDMA channels:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -8. Rerun step 5-6. - -9. Quit and relaunch vhost w/ iova=pa:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -10. Rerun step 5-6. - -11. Quit and relaunch vhost w/o CBDMA channels:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 - testpmd>vhost enable tx all - testpmd>start - -12. On VM1, set virtio device:: - - ethtool -L ens5 combined 4 - -13. On VM2, set virtio device:: - - ethtool -L ens5 combined 4 - -14. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -15. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -16. Quit and relaunch vhost with 1 queues:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 - testpmd>vhost enable tx all - testpmd>start - -17. On VM1, set virtio device:: - - ethtool -L ens5 combined 1 - -18. On VM2, set virtio device:: - - ethtool -L ens5 combined 1 - -19. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -20. Check the iperf performance, ensure queue0 can work from vhost side:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -Test Case 6: VM2VM split ring vhost-user/virtio-net non-mergeable 8 queues CBDMA enable test with large packet payload valid check -================================================================================================================================== - -1. 
Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. Launch VM1 and VM2 using qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Quit and relaunch vhost ports w/o CBDMA channels:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -8. 
Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -9. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -10. Quit and relaunch vhost ports with 1 queues:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 - testpmd>vhost enable tx all - testpmd>start - -11. On VM1, set virtio device:: - - ethtool -L ens5 combined 1 - -12. On VM2, set virtio device:: - - ethtool -L ens5 combined 1 - -13. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -14. Check the iperf performance, ensure queue0 can work from vhost side:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic +Test Case 4: VM2VM packed ring vhost-user/virtio-net test with tcp traffic ========================================================================== 1. Launch the Vhost sample by below commands:: @@ -570,67 +276,7 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 -Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic -======================================================================================= - -1. Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:00:04.1]' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 - testpmd>vhost enable tx all - testpmd>start - -2. 
Launch VM1 and VM2 on socket 1 with qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 - - taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -6. Check 2VMs can receive and send big packets to each other:: - - testpmd>show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 - -7. Check throughput and compare with case6, CBDMA enable performance should larger than w/o CBDMA performance when cross socket. - -Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic +Test Case 5: VM2VM packed ring vhost-user/virtio-net test with udp traffic ========================================================================== 1. Launch the Vhost sample by below commands:: @@ -687,7 +333,7 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic Port 0 should have tx packets above 1522 Port 1 should have rx packets above 1522 -Test Case 10: Check packed ring virtio-net device capability +Test Case 6: Check packed ring virtio-net device capability ============================================================ 1. Launch the Vhost sample by below commands:: @@ -736,245 +382,3 @@ Test Case 10: Check packed ring virtio-net device capability tx-tcp-segmentation: on tx-tcp-ecn-segmentation: on tx-tcp6-segmentation: on - -Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check -===================================================================================================================== - -1. 
Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. Launch VM1 and VM2 with qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Rerun step 5-6 five times. - -Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check -========================================================================================================================= - -1. 
Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. Launch VM1 and VM2:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Rerun step 5-6 five times. - -Test Case 13: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when set iova=pa -========================================================================================================= - -1. 
Bind 2 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@0000:00:04.1]' \ - --iova=pa -- -i --nb-cores=2 --txd=1024 --rxd=1024 - testpmd>vhost enable tx all - testpmd>start - -2. Launch VM1 and VM2 on socket 1 with qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 - - taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Check 2VMs can receive and send big packets to each other:: - - testpmd>show port xstats all - Port 0 should have tx packets above 1522 - Port 1 should have rx packets above 1522 - -Test Case 14: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check -================================================================================================================================= - -1. 
Bind 16 CBDMA channels to vfio-pci, then launch vhost by below command:: - - rm -rf vhost-net* - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 - testpmd>vhost enable tx all - testpmd>start - -2. Launch VM1 and VM2 with qemu:: - - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 - -3. On VM1, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 - -4. On VM2, set virtio device IP and run arp protocol:: - - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 - -5. Scp 1MB file form VM1 to VM2:: - - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name - -6. Check the iperf performance between two VMs by below commands:: - - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` - -7. Rerun step 5-6 five times. 
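All of the test cases above reduce the `iperf -c 1.1.1.2 -i 1 -t 60` client output to a single Gbits/sec figure before comparing runs (it is the value recorded in the suite's "Gbits/sec" result-table column). A minimal sketch of that reduction step in Python, assuming only the standard iperf text output format — the helper name parse_iperf_gbps is illustrative and not part of the DTS suite::

    import re

    def parse_iperf_gbps(iperf_output):
        """Reduce iperf client output to one throughput figure in Gbits/sec.

        iperf prints one bandwidth sample per interval plus a final summary
        line such as "0.0-60.0 sec  65.0 GBytes  9.30 Gbits/sec"; the last
        match in the text is that summary.
        """
        samples = re.findall(r"([\d.]+)\s*([KMG])bits/sec", iperf_output)
        if not samples:
            return 0.0
        value, unit = samples[-1]
        # normalize Kbits/Mbits/Gbits figures to Gbits
        scale = {"K": 1e-6, "M": 1e-3, "G": 1.0}[unit]
        return float(value) * scale

    # The final summary line wins over the per-interval samples:
    sample = ("0.0- 1.0 sec   1.10 GBytes   9.44 Gbits/sec\n"
              "0.0-60.0 sec  65.0 GBytes   9.30 Gbits/sec")
    assert abs(parse_iperf_gbps(sample) - 9.30) < 1e-9
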
From patchwork Thu May 19 09:10:18 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 111409
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V3 2/2] tests/vm2vm_virtio_net_perf: delete CBDMA related testcases and code
Date: Thu, 19 May 2022 05:10:18 -0400
Message-Id: <20220519091018.2818476-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: test suite reviews and discussions

Delete CBDMA related testcases and code.

Signed-off-by: Wei Ling Tested-by: Chenyu Huang Acked-by: Xingguang He --- tests/TestSuite_vm2vm_virtio_net_perf.py | 540 +---------------------- 1 file changed, 13 insertions(+), 527 deletions(-) diff --git a/tests/TestSuite_vm2vm_virtio_net_perf.py b/tests/TestSuite_vm2vm_virtio_net_perf.py index 486f1acf..e10c8bd2 100644 --- a/tests/TestSuite_vm2vm_virtio_net_perf.py +++ b/tests/TestSuite_vm2vm_virtio_net_perf.py @@ -71,10 +71,6 @@ class TestVM2VMVirtioNetPerf(TestCase): self.vhost = self.dut.new_session(suite="vhost") self.pmd_vhost = PmdOutput(self.dut, self.vhost) self.app_testpmd_path = self.dut.apps_name["test-pmd"] - # get cbdma device - self.cbdma_dev_infos = [] - self.dmas_info = None - self.device_str = None self.checked_vm = False self.dut.restore_interfaces() @@ -86,67 +82,6 @@ class TestVM2VMVirtioNetPerf(TestCase): self.vm_dut = [] self.vm = [] - def get_cbdma_ports_info_and_bind_to_dpdk( - self, cbdma_num=2, allow_diff_socket=False - ): - """ - get all cbdma ports - """ - out = self.dut.send_expect( - "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30 - ) - device_info = out.split("\n") - for device in device_info: - pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device) - if pci_info is not None: - dev_info = pci_info.group(1) - # the numa id of ioat dev, only add the device which on same socket with nic dev - bus = int(dev_info[5:7], base=16) - if bus >= 128: - cur_socket = 1 - else: - cur_socket = 0 - if allow_diff_socket: - self.cbdma_dev_infos.append(pci_info.group(1)) - else: - if self.ports_socket == cur_socket: - self.cbdma_dev_infos.append(pci_info.group(1)) - self.verify( - len(self.cbdma_dev_infos) >= cbdma_num, - "There no enough cbdma device to run this suite", - ) - used_cbdma = self.cbdma_dev_infos[0:cbdma_num] - dmas_info = "" - for dmas in used_cbdma[0 : int(cbdma_num / 2)]: - number = used_cbdma[0 : int(cbdma_num / 2)].index(dmas) - dmas = "txq{}@{},".format(number, dmas) - dmas_info += dmas - for dmas in used_cbdma[int(cbdma_num / 2) :]: - number = used_cbdma[int(cbdma_num / 2) :].index(dmas) - dmas = "txq{}@{},".format(number, dmas) - dmas_info += dmas - self.dmas_info = dmas_info[:-1] - self.device_str = " ".join(used_cbdma) - self.dut.send_expect( - "./usertools/dpdk-devbind.py --force --bind=%s %s" - % (self.drivername, self.device_str), - "# ", - 60, - ) - - def bind_cbdma_device_to_kernel(self): - if self.device_str is not None: - self.dut.send_expect("modprobe ioatdma", "# ") - self.dut.send_expect( - "./usertools/dpdk-devbind.py -u %s" % self.device_str, "# ", 30 - ) - self.dut.send_expect( - "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" - % self.device_str, - "# ", - 60, - ) - @property def check_2m_env(self): out = self.dut.send_expect( @@ -156,65 +91,43 @@ class TestVM2VMVirtioNetPerf(TestCase): def start_vhost_testpmd( self, - cbdma=False, no_pci=True, client_mode=False, enable_queues=1, nb_cores=2, rxq_txq=None, exchange_cbdma=False, - iova_mode="", ): """ launch the testpmd with different parameters """ - if cbdma is True: - dmas_info_list = self.dmas_info.split(",") - cbdma_arg_0_list = [] - cbdma_arg_1_list = [] - for item in dmas_info_list: - if dmas_info_list.index(item) < int(len(dmas_info_list) / 2): - cbdma_arg_0_list.append(item) - else: - cbdma_arg_1_list.append(item) - cbdma_arg_0 = ",dmas=[{}]".format(";".join(cbdma_arg_0_list)) - cbdma_arg_1 = ",dmas=[{}]".format(";".join(cbdma_arg_1_list)) - else: - cbdma_arg_0 = "" - cbdma_arg_1 = "" testcmd = self.app_testpmd_path + " " if not client_mode: - vdev1 = "--vdev 
'net_vhost0,iface=%s/vhost-net0,queues=%d%s' " % ( + vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,queues=%d' " % ( self.base_dir, enable_queues, - cbdma_arg_0, ) - vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,queues=%d%s' " % ( + vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,queues=%d' " % ( self.base_dir, enable_queues, - cbdma_arg_1, ) else: - vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,client=1,queues=%d%s' " % ( + vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,client=1,queues=%d' " % ( self.base_dir, enable_queues, - cbdma_arg_0, ) - vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,client=1,queues=%d%s' " % ( + vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,client=1,queues=%d' " % ( self.base_dir, enable_queues, - cbdma_arg_1, ) if exchange_cbdma: - vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,client=1,queues=%d%s' " % ( + vdev1 = "--vdev 'net_vhost0,iface=%s/vhost-net0,client=1,queues=%d' " % ( self.base_dir, enable_queues, - cbdma_arg_1, ) - vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,client=1,queues=%d%s' " % ( + vdev2 = "--vdev 'net_vhost1,iface=%s/vhost-net1,client=1,queues=%d' " % ( self.base_dir, enable_queues, - cbdma_arg_0, ) eal_params = self.dut.create_eal_parameters( @@ -228,13 +141,8 @@ class TestVM2VMVirtioNetPerf(TestCase): rxq_txq, rxq_txq, ) - if iova_mode: - iova_parm = " --iova=" + iova_mode - else: - iova_parm = "" - self.command_line = testcmd + eal_params + vdev1 + vdev2 + iova_parm + params + self.command_line = testcmd + eal_params + vdev1 + vdev2 + params self.pmd_vhost.execute_cmd(self.command_line, timeout=30) - self.pmd_vhost.execute_cmd("vhost enable tx all", timeout=30) self.pmd_vhost.execute_cmd("start", timeout=30) def start_vms(self, server_mode=False, opt_queue=None, vm_config="vhost_sample"): @@ -309,13 +217,11 @@ class TestVM2VMVirtioNetPerf(TestCase): start vhost testpmd and qemu, and config the vm env """ self.start_vhost_testpmd( - cbdma=cbdma, no_pci=no_pci, client_mode=client_mode, enable_queues=enable_queues, nb_cores=nb_cores, rxq_txq=rxq_txq, - iova_mode=iova_mode, ) self.start_vms(server_mode=server_mode, opt_queue=opt_queue) self.config_vm_env(combined=combined, rxq_txq=rxq_txq) @@ -461,11 +367,10 @@ class TestVM2VMVirtioNetPerf(TestCase): def test_vm2vm_split_ring_iperf_with_tso(self): """ - TestCase1: VM2VM split ring vhost-user/virtio-net test with tcp traffic + Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic """ self.vm_args = "disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on" self.prepare_test_env( - cbdma=False, no_pci=True, client_mode=False, enable_queues=1, @@ -477,40 +382,12 @@ class TestVM2VMVirtioNetPerf(TestCase): ) self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - def test_vm2vm_split_ring_with_tso_and_cbdma_enable(self): - """ - TestCase2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic - """ - self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on" - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2) - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=1, - combined=False, - rxq_txq=None, - ) - cbdma_value = self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - expect_value = self.get_suite_cfg()["expected_throughput"][ - "test_vm2vm_split_ring_iperf_with_tso" - ] - self.verify( - cbdma_value > expect_value, - "CBDMA enable 
performance: %s is lower than CBDMA disable: %s." - % (cbdma_value, expect_value), - ) - def test_vm2vm_split_ring_iperf_with_ufo(self): """ - TestCase3: VM2VM split ring vhost-user/virtio-net test with udp traffic + Test Case 2: VM2VM split ring vhost-user/virtio-net test with udp traffic """ self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" self.prepare_test_env( - cbdma=False, no_pci=True, client_mode=False, enable_queues=1, @@ -524,11 +401,10 @@ class TestVM2VMVirtioNetPerf(TestCase): def test_vm2vm_split_ring_device_capbility(self): """ - TestCase4: Check split ring virtio-net device capability + Test Case 3: Check split ring virtio-net device capability """ self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" self.start_vhost_testpmd( - cbdma=False, no_pci=True, client_mode=False, enable_queues=1, @@ -539,240 +415,12 @@ class TestVM2VMVirtioNetPerf(TestCase): self.offload_capbility_check(self.vm_dut[0]) self.offload_capbility_check(self.vm_dut[1]) - def test_vm2vm_split_ring_with_mergeable_path_check_large_packet_and_cbdma_enable_8queue( - self, - ): - """ - TestCase5: VM2VM virtio-net split ring mergeable CBDMA enable test with large packet payload valid check - """ - ipef_result = [] - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - - self.logger.info("Launch vhost with CBDMA and with 8 queue with VA mode") - self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - server_mode=True, - opt_queue=8, - combined=True, - rxq_txq=8, - iova_mode="va", - ) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue = self.start_iperf_and_verify_vhost_xstats_info( - iperf_mode="tso" - ) - ipef_result.append( - [ - "Enable", - "mergeable path with VA mode", - 8, - iperf_data_cbdma_enable_8_queue, - ] - ) - - self.logger.info("Re-launch and exchange CBDMA and with 8 queue with VA mode") - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=True, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - rxq_txq=8, - exchange_cbdma=True, - iova_mode="va", - ) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue_exchange = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - [ - "Disable", - "mergeable path exchange CBDMA with VA mode", - 8, - iperf_data_cbdma_enable_8_queue_exchange, - ] - ) - - # This test step need to test on 1G guest hugepage ENV. 
- if not self.check_2m_env: - self.logger.info( - "Re-launch and exchange CBDMA and with 8 queue with PA mode" - ) - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=True, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - rxq_txq=8, - exchange_cbdma=True, - iova_mode="pa", - ) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue_exchange_pa = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - [ - "Disable", - "mergeable path exchange CBDMA with PA mode", - 8, - iperf_data_cbdma_enable_8_queue_exchange_pa, - ] - ) - - self.logger.info("Re-launch without CBDMA and with 4 queue") - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=False, - no_pci=False, - client_mode=True, - enable_queues=4, - nb_cores=4, - rxq_txq=4, - ) - self.config_vm_env(combined=True, rxq_txq=4) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_disable_4_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - [ - "Disable", - "mergeable path without CBDMA with 4 queue", - 4, - iperf_data_cbdma_disable_4_queue, - ] - ) - - self.logger.info("Re-launch without CBDMA and with 1 queue") - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=False, - no_pci=False, - client_mode=True, - enable_queues=4, - nb_cores=4, - rxq_txq=1, - ) - self.config_vm_env(combined=True, rxq_txq=1) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_disable_1_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - [ - "Disable", - "mergeable path without CBDMA with 1 queue", - 1, - iperf_data_cbdma_disable_1_queue, - ] - ) - - self.table_header = ["CBDMA Enable/Disable", "Mode", "rxq/txq", "Gbits/sec"] - self.result_table_create(self.table_header) - for table_row in ipef_result: - self.result_table_add(table_row) - self.result_table_print() - self.verify( - iperf_data_cbdma_enable_8_queue > iperf_data_cbdma_disable_4_queue, - "CMDMA enable: %s is lower than CBDMA disable: %s" - % (iperf_data_cbdma_enable_8_queue, iperf_data_cbdma_disable_4_queue), - ) - - def test_vm2vm_split_ring_with_no_mergeable_path_check_large_packet_and_cbdma_enable_8queue( - self, - ): - """ - TestCase6: VM2VM virtio-net split ring non-mergeable CBDMA enable test with large packet payload valid check - """ - ipef_result = [] - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - - self.logger.info("Launch vhost-testpmd with CBDMA and used 8 queue") - self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - server_mode=True, - opt_queue=8, - combined=True, - rxq_txq=8, - ) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue = self.start_iperf_and_verify_vhost_xstats_info( - iperf_mode="tso" - ) - ipef_result.append( - ["Enable", "no-mergeable path", 8, iperf_data_cbdma_enable_8_queue] - ) - - self.logger.info("Re-launch without CBDMA and used 8 queue") - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=False, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - rxq_txq=8, - ) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_disable_8_queue = ( - 
self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - ["Disable", "no-mergeable path", 8, iperf_data_cbdma_disable_8_queue] - ) - - self.logger.info("Re-launch without CBDMA and used 1 queue") - self.vhost.send_expect("quit", "# ", 30) - self.start_vhost_testpmd( - cbdma=False, - no_pci=False, - client_mode=True, - enable_queues=8, - nb_cores=4, - rxq_txq=1, - ) - self.config_vm_env(combined=True, rxq_txq=1) - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_disable_1_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - ["Disable", "no-mergeable path", 1, iperf_data_cbdma_disable_1_queue] - ) - - self.table_header = ["CBDMA Enable/Disable", "Mode", "rxq/txq", "Gbits/sec"] - self.result_table_create(self.table_header) - for table_row in ipef_result: - self.result_table_add(table_row) - self.result_table_print() - self.verify( - iperf_data_cbdma_enable_8_queue > iperf_data_cbdma_disable_8_queue, - "CMDMA enable: %s is lower than CBDMA disable: %s" - % (iperf_data_cbdma_enable_8_queue, iperf_data_cbdma_disable_8_queue), - ) - def test_vm2vm_packed_ring_iperf_with_tso(self): """ - TestCase7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic + Test Case 4: VM2VM packed ring vhost-user/virtio-net test with tcp traffic """ self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" self.prepare_test_env( - cbdma=False, no_pci=True, client_mode=False, enable_queues=1, @@ -784,32 +432,12 @@ class TestVM2VMVirtioNetPerf(TestCase): ) self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - def test_vm2vm_packed_ring_iperf_with_tso_and_cbdma_enable(self): - """ - TestCase8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic - """ - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2) - self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=None, - combined=False, - rxq_txq=None, - ) - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - def test_vm2vm_packed_ring_iperf_with_ufo(self): """ - Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp trafficc + Test Case 5: VM2VM packed ring vhost-user/virtio-net test with udp traffic """ self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" self.prepare_test_env( - cbdma=False, no_pci=True, client_mode=False, enable_queues=1, @@ -823,11 +451,10 @@ class TestVM2VMVirtioNetPerf(TestCase): def test_vm2vm_packed_ring_device_capbility(self): """ - Test Case 10: Check packed ring virtio-net device capability + Test Case 6: Check packed ring virtio-net device capability """ self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" self.start_vhost_testpmd( - cbdma=False, no_pci=True, client_mode=False, enable_queues=1, @@ -838,153 +465,12 @@ class TestVM2VMVirtioNetPerf(TestCase): self.offload_capbility_check(self.vm_dut[0]) self.offload_capbility_check(self.vm_dut[1]) - def test_vm2vm_packed_ring_with_mergeable_path_check_large_packet_and_cbdma_enable_8queue( - self, - ): - """ - Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid 
check - """ - ipef_result = [] - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - - self.logger.info("Launch vhost-testpmd with CBDMA and used 8 queue") - self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=8, - nb_cores=4, - server_mode=False, - opt_queue=8, - combined=True, - rxq_txq=8, - ) - for i in range(0, 5): - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - ["Enable_%d" % i, "mergeable path", 8, iperf_data_cbdma_enable_8_queue] - ) - self.table_header = ["CBDMA Enable/Disable", "Mode", "rxq/txq", "Gbits/sec"] - self.result_table_create(self.table_header) - for table_row in ipef_result: - self.result_table_add(table_row) - self.result_table_print() - - def test_vm2vm_packed_ring_with_no_mergeable_path_check_large_packet_and_cbdma_enable_8queue( - self, - ): - """ - Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check - """ - ipef_result = [] - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - - self.logger.info("Launch vhost-testpmd with CBDMA and used 8 queue") - self.vm_args = "disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=8, - nb_cores=4, - server_mode=False, - opt_queue=8, - combined=True, - rxq_txq=8, - ) - for i in range(0, 5): - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - ["Enable", "mergeable path", 8, iperf_data_cbdma_enable_8_queue] - ) - self.table_header = ["CBDMA Enable/Disable", "Mode", "rxq/txq", "Gbits/sec"] - self.result_table_create(self.table_header) - for table_row in ipef_result: - self.result_table_add(table_row) - self.result_table_print() - - def test_vm2vm_packed_ring_with_tso_and_cbdma_enable_iova_pa(self): - """ - Test Case 13: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when set iova=pa - """ - # This test case need to test on 1G guest hugepage ENV. - self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2) - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=1, - nb_cores=2, - server_mode=False, - opt_queue=1, - combined=False, - rxq_txq=None, - iova_mode="pa", - ) - self.check_scp_file_valid_between_vms() - cbdma_value = self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - expect_value = self.get_suite_cfg()["expected_throughput"][ - "test_vm2vm_split_ring_iperf_with_tso" - ] - self.verify( - cbdma_value > expect_value, - "CBDMA enable performance: %s is lower than CBDMA disable: %s." 
- % (cbdma_value, expect_value), - ) - - def test_vm2vm_packed_ring_with_mergeable_path_check_large_packet_and_cbdma_enable_8queue_iova_pa( - self, - ): - """ - Test Case 14: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check - """ - # This test case need to test on 1G guest hugepage ENV. - ipef_result = [] - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - - self.logger.info("Launch vhost-testpmd with CBDMA and used 8 queue") - self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" - self.prepare_test_env( - cbdma=True, - no_pci=False, - client_mode=False, - enable_queues=8, - nb_cores=4, - server_mode=False, - opt_queue=8, - combined=True, - rxq_txq=8, - iova_mode="pa", - ) - for i in range(0, 5): - self.check_scp_file_valid_between_vms() - iperf_data_cbdma_enable_8_queue = ( - self.start_iperf_and_verify_vhost_xstats_info(iperf_mode="tso") - ) - ipef_result.append( - ["Enable_%d" % i, "mergeable path", 8, iperf_data_cbdma_enable_8_queue] - ) - self.table_header = ["CBDMA Enable/Disable", "Mode", "rxq/txq", "Gbits/sec"] - self.result_table_create(self.table_header) - for table_row in ipef_result: - self.result_table_add(table_row) - self.result_table_print() - def tear_down(self): """ run after each test case. """ self.stop_all_apps() self.dut.kill_all() - self.bind_cbdma_device_to_kernel() def tear_down_all(self): """