From patchwork Fri Nov 11 07:38:30 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 119773
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 1/3] test_plans/basic_4k_pages_dsa_test_plan: modify
 the dmas parameter
Date: Fri, 11 Nov 2022 15:38:30 +0800
Message-Id: <20221111073830.2424805-1-weix.ling@intel.com>
List-Id: test suite reviews and discussions

From DPDK-22.11, the dmas parameter has changed from
`lcore-dma=[lcore1@0000:00:04.0]` to `dmas=[txq0@0000:00:04.0]` by a DPDK
local patch, so modify the dmas parameter accordingly.

Signed-off-by: Wei Ling
---
 test_plans/basic_4k_pages_dsa_test_plan.rst | 502 ++++++++++----------
 1 file changed, 246 insertions(+), 256 deletions(-)
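The change, as a minimal before/after sketch (the queue list and the DSA
address 0000:e7:01.0 are illustrative examples taken from the test plan
below)::

    # Before: queues listed in dmas, with a separate lcore-to-DMA-channel mapping
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;rxq0]' \
    -- ... --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1]

    # After: each queue is tied to its DMA channel inside dmas itself,
    # and the --lcore-dma option is dropped
    --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q1]'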
diff --git a/test_plans/basic_4k_pages_dsa_test_plan.rst b/test_plans/basic_4k_pages_dsa_test_plan.rst
index 69cf7fc1..c37a1a51 100644
--- a/test_plans/basic_4k_pages_dsa_test_plan.rst
+++ b/test_plans/basic_4k_pages_dsa_test_plan.rst
@@ -3,7 +3,7 @@
 =============================================
 Basic 4k-pages test with DSA driver test plan
-==============================================
+=============================================

 Description
 ===========
@@ -22,9 +22,9 @@ and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 5. Vhost-user using 1G hugepages and virtio-user using 4k-pages.

 Note:
-1. When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
+1. When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode, page-by-page mapping may
 exceed IOMMU's max capability, better to use 1G guest hugepage.
-2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
+2. A DPDK local patch for the vhost PMD is needed when testing the Vhost asynchronous data path with testpmd.

 Prerequisites
 =============
@@ -41,13 +41,13 @@ General set up

     CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
     ninja -C x86_64-native-linuxapp-gcc -j 110

-3. Get the PCI device ID and DSA device ID of DUT, for example, 0000:6a:00.0 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
+3. Get the PCI device ID and DSA device ID of DUT, for example, 0000:27:00.0 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::

     # ./usertools/dpdk-devbind.py -s

     Network devices using kernel driver
     ===================================
-    0000:6a:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
+    0000:27:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci

     DMA devices using kernel driver
     ===============================
@@ -112,28 +112,27 @@ Common steps
     Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"

 Test Case 1: PVP split ring multi-queues with 4K-pages and dsa dpdk driver
-------------------------------------------------------------------------------
+--------------------------------------------------------------------------
 This case tests split ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa dpdk driver.

 1. Bind 2 dsa device and one nic port to vfio-pci like common step 1-2::

-    # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
+    # ./usertools/dpdk-devbind.py -b vfio-pci 27:00.0
     # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0

 2. Launch vhost by below command::

-    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:6a:00.0 -a 0000:e7:01.0 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-    --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1,lcore12@0000:e7:01.0-q2,lcore12@0000:e7:01.0-q3,lcore13@0000:e7:01.0-q4,lcore13@0000:e7:01.0-q5,lcore14@0000:e7:01.0-q6,lcore14@0000:e7:01.0-q7]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:27:00.0 -a 0000:e7:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
     testpmd>set fwd mac
     testpmd>start

3.
Launch virtio-user with inorder mergeable path:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>start @@ -152,10 +151,9 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir 7. Quit and relaunch vhost with 1G hugepage:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ - --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1,lcore12@0000:e7:01.0-q2,lcore12@0000:e7:01.0-q3,lcore13@0000:ec:01.0-q0,lcore13@0000:ec:01.0-q1,lcore14@0000:ec:01.0-q2,lcore14@0000:ec:01.0-q3] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:27:00.0 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q1;txq3@0000:e7:01.0-q1;txq4@0000:e7:01.0-q2;txq5@0000:e7:01.0-q2;txq6@0000:e7:01.0-q3;txq7@0000:e7:01.0-q3;rxq0@0000:ec:01.0-q0;rxq1@0000:ec:01.0-q0;rxq2@0000:ec:01.0-q1;rxq3@0000:ec:01.0-q1;rxq4@0000:ec:01.0-q2;rxq5@0000:ec:01.0-q2;rxq6@0000:ec:01.0-q3;rxq7@0000:ec:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 testpmd>set fwd mac testpmd>start @@ -163,38 +161,37 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir 9. Quit and relaunch virtio-user with mergeable path:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,server=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>start 10. Rerun step 4-6. Test Case 2: PVP packed ring multi-queues with 4K-pages and dsa dpdk driver ------------------------------------------------------------------------------- +--------------------------------------------------------------------------- This case tests packed ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa dpdk driver. 1. Bind 2 dsa device and one nic port to vfio-pci like common step 1-2:: - # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 - # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 + # ./usertools/dpdk-devbind.py -b vfio-pci 27:00.0 + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 2. 
Launch vhost by below command::

-    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:6a:00.0 -a 0000:f1:01.0 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-    --lcore-dma=[lcore11@0000:f1:01.0-q0,lcore11@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore12@0000:f1:01.0-q3,lcore13@0000:f1:01.0-q4,lcore13@0000:f1:01.0-q5,lcore14@0000:f1:01.0-q6,lcore14@0000:f1:01.0-q7]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:27:00.0 -a 0000:e7:01.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
     testpmd>set fwd mac
     testpmd>start

 3. Launch virtio-user with inorder mergeable path::

-    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
-    -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-    testpmd>set fwd mac
+    -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+    testpmd>set fwd csum
     testpmd>start

 4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data::

@@ -212,10 +209,9 @@ This case tests packed ring with multi-queues can work normally in 4k-pages envi

 7. Quit and relaunch vhost with 1G hugepage::

-    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-    --lcore-dma=[lcore11@0000:f1:01.0-q0,lcore11@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore12@0000:f1:01.0-q3,lcore13@0000:f6:01.0-q0,lcore13@0000:f6:01.0-q1,lcore14@0000:f6:01.0-q2,lcore14@0000:f6:01.0-q3]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:27:00.0 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q1;txq3@0000:e7:01.0-q1;txq4@0000:e7:01.0-q2;txq5@0000:e7:01.0-q2;txq6@0000:e7:01.0-q3;txq7@0000:e7:01.0-q3;rxq0@0000:ec:01.0-q0;rxq1@0000:ec:01.0-q0;rxq2@0000:ec:01.0-q1;rxq3@0000:ec:01.0-q1;rxq4@0000:ec:01.0-q2;rxq5@0000:ec:01.0-q2;rxq6@0000:ec:01.0-q3;rxq7@0000:ec:01.0-q3]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     testpmd>set fwd mac
     testpmd>start

@@ -223,17 +219,19 @@ This case tests packed ring with multi-queues can work normally in 4k-pages envi

 9.
Quit and relaunch virtio-user with mergeable path:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8,server=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd>set fwd mac + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum testpmd>start 10.Rerun step 4-6. Test Case 3: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic --------------------------------------------------------------------------------------------------------- -This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver in 4k-pages environment. +------------------------------------------------------------------------------------------------------ +This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path +by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver +in 4k-pages environment. 1. Bind 1 dsa device to vfio-pci like common step 2:: @@ -242,16 +240,16 @@ This case test the function of Vhost tx offload in the topology of vhost-user/vi 2. Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@0000:f1:01.0-q0,lcore4@0000:f1:01.0-q1] + --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;rxq0@0000:f1:01.0-q0]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:f1:01.0-q1;rxq0@0000:f1:01.0-q1]' \ + --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 testpmd>start 3. 
Launch VM1 and VM2:: - taskset -c 10 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + taskset -c 10 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -260,9 +258,9 @@ This case test the function of Vhost tx offload in the topology of vhost-user/vi -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 - taskset -c 11 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + taskset -c 11 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -291,9 +289,10 @@ This case test the function of Vhost tx offload in the topology of vhost-user/vi testpmd>show port xstats all Test Case 4: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic ---------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------- This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path -by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver in 4k-pages environment. +by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver +in 4k-pages environment. 1. Bind 1 dsa device to vfio-pci like common step 2:: @@ -302,16 +301,16 @@ by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous o 2. 
Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@0000:f1:01.0-q0,lcore4@0000:f1:01.0-q1] + --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;rxq0@0000:f1:01.0-q1]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;rxq0@0000:f1:01.0-q1]' \ + --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 testpmd>start 3. Launch VM1 and VM2:: - taskset -c 32 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + taskset -c 32 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -320,9 +319,9 @@ by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous o -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 - taskset -c 33 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + taskset -c 33 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -351,7 +350,7 @@ by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous o testpmd>show port xstats all Test Case 5: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa dpdk driver ---------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------- This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net multi-queues mergeable path when vhost uses the asynchronous operations with dsa dpdk driver. And one virtio-net is split ring, the other is packed ring. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment. @@ -363,17 +362,16 @@ And one virtio-net is split ring, the other is packed ring. The vhost run in 1G 2. 
Launch vhost:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore3@0000:f6:01.0-q3] + --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;txq3@0000:f1:01.0-q0;txq4@0000:f1:01.0-q1;txq5@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q2;rxq3@0000:f1:01.0-q2;rxq4@0000:f1:01.0-q3;rxq5@0000:f1:01.0-q3;rxq6@0000:f1:01.0-q3;rxq7@0000:f1:01.0-q3]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:f6:01.0-q0;txq1@0000:f6:01.0-q0;txq2@0000:f6:01.0-q0;txq3@0000:f6:01.0-q0;txq4@0000:f6:01.0-q1;txq5@0000:f6:01.0-q1;rxq2@0000:f6:01.0-q2;rxq3@0000:f6:01.0-q2;rxq4@0000:f6:01.0-q3;rxq5@0000:f6:01.0-q3;rxq6@0000:f6:01.0-q3;rxq7@0000:f6:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 testpmd>start 3. Launch VM qemu:: - taskset -c 10 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + taskset -c 10 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -382,9 +380,9 @@ And one virtio-net is split ring, the other is packed ring. The vhost run in 1G -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 - taskset -c 11 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + taskset -c 11 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -417,102 +415,95 @@ And one virtio-net is split ring, the other is packed ring. The vhost run in 1G 8. Relaunch vm1 and rerun step 4-7. 
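Note: the VM launch commands in this case assume the 4k-pages tmpfs backends
(/mnt/tmpfs_4k and /mnt/tmpfs_4k_2) are already mounted. A small helper
sketch along the lines below, reusing the mkdir/mount commands from this
plan, can prepare and verify them before starting QEMU::

    #!/bin/bash
    # Create and mount the 4K-page tmpfs backends used by vm1 and vm2.
    for dir in /mnt/tmpfs_4k /mnt/tmpfs_4k_2; do
        mkdir -p "$dir"
        mountpoint -q "$dir" || mount tmpfs "$dir" -t tmpfs -o size=4G
    done
    mount | grep tmpfs_4k    # both mounts should be listed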
Test Case 6: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa dpdk driver
----------------------------------------------------------------------------------------------------
+------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous operations with dsa dpdk driver.
 The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment.

-1. Bind 2 dsa channel to vfio-pci, launch vhost::
+1. Bind 2 dsa channel to vfio-pci::

-    ls /dev/dsa #check wq configure, reset if exist
-    ./usertools/dpdk-devbind.py -u f1:01.0 f1:01.0
-    ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f1:01.0
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-    --lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore5@0000:f6:01.0-q3]
-    testpmd>start
+    ls /dev/dsa #check wq configure, reset if exist
+    ./usertools/dpdk-devbind.py -u f1:01.0 f6:01.0
+    ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0

-2. Prepare tmpfs with 4K-pages::
+2. Launch vhost::

-    mkdir /mnt/tmpfs_4k
-    mkdir /mnt/tmpfs_4k_2
-    mount tmpfs /mnt/tmpfs_4k -t tmpfs -o size=4G
-    mount tmpfs /mnt/tmpfs_4k_2 -t tmpfs -o size=4G
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;txq3@0000:f1:01.0-q0;txq4@0000:f1:01.0-q1;txq5@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q2;rxq3@0000:f1:01.0-q2;rxq4@0000:f1:01.0-q3;rxq5@0000:f1:01.0-q3;rxq6@0000:f1:01.0-q3;rxq7@0000:f1:01.0-q3]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@0000:f6:01.0-q0;txq1@0000:f6:01.0-q0;txq2@0000:f6:01.0-q0;txq3@0000:f6:01.0-q0;txq4@0000:f6:01.0-q1;txq5@0000:f6:01.0-q1;rxq2@0000:f6:01.0-q2;rxq3@0000:f6:01.0-q2;rxq4@0000:f6:01.0-q3;rxq5@0000:f6:01.0-q3;rxq6@0000:f6:01.0-q3;rxq7@0000:f6:01.0-q3]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start

3.
Launch VM qemu:: - taskset -c 32 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 - - taskset -c 33 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 + taskset -c 32 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 + + taskset -c 33 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device 
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 4. On VM1, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.2 - arp -s 1.1.1.8 52:54:00:00:00:02 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.2 + arp -s 1.1.1.8 52:54:00:00:00:02 5. On VM2, set virtio device IP and run arp protocal:: - ethtool -L ens5 combined 8 - ifconfig ens5 1.1.1.8 - arp -s 1.1.1.2 52:54:00:00:00:01 + ethtool -L ens5 combined 8 + ifconfig ens5 1.1.1.8 + arp -s 1.1.1.2 52:54:00:00:00:01 6. Scp 1MB file form VM1 to VM2:: - Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name + Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name 7. Check the iperf performance between two VMs by below commands:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` 8. Quit and relaunch vhost w/ diff dsa channels:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=2 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \ - --lcore-dma=[lcore2@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q1,lcore4@0000:f6:01.0-q0,lcore5@0000:f6:01.0-q1] - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=2 \ + --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;txq2@0000:f6:01.0-q0;txq3@0000:f6:01.0-q1;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f6:01.0-q0;rxq3@0000:f6:01.0-q1]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;txq2@0000:f6:01.0-q0;txq3@0000:f6:01.0-q1;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f6:01.0-q0;rxq3@0000:f6:01.0-q1]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 + testpmd>start 9. On VM1, set virtio device:: - # ethtool -L ens5 combined 4 + # ethtool -L ens5 combined 4 10. On VM2, set virtio device:: - # ethtool -L ens5 combined 4 + # ethtool -L ens5 combined 4 11. Rerun step 6-7. Test Case 7: PVP split ring multi-queues with 4K-pages and dsa kernel driver --------------------------------------------------------------------------------- +---------------------------------------------------------------------------- This case tests split ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa kernel driver. 1. Bind one nic port to vfio-pci and 2 dsa device to idxd like common step 1 and 3:: - # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 + # ./usertools/dpdk-devbind.py -b vfio-pci 27:00.0 .ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 @@ -523,19 +514,18 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir 2. 
Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:6a:00.0 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \ - --lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:27:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 testpmd>set fwd mac testpmd>start 3. Launch virtio-user with inorder mergeable path:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd>set fwd mac + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum testpmd>start 4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data:: @@ -551,47 +541,55 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir testpmd>start testpmd>show port stats all -7. Quit and relaunch vhost with diff dsa virtual channels and 1G-page:::: +7. Quit and relaunch vhost with diff dsa virtual channels and 1G-page:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ - --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq1.0,lcore14@wq1.1,lcore14@wq1.2] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:27:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.1;txq3@wq0.1;txq4@wq0.2;txq5@wq0.2;txq6@wq0.3;txq7@wq0.3;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.1;rxq3@wq0.1;rxq4@wq0.2;rxq5@wq0.2;rxq6@wq0.3;rxq7@wq0.3]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 testpmd>set fwd mac testpmd>start 8. Rerun step 4-6. +9. Quit and relaunch virtio-user with mergeable path:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,server=1 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +10. Rerun step 4-6. + Test Case 8: PVP packed ring multi-queues with 4K-pages and dsa kernel driver ---------------------------------------------------------------------------------- +----------------------------------------------------------------------------- This case tests split ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa kernel driver. 1. 
Bind one nic port to vfio-pci and 2 dsa device to idxd like common step 1 and 3:: - # ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0 + # ./usertools/dpdk-devbind.py -b vfio-pci 27:00.0 .ls /dev/dsa #check wq configure, reset if exist - # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1 ls /dev/dsa #check wq configure success 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:6a:00.0 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \ - --lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:27:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 testpmd>set fwd mac testpmd>start 3. Launch virtio-user with inorder mergeable path:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd>set fwd mac + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum testpmd>start 4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data:: @@ -609,17 +607,26 @@ This case tests split ring with multi-queues can work normally in 4k-pages envir 7. Quit and relaunch vhost:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ - --lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore12@wq1.0,lcore2@wq1.1] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:27:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;txq6@wq0.1;txq7@wq0.1;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.0;rxq3@wq0.0;rxq4@wq0.1;rxq5@wq0.1;rxq6@wq0.1;rxq7@wq0.1]' \ + --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=8 --rxq=8 testpmd>set fwd mac testpmd>start 8. Rerun step 4-6. +9. 
Quit and relaunch virtio-user with mergeable path:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8,server=1 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start + +10.Rerun step 4-6. + Test Case 9: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic ---------------------------------------------------------------------------------------------------------- +-------------------------------------------------------------------------------------------------------- This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver in 4k-pages environment. @@ -629,22 +636,22 @@ in 4k-pages environment. ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 ls /dev/dsa #check wq configure success 2. Launch the Vhost sample by below commands:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0;rxq0]' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --no-numa --socket-num=0 --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3] + --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@wq0.2;rxq0@wq0.3]' \ + --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --no-numa --socket-num=0 testpmd>start 3. Launch VM1 and VM2 on socket 1:: - taskset -c 7 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + taskset -c 7 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -653,9 +660,9 @@ in 4k-pages environment. 
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - taskset -c 8 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + taskset -c 8 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -684,33 +691,32 @@ in 4k-pages environment. testpmd>show port xstats all Test Case 10: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic ------------------------------------------------------------------------------------------------------------ +---------------------------------------------------------------------------------------------------------- This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver in 4k-pages environment. -1. Bind 2 dsa device to idxd like common step 2:: +1. Bind 1 dsa device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist - # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 - # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + # ./usertools/dpdk-devbind.py -u 6a:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 ls /dev/dsa #check wq configure success 2. Launch the Vhost sample by below commands:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0;rxq0]' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@wq0.0,lcore4@wq1.0] + --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@wq0.0;rxq0@wq0.0]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@wq0.1;rxq0@wq0.1]' \ + --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 testpmd>start 3. 
Launch VM1 and VM2 with qemu:: - taskset -c 7 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + taskset -c 7 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -719,9 +725,9 @@ in 4k-pages environment. -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 - taskset -c 8 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + taskset -c 8 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -750,40 +756,33 @@ in 4k-pages environment. testpmd>show port xstats all Test Case 11: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa kernel driver ------------------------------------------------------------------------------------------------------------ +---------------------------------------------------------------------------------------------------------- This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net split and packed ring mergeable path when vhost uses the asynchronous operations with dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment. -1. Bind 8 dsa device to idxd like common step 3:: +1. Bind 2 dsa device to idxd like common step 3:: ls /dev/dsa #check wq configure, reset if exist - # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0 - # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 3 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 5 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6 - # .//drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 7 + # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1 ls /dev/dsa #check wq configure success 2. 
Launch vhost:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@wq0.0,lcore2@wq1.1,lcore2@wq2.2,lcore2@wq3.3,lcore3@wq0.0,lcore3@wq2.2,lcore3@wq4.4,lcore3@wq5.5,lcore3@wq6.6,lcore3@wq7.7,lcore4@wq1.1,lcore4@wq3.3,lcore4@wq0.1,lcore4@wq1.2,lcore4@wq2.3,lcore4@wq3.4,lcore4@wq4.5,lcore4@wq5.6,lcore4@wq6.7,lcore5@wq7.0] + --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@wq0.2;txq1@wq0.2;txq2@wq0.2;txq3@wq0.2;txq4@wq0.3;txq5@wq0.3;rxq2@wq1.2;rxq3@wq1.2;rxq4@wq1.3;rxq5@wq1.3;rxq6@wq1.3;rxq7@wq1.3]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 testpmd>start 3. Launch VM qemu:: - taskset -c 32 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + taskset -c 32 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \ -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -792,9 +791,9 @@ dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 - taskset -c 33 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + taskset -c 33 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ @@ -827,95 +826,86 @@ dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p 8. Relaunch vm1 and rerun step 4-7. 
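Note: the WQ layout assumed by the dmas lists above (wq0.0-wq0.3 and
wq1.0-wq1.3) can be prepared and checked with a short helper like this
sketch, reusing the exact commands from common step 3 (device IDs follow
this plan; adjust to the DUT)::

    #!/bin/bash
    # Bind two DSA devices to the idxd kernel driver and create 4 WQs on each.
    ls /dev/dsa    # check existing WQ configuration, reset if present
    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
    ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
    ls /dev/dsa    # expect wq0.0-wq0.3 and wq1.0-wq1.3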
Test Case 12: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa kernel driver ------------------------------------------------------------------------------------------------------ +--------------------------------------------------------------------------------------------------- This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous operations with dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment. 1. Bind 2 dsa channel to idxd, launch vhost:: - ls /dev/dsa #check wq configure, reset if exist - ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 - ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - ./drivers/raw/ioat/dpdk_idxd_cfg.py -q 4 0 - ./drivers/raw/ioat/dpdk_idxd_cfg.py -q 4 1 + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + ./drivers/raw/ioat/dpdk_idxd_cfg.py -q 4 0 + ./drivers/raw/ioat/dpdk_idxd_cfg.py -q 4 1 2. Launch vhost:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3,lcore4@wq1.0,lcore4@wq1.1,lcore5@wq1.2,lcore5@wq1.3] - testpmd>start - -3. Prepare tmpfs with 4K-pages:: - - mkdir /mnt/tmpfs_4k - mkdir /mnt/tmpfs_4k_2 - mount tmpfs /mnt/tmpfs_4k -t tmpfs -o size=4G - mount tmpfs /mnt/tmpfs_4k_2 -t tmpfs -o size=4G - -4. Launch VM qemu:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \ + --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 + testpmd>start - taskset -c 10 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img \ - -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 +3. 
Launch VM qemu::

-    taskset -c 11 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net1,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12

+    taskset -c 10 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+    taskset -c 11 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12

-5. On VM1, set virtio device IP and run arp protocal::

-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02

+4. On VM1, set virtio device IP and run arp protocol::

+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02

-6. On VM2, set virtio device IP and run arp protocal::

-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01

+5. On VM2, set virtio device IP and run arp protocol::

+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01

-7. Scp 1MB file form VM1 to VM2::

-    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name

+6. Scp 1MB file from VM1 to VM2::

+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name

-8.
Check the iperf performance between two VMs by below commands:: +7. Check the iperf performance between two VMs by below commands:: - Under VM1, run: `iperf -s -i 1` - Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` + Under VM1, run: `iperf -s -i 1` + Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` -9. Quit and relaunch vhost w/ diff dsa channels:: +8. Quit and relaunch vhost w/ diff dsa channels:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ - --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \ - --lcore-dma=[lcore2@wq0.0,lcore3@wq0.1,lcore4@wq1.0,lcore5@wq1.1] - testpmd>start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.2;rxq3@wq0.3]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.2;rxq3@wq0.3]' \ + --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 + testpmd>start -10. On VM1, set virtio device:: +9. On VM1, set virtio device:: - # ethtool -L ens5 combined 4 + # ethtool -L ens5 combined 4 -11. On VM2, set virtio device:: +10. On VM2, set virtio device:: - # ethtool -L ens5 combined 4 + # ethtool -L ens5 combined 4 -12. Rerun step 6-7. +11. Rerun step 6-7. From patchwork Fri Nov 11 07:38:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 119774 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 30E5EA0542; Fri, 11 Nov 2022 08:45:25 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2C047410F2; Fri, 11 Nov 2022 08:45:25 +0100 (CET) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by mails.dpdk.org (Postfix) with ESMTP id C30E040F16 for ; Fri, 11 Nov 2022 08:45:22 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1668152723; x=1699688723; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=LxGK9Xb2fqvP2hizD9hqErieZ0UaFwBGPAckW1MTQWI=; b=XFRwBtIufnJU1cJeC6TZi83VHHfeQce5vbzzCA9iA5mIYRDeDVXbUV2j DcGxC3fcbvTFscGU8X/xGLyWgy8u5QnL8k4Po02S84eMwMh8Ya56EohVn It2SQ68xO7xO3Ad9gViXcAcoItcw/zXjvWiUuWxjoWgYh1ysli8LaUNQU 0J3BhZqwDptM6TqLvtTFv/0nxzVUp/hw3NNsm1BT2FsUBsi7QwbWbcAyH 8ioAT/sbcSD12Sd0fHIamkxXbKEs2f9p8uraqTyE88UkrBAaO986Y0BpK sbhbDUB5MhyTqb0MTNvMpxoX0utCfAqsETVdakhPdE64VvI4EIWJpTLSv A==; X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="294906807" X-IronPort-AV: E=Sophos;i="5.96,156,1665471600"; d="scan'208";a="294906807" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 23:45:21 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="726703005" X-IronPort-AV: E=Sophos;i="5.96,156,1665471600"; d="scan'208";a="726703005" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by fmsmga003-auth.fm.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 23:45:19 -0800 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 2/3] tests/basic_4k_pages_dsa: add the new testsuite Date: Fri, 11 Nov 2022 15:38:40 +0800 Message-Id: <20221111073840.2424865-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Upstream the new testsuite TestSuite_basic_4k_pages_dsa.py. Signed-off-by: Wei Ling --- tests/TestSuite_basic_4k_pages_dsa.py | 1677 +++++++++++++++++++++++++ 1 file changed, 1677 insertions(+) create mode 100644 tests/TestSuite_basic_4k_pages_dsa.py diff --git a/tests/TestSuite_basic_4k_pages_dsa.py b/tests/TestSuite_basic_4k_pages_dsa.py new file mode 100644 index 00000000..1c22a1b8 --- /dev/null +++ b/tests/TestSuite_basic_4k_pages_dsa.py @@ -0,0 +1,1677 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2022 Intel Corporation +# + +""" +DPDK Test suite. +vhost/virtio-user pvp with 4K pages. +""" + +import os +import random +import re +import string +import time + +from framework.config import VirtConf +from framework.packet import Packet +from framework.pktgen import PacketGeneratorHelper +from framework.pmd_output import PmdOutput +from framework.qemu_kvm import QEMUKvm +from framework.settings import CONFIG_ROOT_PATH, HEADER_SIZE +from framework.test_case import TestCase + +from .dsadev_common import DsaDev_common as DC + + +class TestBasic4kPagesDsa(TestCase): + def get_virt_config(self, vm_name): + conf = VirtConf(CONFIG_ROOT_PATH + os.sep + self.suite_name + ".cfg") + conf.load_virt_config(vm_name) + virt_conf = conf.get_virt_config() + return virt_conf + + def set_up_all(self): + """ + Run at the start of each test suite. 
+        """
+        self.dut_ports = self.dut.get_ports()
+        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+        self.cores_num = len([n for n in self.dut.cores if int(n["socket"]) == 0])
+        self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing")
+        self.verify(
+            self.cores_num >= 4,
+            "There are not enough cores to test this suite %s" % self.suite_name,
+        )
+        self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket)
+        self.vhost_core_list = self.cores_list[0:9]
+        self.virtio0_core_list = self.cores_list[9:14]
+        self.vhost_user = self.dut.new_session(suite="vhost-user")
+        self.virtio_user0 = self.dut.new_session(suite="virtio-user")
+        self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user)
+        self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0)
+        self.frame_sizes = [64, 128, 256, 512, 1024, 1518]
+        self.out_path = "/tmp/%s" % self.suite_name
+        out = self.tester.send_expect("ls -d %s" % self.out_path, "# ")
+        if "No such file or directory" in out:
+            self.tester.send_expect("mkdir -p %s" % self.out_path, "# ")
+        # create an instance to set stream field setting
+        self.pktgen_helper = PacketGeneratorHelper()
+        self.number_of_ports = 1
+        self.app_testpmd_path = self.dut.apps_name["test-pmd"]
+        self.testpmd_name = self.app_testpmd_path.split("/")[-1]
+        self.virtio_mac = "00:01:02:03:04:05"
+        self.vm_num = 2
+        self.virtio_ip1 = "1.1.1.1"
+        self.virtio_ip2 = "1.1.1.2"
+        self.virtio_mac1 = "52:54:00:00:00:01"
+        self.virtio_mac2 = "52:54:00:00:00:02"
+        self.base_dir = self.dut.base_dir.replace("~", "/root")
+        self.random_string = string.ascii_letters + string.digits
+        self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["tcp"]
+        self.DC = DC(self)
+
+        self.mount_tmpfs_for_4k(number=2)
+        self.vm0_virt_conf = self.get_virt_config(vm_name="vm0")
+        for param in self.vm0_virt_conf:
+            if "cpu" in param.keys():
+                self.vm0_cpupin = param["cpu"][0]["cpupin"]
+                self.vm0_lcore = ",".join(list(self.vm0_cpupin.split()))
+                self.vm0_lcore_smp = len(list(self.vm0_cpupin.split()))
+            if "qemu" in param.keys():
+                self.vm0_qemu_path = param["qemu"][0]["path"]
+            if "mem" in param.keys():
+                self.vm0_mem_size = param["mem"][0]["size"]
+            if "disk" in param.keys():
+                self.vm0_image_path = param["disk"][0]["file"]
+            if "vnc" in param.keys():
+                self.vm0_vnc = param["vnc"][0]["displayNum"]
+            if "login" in param.keys():
+                self.vm0_user = param["login"][0]["user"]
+                self.vm0_passwd = param["login"][0]["password"]
+
+        self.vm1_virt_conf = self.get_virt_config(vm_name="vm1")
+        for param in self.vm1_virt_conf:
+            if "cpu" in param.keys():
+                self.vm1_cpupin = param["cpu"][0]["cpupin"]
+                self.vm1_lcore = ",".join(list(self.vm1_cpupin.split()))
+                self.vm1_lcore_smp = len(list(self.vm1_cpupin.split()))
+            if "qemu" in param.keys():
+                self.vm1_qemu_path = param["qemu"][0]["path"]
+            if "mem" in param.keys():
+                self.vm1_mem_size = param["mem"][0]["size"]
+            if "disk" in param.keys():
+                self.vm1_image_path = param["disk"][0]["file"]
+            if "vnc" in param.keys():
+                self.vm1_vnc = param["vnc"][0]["displayNum"]
+            if "login" in param.keys():
+                self.vm1_user = param["login"][0]["user"]
+                self.vm1_passwd = param["login"][0]["password"]
+
+    def set_up(self):
+        """
+        Run before each test case.
+ """ + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "# ") + self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") + self.dut.send_expect("rm -rf /root/dpdk/vhost-net*", "# ") + # Prepare the result table + self.table_header = ["Frame"] + self.table_header.append("Mode") + self.table_header.append("Mpps") + self.table_header.append("% linerate") + self.result_table_create(self.table_header) + self.vm_dut = [] + self.vm = [] + self.use_dsa_list = [] + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + self.packed = False + + def start_vm0(self, packed=False, queues=1, server=False): + packed_param = ",packed=on" if packed else "" + server = ",server" if server else "" + self.qemu_cmd0 = ( + f"taskset -c {self.vm0_lcore} {self.vm0_qemu_path} -name vm0 -enable-kvm " + f"-pidfile /tmp/.vm0.pid -daemonize -monitor unix:/tmp/vm0_monitor.sock,server,nowait " + f"-netdev user,id=nttsip1,hostfwd=tcp:%s:6000-:22 -device e1000,netdev=nttsip1 " + f"-chardev socket,id=char0,path=/root/dpdk/vhost-net0{server} " + f"-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues={queues} " + f"-device virtio-net-pci,netdev=netdev0,mac=%s," + f"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on{packed_param} " + f"-cpu host -smp {self.vm0_lcore_smp} -m {self.vm0_mem_size} -object memory-backend-file,id=mem,size={self.vm0_mem_size}M,mem-path=/mnt/tmpfs_nohuge0,share=on " + f"-numa node,memdev=mem -mem-prealloc -drive file={self.vm0_image_path} " + f"-chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial " + f"-device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -vnc :{self.vm0_vnc} " + ) + + self.vm0_session = self.dut.new_session(suite="vm0_session") + cmd0 = self.qemu_cmd0 % ( + self.dut.get_ip_address(), + self.virtio_mac1, + ) + self.vm0_session.send_expect(cmd0, "# ") + time.sleep(10) + self.vm0_dut = self.connect_vm0() + self.verify(self.vm0_dut is not None, "vm start fail") + self.vm_session = self.vm0_dut.new_session(suite="vm_session") + + def start_vm1(self, packed=False, queues=1, server=False): + packed_param = ",packed=on" if packed else "" + server = ",server" if server else "" + self.qemu_cmd1 = ( + f"taskset -c {self.vm1_lcore} {self.vm1_qemu_path} -name vm1 -enable-kvm " + f"-pidfile /tmp/.vm1.pid -daemonize -monitor unix:/tmp/vm1_monitor.sock,server,nowait " + f"-netdev user,id=nttsip1,hostfwd=tcp:%s:6001-:22 -device e1000,netdev=nttsip1 " + f"-chardev socket,id=char0,path=/root/dpdk/vhost-net1{server} " + f"-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues={queues} " + f"-device virtio-net-pci,netdev=netdev0,mac=%s," + f"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on{packed_param} " + f"-cpu host -smp {self.vm1_lcore_smp} -m {self.vm1_mem_size} -object memory-backend-file,id=mem,size={self.vm1_mem_size}M,mem-path=/mnt/tmpfs_nohuge1,share=on " + f"-numa node,memdev=mem -mem-prealloc -drive file={self.vm1_image_path} " + f"-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial " + f"-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.0 -vnc :{self.vm1_vnc} " + ) + self.vm1_session = self.dut.new_session(suite="vm1_session") + cmd1 = self.qemu_cmd1 % ( + self.dut.get_ip_address(), + self.virtio_mac2, + ) + self.vm1_session.send_expect(cmd1, "# ") + time.sleep(10) + self.vm1_dut = self.connect_vm1() + self.verify(self.vm1_dut is not None, "vm 
start fail") + self.vm_session = self.vm1_dut.new_session(suite="vm_session") + + def connect_vm0(self): + self.vm0 = QEMUKvm(self.dut, "vm0", self.suite_name) + self.vm0.net_type = "hostfwd" + self.vm0.hostfwd_addr = "%s:6000" % self.dut.get_ip_address() + self.vm0.def_driver = "vfio-pci" + self.vm0.driver_mode = "noiommu" + self.wait_vm_net_ready(vm_index=0) + vm_dut = self.vm0.instantiate_vm_dut(autodetect_topo=False, bind_dev=False) + if vm_dut: + return vm_dut + else: + return None + + def connect_vm1(self): + self.vm1 = QEMUKvm(self.dut, "vm1", "vm_hotplug") + self.vm1.net_type = "hostfwd" + self.vm1.hostfwd_addr = "%s:6001" % self.dut.get_ip_address() + self.vm1.def_driver = "vfio-pci" + self.vm1.driver_mode = "noiommu" + self.wait_vm_net_ready(vm_index=1) + vm_dut = self.vm1.instantiate_vm_dut(autodetect_topo=False, bind_dev=False) + if vm_dut: + return vm_dut + else: + return None + + def wait_vm_net_ready(self, vm_index=0): + self.vm_net_session = self.dut.new_session(suite="vm_net_session") + self.start_time = time.time() + cur_time = time.time() + time_diff = cur_time - self.start_time + while time_diff < 120: + try: + out = self.vm_net_session.send_expect( + "~/QMP/qemu-ga-client --address=/tmp/vm%s_qga0.sock ifconfig" + % vm_index, + "#", + ) + except Exception as EnvironmentError: + pass + if "10.0.2" in out: + pos = self.vm0.hostfwd_addr.find(":") + ssh_key = ( + "[" + + self.vm0.hostfwd_addr[:pos] + + "]" + + self.vm0.hostfwd_addr[pos:] + ) + os.system("ssh-keygen -R %s" % ssh_key) + break + time.sleep(1) + cur_time = time.time() + time_diff = cur_time - self.start_time + self.dut.close_session(self.vm_net_session) + + def send_imix_and_verify(self, mode): + """ + Send imix packet with packet generator and verify + """ + frame_sizes = [64, 128, 256, 512, 1024, 1518] + tgenInput = [] + for frame_size in frame_sizes: + payload_size = frame_size - self.headers_size + port = self.tester.get_local_port(self.dut_ports[0]) + fields_config = { + "ip": { + "src": {"action": "random"}, + }, + } + pkt = Packet() + pkt.assign_layers(["ether", "ipv4", "tcp", "raw"]) + pkt.config_layers( + [ + ("ether", {"dst": "%s" % self.virtio_mac}), + ("ipv4", {"src": "1.1.1.1"}), + ("raw", {"payload": ["01"] * int("%d" % payload_size)}), + ] + ) + pkt.save_pcapfile( + self.tester, + "%s/%s_%s.pcap" % (self.out_path, self.suite_name, frame_size), + ) + tgenInput.append( + ( + port, + port, + "%s/%s_%s.pcap" % (self.out_path, self.suite_name, frame_size), + ) + ) + + self.tester.pktgen.clear_streams() + streams = self.pktgen_helper.prepare_stream_from_tginput( + tgenInput, 100, fields_config, self.tester.pktgen + ) + bps, pps = self.tester.pktgen.measure_throughput(stream_ids=streams) + Mpps = pps / 1000000.0 + Mbps = bps / 1000000.0 + self.verify( + Mbps > 0, + f"{self.running_case} can not receive packets of frame size {frame_sizes}", + ) + bps_linerate = self.wirespeed(self.nic, 64, 1) * 8 * (64 + 20) + throughput = Mbps * 100 / float(bps_linerate) + results_row = ["imix"] + results_row.append(mode) + results_row.append(Mpps) + results_row.append(throughput) + self.result_table_add(results_row) + + def start_vhost_user_testpmd( + self, + cores, + eal_param="", + param="", + no_pci=False, + ports="", + port_options="", + ): + """ + launch the testpmd as virtio with vhost_user + """ + if not no_pci and port_options != "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + port_options=port_options, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + 
elif not no_pci and port_options == "":
+            self.vhost_user_pmd.start_testpmd(
+                cores=cores,
+                eal_param=eal_param,
+                param=param,
+                ports=ports,
+                prefix="vhost",
+                fixed_prefix=True,
+            )
+        else:
+            self.vhost_user_pmd.start_testpmd(
+                cores=cores,
+                eal_param=eal_param,
+                param=param,
+                no_pci=no_pci,
+                prefix="vhost",
+                fixed_prefix=True,
+            )
+
+    def start_virtio_user0_testpmd(self, cores, eal_param="", param=""):
+        """
+        launch testpmd as virtio-user and connect it to vhost-net0
+        """
+        self.virtio_user0_pmd.start_testpmd(
+            cores=cores,
+            eal_param=eal_param,
+            param=param,
+            no_pci=True,
+            prefix="virtio-user0",
+            fixed_prefix=True,
+        )
+
+    def config_vm_ip(self):
+        """
+        set virtio device IP and run arp protocol
+        """
+        vm1_intf = self.vm0_dut.ports_info[0]["intf"]
+        vm2_intf = self.vm1_dut.ports_info[0]["intf"]
+        self.vm0_dut.send_expect(
+            "ifconfig %s %s" % (vm1_intf, self.virtio_ip1), "#", 10
+        )
+        self.vm1_dut.send_expect(
+            "ifconfig %s %s" % (vm2_intf, self.virtio_ip2), "#", 10
+        )
+        self.vm0_dut.send_expect(
+            "arp -s %s %s" % (self.virtio_ip2, self.virtio_mac2), "#", 10
+        )
+        self.vm1_dut.send_expect(
+            "arp -s %s %s" % (self.virtio_ip1, self.virtio_mac1), "#", 10
+        )
+
+    def config_vm_combined(self, combined=1):
+        """
+        set the number of combined channels on the virtio devices
+        """
+        vm1_intf = self.vm0_dut.ports_info[0]["intf"]
+        vm2_intf = self.vm1_dut.ports_info[0]["intf"]
+        self.vm0_dut.send_expect(
+            "ethtool -L %s combined %d" % (vm1_intf, combined), "#", 10
+        )
+        self.vm1_dut.send_expect(
+            "ethtool -L %s combined %d" % (vm2_intf, combined), "#", 10
+        )
+
+    def check_ping_between_vms(self):
+        ping_out = self.vm0_dut.send_expect(
+            "ping {} -c 4".format(self.virtio_ip2), "#", 20
+        )
+        self.logger.info(ping_out)
+
+    def check_scp_file_valid_between_vms(self, file_size=1024):
+        """
+        scp a file from VM1 to VM2 and check the received data is valid
+        """
+        # default file_size=1024K
+        data = ""
+        for _ in range(file_size * 1024):
+            data += random.choice(self.random_string)
+        self.vm0_dut.send_expect('echo "%s" > /tmp/payload' % data, "# ")
+        # scp this file to vm1
+        out = self.vm1_dut.send_command(
+            "scp root@%s:/tmp/payload /root" % self.virtio_ip1, timeout=5
+        )
+        if "Are you sure you want to continue connecting" in out:
+            self.vm1_dut.send_command("yes", timeout=3)
+        self.vm1_dut.send_command(self.vm0_passwd, timeout=3)
+        # get the file info in vm1, and check it is valid
+        md5_send = self.vm0_dut.send_expect("md5sum /tmp/payload", "# ")
+        md5_revd = self.vm1_dut.send_expect("md5sum /root/payload", "# ")
+        md5_send = md5_send[: md5_send.find(" ")]
+        md5_revd = md5_revd[: md5_revd.find(" ")]
+        self.verify(
+            md5_send == md5_revd, "the received file is different from the sent file"
+        )
+
+    def start_iperf(self):
+        """
+        run iperf between the two VMs
+        """
+        iperf_server = "iperf -s -i 1"
+        iperf_client = "iperf -c {} -i 1 -t 60".format(self.virtio_ip1)
+        self.vm0_dut.send_expect("{} > iperf_server.log &".format(iperf_server), "", 10)
+        self.vm1_dut.send_expect("{} > iperf_client.log &".format(iperf_client), "", 60)
+        time.sleep(60)
+
+    def get_iperf_result(self):
+        """
+        get the iperf test result
+        """
+        self.table_header = ["Mode", "[M|G]bits/sec"]
+        self.result_table_create(self.table_header)
+        self.vm0_dut.send_expect("pkill iperf", "# ")
+        self.vm1_dut.session.copy_file_from("%s/iperf_client.log" % self.dut.base_dir)
+        fp = open("./iperf_client.log")
+        fmsg = fp.read()
+        fp.close()
+        # remove the server report info from msg
+        index = fmsg.find("Server Report")
+        if index != -1:
+            fmsg = fmsg[:index]
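+        # iperf prints one bandwidth figure per interval plus a final summary
+        # line such as "0.0-60.0 sec  1.10 GBytes  157 Mbits/sec"; the regex
+        # below collects every "<value> [M|G]bits/sec" token, and the last
+        # match is taken as the result for the whole run.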
+        iperfdata = re.compile("\S*\s*[M|G]bits/sec").findall(fmsg)
+        # the last entry of the iperf data is the average for the whole run
+        self.verify(len(iperfdata) != 0, "The iperf data between the two VMs is 0")
+        self.logger.info("The iperf data between vms is %s" % iperfdata[-1])
+
+        # put the result to table
+        results_row = ["vm2vm", iperfdata[-1]]
+        self.result_table_add(results_row)
+
+        # print the iperf result
+        self.result_table_print()
+        # remove the iperf log files in the VMs
+        self.vm0_dut.send_expect("rm iperf_server.log", "#", 10)
+        self.vm1_dut.send_expect("rm iperf_client.log", "#", 10)
+
+    def verify_xstats_info_on_vhost(self):
+        """
+        check that both VMs can receive and send large packets to each other
+        """
+        out_tx = self.vhost_user_pmd.execute_cmd("show port xstats 0")
+        out_rx = self.vhost_user_pmd.execute_cmd("show port xstats 1")
+
+        tx_info = re.search("tx_q0_size_1519_max_packets:\s*(\d*)", out_tx)
+        rx_info = re.search("rx_q0_size_1519_max_packets:\s*(\d*)", out_rx)
+
+        self.verify(
+            int(rx_info.group(1)) > 0, "Port 1 did not receive packets greater than 1522 bytes"
+        )
+        self.verify(
+            int(tx_info.group(1)) > 0, "Port 0 did not forward packets greater than 1522 bytes"
+        )
+
+    def mount_tmpfs_for_4k(self, number=1):
+        """
+        Prepare tmpfs with 4K-pages
+        """
+        for num in range(number):
+            self.dut.send_expect("mkdir /mnt/tmpfs_nohuge{}".format(num), "# ")
+            self.dut.send_expect(
+                "mount tmpfs /mnt/tmpfs_nohuge{} -t tmpfs -o size=4G".format(num), "# "
+            )
+
+    def umount_tmpfs_for_4k(self):
+        """
+        Unmount the 4K-pages tmpfs
+        """
+        out = self.dut.send_expect(
+            "mount |grep 'mnt/tmpfs' |awk -F ' ' {'print $3'}", "#"
+        )
+        if out != "":
+            mount_points = out.replace("\r", "").split("\n")
+        else:
+            mount_points = []
+        if len(mount_points) != 0:
+            for mount_info in mount_points:
+                self.dut.send_expect("umount {}".format(mount_info), "# ")
+
+    def check_packets_of_vhost_each_queue(self, queues):
+        self.vhost_user_pmd.execute_cmd("show port stats all")
+        out = self.vhost_user_pmd.execute_cmd("stop")
+        self.logger.info(out)
+        for queue in range(queues):
+            reg = "Queue= %d" % queue
+            index = out.find(reg)
+            rx = re.search("RX-packets:\s*(\d*)", out[index:])
+            tx = re.search("TX-packets:\s*(\d*)", out[index:])
+            rx_packets = int(rx.group(1))
+            tx_packets = int(tx.group(1))
+            self.verify(
+                rx_packets > 0 and tx_packets > 0,
+                "The queue {} rx-packets or tx-packets is 0 about ".format(queue)
+                + "rx-packets: {}, tx-packets: {}".format(rx_packets, tx_packets),
+            )
+
+    def test_perf_pvp_split_ring_multi_queues_with_4k_pages_and_dsa_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 1: PVP split ring multi-queues with 4K-pages and dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci"
+        )
+        dmas = (
+            "txq0@%s-q0;"
+            "txq1@%s-q0;"
+            "txq2@%s-q0;"
+            "txq3@%s-q0;"
+            "txq4@%s-q1;"
+            "txq5@%s-q1;"
+            "rxq2@%s-q2;"
+            "rxq3@%s-q2;"
+            "rxq4@%s-q3;"
+            "rxq5@%s-q3;"
+            "rxq6@%s-q3;"
+            "rxq7@%s-q3"
+            % (
+                self.use_dsa_list[0],
+                self.use_dsa_list[0],
+                self.use_dsa_list[0],
+                self.use_dsa_list[0],
+                self.use_dsa_list[0],
+                self.use_dsa_list[0],
+                self.use_dsa_list[0],
+                self.use_dsa_list[0],
+                self.use_dsa_list[0],
+                self.use_dsa_list[0],
+                self.use_dsa_list[0],
+                self.use_dsa_list[0],
+            )
+        )
+        vhost_eal_param = (
+            "--no-huge -m 1024 --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'"
+            % dmas
+        )
+        vhost_param = (
+            "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=%d"
+            % self.ports_socket
+        )
+        ports = [self.dut.ports_info[0]["pci"]]
+        for i in self.use_dsa_list:
+            
ports.append(i) + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=1,queues=8,server=1" + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="split ring inorder mergeable with 4k page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="split ring inorder mergeable with 4k page restart vhost" + ) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q1;" + "txq3@%s-q1;" + "txq4@%s-q2;" + "txq5@%s-q2;" + "txq6@%s-q3;" + "txq7@%s-q3;" + "rxq0@%s-q0;" + "rxq1@%s-q0;" + "rxq2@%s-q1;" + "rxq3@%s-q1;" + "rxq4@%s-q2;" + "rxq5@%s-q2;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="split ring inorder mergeable with 1G page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="split ring inorder mergeable with 1G page restart vhost" + ) + + self.virtio_user0_pmd.quit() + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=0,queues=8,server=1" + self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="split ring mergeable with 1G page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="split ring mergeable with 1G page restart vhost" + ) + self.result_table_print() + + def test_perf_pvp_packed_ring_multi_queues_with_4k_pages_and_dsa_dpdk_driver( + self, + ): + """ + Test Case 2: PVP packed ring multi-queues with 4K-pages and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci" + ) + dmas = ( + 
"txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--no-huge -m 1024 --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" + % dmas + ) + vhost_param = ( + "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=%d" + % self.ports_socket + ) + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1" + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="packed ring inorder mergeable with 4k page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="packed ring inorder mergeable with 4k page restart vhost" + ) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q1;" + "txq3@%s-q1;" + "txq4@%s-q2;" + "txq5@%s-q2;" + "txq6@%s-q3;" + "txq7@%s-q3;" + "rxq0@%s-q0;" + "rxq1@%s-q0;" + "rxq2@%s-q1;" + "rxq3@%s-q1;" + "rxq4@%s-q2;" + "rxq5@%s-q2;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + ports = [self.dut.ports_info[0]["pci"]] + for i in self.use_dsa_list: + ports.append(i) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="packed ring inorder mergeable with 1G page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="packed ring inorder mergeable with 1G page restart vhost" + ) + + self.virtio_user0_pmd.quit() + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8,server=1" + 
self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="packed ring mergeable with 1G page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="packed ring mergeable with 1G page restart vhost" + ) + self.result_table_print() + + def test_vm2vm_split_ring_vhost_user_virtio_net_4k_pages_and_dsa_dpdk_driver_test_with_tcp_traffic( + self, + ): + """ + Test Case 3: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas1 = "txq0@%s-q0;" "rxq0@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q1;" "rxq0@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--no-huge -m 1024 " + + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[%s]'" % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[%s]'" % dmas2 + ) + vhost_param = ( + " --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=%d" + % self.ports_socket + ) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=False, queues=1, server=False) + self.start_vm1(packed=False, queues=1, server=False) + self.config_vm_ip() + self.check_ping_between_vms() + self.start_iperf() + self.get_iperf_result() + self.verify_xstats_info_on_vhost() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_packed_ring_vhost_user_virtio_net_4k_pages_and_dsa_dpdk_driver_test_with_tcp_traffic( + self, + ): + """ + Test Case 4: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas1 = "txq0@%s-q0;" "rxq0@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;" "rxq0@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--no-huge -m 1024 " + + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[%s]'" % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[%s]'" % dmas2 + ) + vhost_param = ( + " --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=%d" + % self.ports_socket + ) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=True, queues=1, server=False) + self.start_vm1(packed=True, queues=1, server=False) + self.config_vm_ip() + self.check_ping_between_vms() + self.start_iperf() + self.get_iperf_result() + self.verify_xstats_info_on_vhost() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_vhost_virtio_net_split_packed_ring_multi_queues_with_1G_4k_pages_and_dsa_dpdk_driver( + self, + ): + """ + Test Case 5: VM2VM vhost/virtio-net split packed ring 
multi queues with 1G/4k-pages and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci" + ) + dmas1 = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + dmas2 = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[%s]'" % dmas2 + ) + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=False, queues=8, server=False) + self.start_vm1(packed=True, queues=8, server=False) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_vhost_virtio_net_split_ring_multi_queues_with_1G_4k_pages_and_dsa_dpdk_driver( + self, + ): + """ + Test Case 6: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci" + ) + dmas1 = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + dmas2 = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas1 + + " --vdev 
'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas2 + ) + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=False, queues=8, server=True) + self.start_vm1(packed=False, queues=8, server=True) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vhost_user_pmd.quit() + dmas1 = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "txq2@%s-q0;" + "txq3@%s-q1;" + "rxq0@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q0;" + "rxq3@%s-q1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + dmas2 = ( + "txq0@%s-q0;" + "txq1@%s-q1;" + "txq2@%s-q0;" + "txq3@%s-q1;" + "rxq0@%s-q0;" + "rxq1@%s-q1;" + "rxq2@%s-q0;" + "rxq3@%s-q1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas2 + ) + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4" + port_options = { + self.use_dsa_list[0]: "max_queues=2", + self.use_dsa_list[1]: "max_queues=2", + } + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.use_dsa_list, + port_options=port_options, + ) + self.vhost_user_pmd.execute_cmd("start") + self.config_vm_combined(combined=4) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_perf_pvp_split_ring_multi_queues_with_4k_pages_and_dsa_kernel_driver(self): + """ + Test Case 7: PVP split ring multi-queues with 4K-pages and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + self.DC.create_work_queue(work_queue_number=4, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--no-huge -m 1024 --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" + % dmas + ) + vhost_param = ( + "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=%d" + % self.ports_socket + ) + ports = [self.dut.ports_info[0]["pci"]] + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options="", + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=1,queues=8,server=1" + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + 
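# Both sides run on 4K pages here: the vhost above was launched with
+        # --no-huge -m 1024, and the virtio-user below uses the same flags;
+        # the async copies are offloaded to kernel idxd work queues
+        # (wq0.x/wq1.x) rather than to DSA devices bound to vfio-pci.
+        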
self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="split ring inorder mergeable with 4k page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="split ring inorder mergeable with 4k page restart vhost" + ) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.1;" + "txq3@wq0.1;" + "txq4@wq0.2;" + "txq5@wq0.2;" + "txq6@wq0.3;" + "txq7@wq0.3;" + "rxq0@wq0.0;" + "rxq1@wq0.0;" + "rxq2@wq0.1;" + "rxq3@wq0.1;" + "rxq4@wq0.2;" + "rxq5@wq0.2;" + "rxq6@wq0.3;" + "rxq7@wq0.3" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options="", + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="split ring inorder mergeable with 1G page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="split ring inorder mergeable with 1G page restart vhost" + ) + + self.virtio_user0_pmd.quit() + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=0,queues=8,server=1" + self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="split ring mergeable with 1G page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="split ring mergeable with 1G page restart vhost" + ) + self.result_table_print() + + def test_perf_pvp_packed_ring_multi_queues_with_4k_pages_and_dsa_kernel_driver( + self, + ): + """ + Test Case 8: PVP packed ring multi-queues with 4K-pages and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + self.DC.create_work_queue(work_queue_number=4, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--no-huge -m 1024 --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" + % dmas + ) + vhost_param = ( + "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=%d" + % self.ports_socket + ) + ports = [self.dut.ports_info[0]["pci"]] + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options="", + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1" + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + 
eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="packed ring inorder mergeable with 4k page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="packed ring inorder mergeable with 4k page restart vhost" + ) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "txq6@wq0.1;" + "txq7@wq0.1;" + "rxq0@wq0.0;" + "rxq1@wq0.0;" + "rxq2@wq0.0;" + "rxq3@wq0.0;" + "rxq4@wq0.1;" + "rxq5@wq0.1;" + "rxq6@wq0.1;" + "rxq7@wq0.1" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options="", + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="packed ring inorder mergeable with 1G page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="packed ring inorder mergeable with 1G page restart vhost" + ) + + self.virtio_user0_pmd.quit() + virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:01:02:03:04:05,path=vhost-net0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8,server=1" + self.start_virtio_user0_testpmd( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + ) + self.virtio_user0_pmd.execute_cmd("set fwd csum") + self.virtio_user0_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify(mode="packed ring mergeable with 1G page") + self.check_packets_of_vhost_each_queue(queues=8) + + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_and_verify( + mode="packed ring mergeable with 1G page restart vhost" + ) + self.result_table_print() + + def test_vm2vm_split_ring_vhost_user_virtio_net_4k_pages_and_dsa_kernel_driver_test_with_tcp_traffic( + self, + ): + """ + Test Case 9: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic + """ + self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + dmas1 = "txq0@wq0.0;rxq0@wq0.1" + dmas2 = "txq0@wq0.2;rxq0@wq0.3" + vhost_eal_param = ( + "--no-huge -m 1024 " + + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[%s]'" % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[%s]'" % dmas2 + ) + vhost_param = ( + " --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=%d" + % self.ports_socket + ) + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=False, queues=1, server=False) + self.start_vm1(packed=False, queues=1, server=False) + self.config_vm_ip() + self.check_ping_between_vms() + self.start_iperf() + self.get_iperf_result() + self.verify_xstats_info_on_vhost() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_packed_ring_vhost_user_virtio_net_4k_pages_and_dsa_kernel_driver_test_with_tcp_traffic( + self, + ): + """ + Test Case 10: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp 
traffic + """ + self.DC.create_work_queue(work_queue_number=2, dsa_index=0) + dmas1 = "txq0@wq0.0;rxq0@wq0.0" + dmas2 = "txq0@wq0.1;rxq0@wq0.1" + vhost_eal_param = ( + "--no-huge -m 1024 " + + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[%s]'" % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[%s]'" % dmas2 + ) + vhost_param = ( + " --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=%d" + % self.ports_socket + ) + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=True, queues=1, server=False) + self.start_vm1(packed=True, queues=1, server=False) + self.config_vm_ip() + self.check_ping_between_vms() + self.start_iperf() + self.get_iperf_result() + self.verify_xstats_info_on_vhost() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_vhost_virtio_net_split_packed_ring_multi_queues_with_1G_4k_pages_and_dsa_kenel_driver( + self, + ): + """ + Test Case 11: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + self.DC.create_work_queue(work_queue_number=4, dsa_index=1) + dmas1 = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + dmas2 = ( + "txq0@wq0.2;" + "txq1@wq0.2;" + "txq2@wq0.2;" + "txq3@wq0.2;" + "txq4@wq0.3;" + "txq5@wq0.3;" + "rxq2@wq1.2;" + "rxq3@wq1.2;" + "rxq4@wq1.3;" + "rxq5@wq1.3;" + "rxq6@wq1.3;" + "rxq7@wq1.3" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[%s]'" % dmas2 + ) + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=False, queues=8, server=False) + self.start_vm1(packed=True, queues=8, server=False) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def test_vm2vm_vhost_virtio_net_split_ring_multi_queues_with_1G_4k_pages_and_dsa_kernel_driver( + self, + ): + """ + Test Case 12: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + self.DC.create_work_queue(work_queue_number=4, dsa_index=1) + dmas1 = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + dmas2 = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas2 + ) + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" + 
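# With the DSA instances bound to the kernel idxd driver, the DMA
+        # channels are addressed as work queues (wq0.x/wq1.x) in the dmas
+        # list, so testpmd is started with no_pci=True.
+        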
self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + ) + self.vhost_user_pmd.execute_cmd("start") + + self.start_vm0(packed=False, queues=8, server=True) + self.start_vm1(packed=False, queues=8, server=True) + self.config_vm_ip() + self.config_vm_combined(combined=8) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vhost_user_pmd.quit() + dmas1 = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "rxq0@wq0.0;" + "rxq1@wq0.1;" + "rxq2@wq0.2;" + "rxq3@wq0.3" + ) + dmas2 = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "rxq0@wq0.0;" + "rxq1@wq0.1;" + "rxq2@wq0.2;" + "rxq3@wq0.3" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas1 + + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[%s]'" + % dmas2 + ) + vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4" + self.start_vhost_user_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + ) + self.vhost_user_pmd.execute_cmd("start") + self.config_vm_combined(combined=4) + self.check_ping_between_vms() + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_iperf_result() + + self.vm0.stop() + self.vm1.stop() + self.vhost_user_pmd.quit() + + def tear_down(self): + """ + Run after each test case. + """ + self.virtio_user0_pmd.quit() + self.vhost_user_pmd.quit() + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "# ") + self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") + self.dut.send_expect("rm -rf /tmp/vhost-net*", "# ") + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + + def tear_down_all(self): + """ + Run after each test suite. 
+ """ + self.umount_tmpfs_for_4k() + self.dut.close_session(self.vhost_user) + self.dut.close_session(self.virtio_user0) From patchwork Fri Nov 11 07:38:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 119775 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4FE33A0542; Fri, 11 Nov 2022 08:45:33 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4B514427EB; Fri, 11 Nov 2022 08:45:33 +0100 (CET) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by mails.dpdk.org (Postfix) with ESMTP id 70A9142686 for ; Fri, 11 Nov 2022 08:45:31 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1668152731; x=1699688731; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=QIi4yEhifjx3/z2Ju9Aj2VHNSP0v0Xum/nrjcQXLegE=; b=n8sr+YJaLbiZUn8U2Zr2NqGeL9WSYbeovwlNX/+Pz6Kq97vPVm2KZ/s1 ljiRhZg2mStNy3IVG+P57d/y51Ass/R0cvGQSdCr+EaSUwPaVWzc+1OuB IsgIBT9EjXtCTfVsPy38CdATGc3/Qg/Th2ArWqdhcKoSSODGHlx/7jed5 MzgURMYCNBe+b+Vm4+K/Aln55I4s0FmqwuLCsC2b5Dg4HFKp0vpvv8YWV ggjp8CNWm5sYYgzdqU3TlRUkP8cAapNdDX/Cog4+tRdluXZl0jLGzCdtj EQGEHw5ov8F4Dx+rRpSWc6/H1DREerTJxfdz3gwhbJaqD3V7oJ/V5qqy3 g==; X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="299059033" X-IronPort-AV: E=Sophos;i="5.96,156,1665471600"; d="scan'208";a="299059033" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 23:45:30 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="726703043" X-IronPort-AV: E=Sophos;i="5.96,156,1665471600"; d="scan'208";a="726703043" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 23:45:29 -0800 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 3/3] conf/basic_4k_pages_dsa: add the suite config Date: Fri, 11 Nov 2022 15:38:50 +0800 Message-Id: <20221111073850.2424925-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Add the new suite config basic_4k_pages_dsa.cfg. 
Signed-off-by: Wei Ling --- conf/basic_4k_pages_dsa.cfg | 36 ++++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+) create mode 100644 conf/basic_4k_pages_dsa.cfg diff --git a/conf/basic_4k_pages_dsa.cfg b/conf/basic_4k_pages_dsa.cfg new file mode 100644 index 00000000..ed905c2c --- /dev/null +++ b/conf/basic_4k_pages_dsa.cfg @@ -0,0 +1,36 @@ +[vm0] +cpu = + model=host,number=8,cpupin=20 21 22 23 24 25 26 27; +mem = + size=4096,hugepage=yes; +disk = + file=/home/image/ubuntu2004.img; +login = + user=root,password=tester; +vnc = + displayNum=4; +net = + type=user,opt_vlan=2; + type=nic,opt_vlan=2; +daemon = + enable=yes; +qemu = + path=/home/QEMU/qemu-7.1.0/bin/qemu-system-x86_64; +[vm1] +cpu = + model=host,number=8,cpupin=48 49 50 51 52 53 54 55; +mem = + size=4096,hugepage=yes; +disk = + file=/home/image/ubuntu2004_2.img; +login = + user=root,password=tester; +net = + type=nic,opt_vlan=3; + type=user,opt_vlan=3; +vnc = + displayNum=5; +daemon = + enable=yes; +qemu = + path=/home/QEMU/qemu-7.1.0/bin/qemu-system-x86_64;