From patchwork Wed Aug 3 01:36:39 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 114548
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 1/2] test_plans/vm2vm_virtio_net_perf_cbdma_test_plan: modify testplan to test virtio dequeue
Date: Tue, 2 Aug 2022 21:36:39 -0400
Message-Id: <20220803013639.1122052-1-weix.ling@intel.com>

From DPDK-22.07, virtio supports async dequeue for split and packed ring paths, so modify the vm2vm_virtio_net_perf_cbdma test plan to test the split and packed ring async dequeue feature.

Signed-off-by: Wei Ling
---
 .../vm2vm_virtio_net_perf_cbdma_test_plan.rst | 160 +++++++++---------
 1 file changed, 80 insertions(+), 80 deletions(-)
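The recurring change in the diff below is mechanical: every vhost vdev gains tso=1 and its dmas list now names receive queues (rxq) next to the transmit queues (txq), so the asynchronous dequeue path is exercised alongside enqueue. A minimal Python sketch of composing such a vdev argument (build_vhost_vdev is a hypothetical helper, not part of DTS):

    def build_vhost_vdev(index: int, queues: int) -> str:
        # Name both directions so async enqueue (txq) and dequeue (rxq) use DMA.
        dmas = ";".join(
            ["txq%d" % i for i in range(queues)] + ["rxq%d" % i for i in range(queues)]
        )
        return "--vdev 'net_vhost%d,iface=vhost-net%d,queues=%d,tso=1,dmas=[%s]'" % (
            index, index, queues, dmas
        )

    # build_vhost_vdev(0, 1) returns:
    # --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]'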
diff --git a/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst
index 8433b3d4..622d3e82 100644
--- a/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst
+++ b/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst
@@ -10,11 +10,11 @@ Description

 Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way.
 In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
-channels and one DMA channel can be shared by multiple vrings at the same time.Vhost enqueue operation with CBDMA channels is supported
-in both split and packed ring.
+channels and one DMA channel can be shared by multiple vrings at the same time. From DPDK-22.07, Vhost enqueue and dequeue operations with
+CBDMA channels are supported in both split and packed ring.

 This document provides the test plan for testing the following features when Vhost-user using asynchronous data path with CBDMA in VM2VM virtio-net topology.
-1. check Vhost tx offload function by verifing the TSO/cksum in the TCP/IP stack with vm2vm split ring and packed ring
+1. Check Vhost tx offload (TSO) function by verifying the TSO/cksum in the TCP/IP stack with vm2vm split ring and packed ring
 vhost-user/virtio-net mergeable path.
 2.Check the payload of large packet (larger than 1MB) is valid after forwarding packets with vm2vm split ring
 and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
@@ -22,7 +22,7 @@ and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 Note:
 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > 5.1, and packed ring multi-queues not support reconnect in qemu yet.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test.
+2.For split virtqueue virtio-net with multi-queues server mode test, it is better to use qemu version >= 5.2.0, due to qemu (v4.2.0~v5.1.0) having a split ring multi-queues reconnection issue.
 3.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.
 4.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
@@ -36,35 +36,35 @@ Prerequisites

 Topology
 --------
-      Test flow: Virtio-net-->Vhost-->Testpmd-->Vhost-->Virtio-net
+        Test flow: Virtio-net-->Vhost-->Testpmd-->Vhost-->Virtio-net

 Software
 --------
-      iperf
-      qemu: https://download.qemu.org/qemu-6.2.0.tar.xz
+        iperf
+        qemu: https://download.qemu.org/qemu-7.0.0.tar.xz

 General set up
 --------------
 1. Compile DPDK::

-      # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=
-      # ninja -C -j 110
-      For example:
-      CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
-      ninja -C x86_64-native-linuxapp-gcc -j 110
+        # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static
+        # ninja -C -j 110
+        For example:
+        CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+        ninja -C x86_64-native-linuxapp-gcc -j 110
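Step 2 below reads the device listing from dpdk-devbind.py; in patch 2/2 the suite automates this in get_cbdma_ports_info_and_bind_to_dpdk. A minimal sketch, assuming the listing format shown in step 2, of collecting CBDMA channel IDs from that output (parse_cbdma_ids is a hypothetical helper):

    import re

    def parse_cbdma_ids(devbind_output: str, num: int) -> list:
        # CBDMA channels are listed as "<pci id> 'Sky Lake-E CBDMA Registers ...'";
        # keep the first `num` PCI addresses.
        ids = re.findall(r"(\S+) 'Sky Lake-E CBDMA Registers", devbind_output)
        return ids[:num]

    # e.g. parse_cbdma_ids(output, 2) -> ['0000:00:04.0', '0000:00:04.1']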
2. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID::

-	# ./usertools/dpdk-devbind.py -s
+	# ./usertools/dpdk-devbind.py -s

-	Network devices using kernel driver
-	===================================
-	0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+	Network devices using kernel driver
+	===================================
+	0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci

-	DMA devices using kernel driver
-	===============================
-	0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
-	0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+	DMA devices using kernel driver
+	===============================
+	0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+	0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci

 Test case
 =========

 Common steps
 ------------
 1. Bind 2 CBDMA channels to vfio-pci::

-	# ./usertools/dpdk-devbind.py -b vfio-pci
+	# ./usertools/dpdk-devbind.py -b vfio-pci

-	For example, Bind 1 NIC port and 2 CBDMA channels:
-	# ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1
+	For example, Bind 1 NIC port and 2 CBDMA channels:
+	# ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1

-Test Case 1: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic
----------------------------------------------------------------------------------------
-This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path
-by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with CBDMA channels.
+Test Case 1: VM2VM virtio-net split ring CBDMA enable test with tcp traffic
+---------------------------------------------------------------------------
+This case tests the function of Vhost TSO in the topology of vhost-user/virtio-net split ring mergeable path
+by verifying the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with CBDMA channels.

 1. Bind 2 CBDMA channels to vfio-pci, as common step 1.

@@ -89,9 +89,9 @@ by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous e

	# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
	-a 0000:00:04.0 -a 0000:00:04.1 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0],dma_ring_size=2048' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq==1 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]
	testpmd>start
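The --lcore-dma list in the command above pins each CBDMA channel to a polling lcore as lcore<N>@<PCI address> pairs. A minimal sketch of building that mapping (build_lcore_dma is a hypothetical helper; patch 2/2 builds the same strings inline):

    def build_lcore_dma(cores: list, cbdmas: list) -> str:
        # One "lcore<N>@<DMA PCI address>" pair per channel.
        pairs = ["lcore%s@%s" % (c, d) for c, d in zip(cores, cbdmas)]
        return "--lcore-dma=[%s]" % ",".join(pairs)

    # build_lcore_dma([3, 4], ["0000:00:04.0", "0000:00:04.1"]) returns:
    # --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]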
3. Launch VM1 and VM2::

@@ -139,11 +139,11 @@ by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous e

	testpmd>show port xstats all

-Test Case 2: VM2VM split ring vhost-user/virtio-net mergeable 8 queues CBDMA enable test with large packet payload valid check
--------------------------------------------------------------------------------------------------------------------------------
+Test Case 2: VM2VM virtio-net split ring mergeable 8 queues CBDMA enable test with large packet payload valid check
+-------------------------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in
-vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous enqueue operations with CBDMA channels.
-The dynamic change of multi-queues number, iova as VA and PA mode also test.
+vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous operations with CBDMA channels.
+The dynamic change of multi-queue number and iova as VA and PA mode are also tested.

 1. Bind 16 CBDMA channels to vfio-pci, as common step 1.

@@ -152,8 +152,8 @@ The dynamic change of multi-queues number, iova as VA and PA mode also test.

	# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
	testpmd>start

@@ -210,8 +210,8 @@ The dynamic change of multi-queues number, iova as VA and PA mode also test.
# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] testpmd>start @@ -223,8 +223,8 @@ The dynamic change of multi-queues number, iova as VA and PA mode also test. # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ --iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] testpmd>start @@ -234,7 +234,8 @@ The dynamic change of multi-queues number, iova as VA and PA mode also test. 12. Quit and relaunch vhost w/o CBDMA channels:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=4' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=4' \ -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=4 --rxq=4 testpmd>start @@ -258,7 +259,8 @@ The dynamic change of multi-queues number, iova as VA and PA mode also test. 17. 
Quit and relaunch vhost with 1 queues::

	# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=4' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=4' \
	-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=1 --rxq=1
	testpmd>start

@@ -279,11 +281,11 @@ The dynamic change of multi-queues number, iova as VA and PA mode also test.
	Under VM1, run: `iperf -s -i 1`
	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

-Test Case 3: VM2VM split ring vhost-user/virtio-net non-mergeable 8 queues CBDMA enable test with large packet payload valid check
------------------------------------------------------------------------------------------------------------------------------------
+Test Case 3: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check
+-----------------------------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in
-vm2vm vhost-user/virtio-net split ring non-mergeable path when vhost uses the asynchronous enqueue operations with dsa dpdk driver.
-The dynamic change of multi-queues number also test.
+vm2vm vhost-user/virtio-net split ring non-mergeable path when vhost uses the asynchronous operations with CBDMA channels.
+The dynamic change of multi-queue number and reconnection are also tested.

 1. Bind 16 CBDMA channels to vfio-pci, as common step 1.

@@ -292,8 +294,8 @@ The dynamic change of multi-queues number also test.

	# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
	testpmd>start

@@ -350,8 +352,8 @@ The dynamic change of multi-queues number also test.
# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]' \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]' \ --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] testpmd>start @@ -361,7 +363,7 @@ The dynamic change of multi-queues number also test. 10. Quit and relaunch vhost ports w/o CBDMA channels:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8' --vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8' \ -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 testpmd>start @@ -377,7 +379,7 @@ The dynamic change of multi-queues number also test. 13. Quit and relaunch vhost ports with 1 queues:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=8' --vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=8' \ -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=1 --rxq=1 testpmd>start @@ -398,11 +400,10 @@ The dynamic change of multi-queues number also test. Under VM1, run: `iperf -s -i 1` Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60` -Test Case 4: VM2VM split ring vhost-user/virtio-net mergeable 16 queues CBDMA enable test with large packet payload valid check -------------------------------------------------------------------------------------------------------------------------------- +Test Case 4: VM2VM virtio-net split ring mergeable 16 queues CBDMA enable test with large packet payload valid check +-------------------------------------------------------------------------------------------------------------------- This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in -vm2vm vhost-user/virtio-net split ring mergeable path and 16 queues when vhost uses the asynchronous enqueue operations with dsa dpdk -and kernel driver. +vm2vm vhost-user/virtio-net split ring mergeable path and 16 queues when vhost uses the asynchronous operations with CBDMA channels. 1. Bind 16 CBDMA channels to vfio-pci, as common step 1. @@ -411,8 +412,8 @@ and kernel driver. 
	# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-9 -n 4 --file-prefix=vhost \
	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
-	--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \
+	--vdev 'net_vhost0,iface=vhost-net0,client=1,tso=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]' \
+	--vdev 'net_vhost1,iface=vhost-net1,client=1,tso=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]' \
	--iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=16 --rxq=16 \
	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore3@0000:00:04.2,lcore3@0000:00:04.3,lcore4@0000:00:04.4,lcore4@0000:00:04.5,lcore5@0000:00:04.6,lcore5@0000:00:04.7,lcore6@0000:80:04.0,lcore6@0000:80:04.1,lcore7@0000:80:04.2,lcore7@0000:80:04.3,lcore8@0000:80:04.4,lcore8@0000:80:04.5,lcore9@0000:80:04.6,lcore9@0000:80:04.7]
	testpmd>start

@@ -464,10 +465,10 @@ and kernel driver.
	Under VM1, run: `iperf -s -i 1`
	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

-Test Case 5: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic
-----------------------------------------------------------------------------------------
+Test Case 5: VM2VM virtio-net packed ring CBDMA enable test with tcp traffic
+----------------------------------------------------------------------------
 This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path
-by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with CBDMA channels.
+by verifying the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with CBDMA channels.

 1. Bind 2 CBDMA channels to vfio-pci, as common step 1.
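Both tcp cases confirm TSO indirectly through testpmd xstats: packets larger than the 1518-byte wire maximum must appear on both vhost ports. A minimal sketch of that check, using the counter names from the updated regex in patch 2/2:

    import re

    def check_oversized_counters(out_tx: str, out_rx: str) -> None:
        # Non-zero 1519-to-max counters mean TSO produced frames above 1518 bytes.
        tx = re.search(r"tx_q0_size_1519_max_packets:\s*(\d+)", out_tx)
        rx = re.search(r"rx_q0_size_1519_max_packets:\s*(\d+)", out_rx)
        assert tx and int(tx.group(1)) > 0, "port 0 sent no packet larger than 1518B"
        assert rx and int(rx.group(1)) > 0, "port 1 received no packet larger than 1518B"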
@@ -475,8 +476,8 @@ by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous e # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \ -a 0000:00:04.0 -a 0000:00:04.1 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \ + --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \ --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1] testpmd>start @@ -528,7 +529,7 @@ by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous e Test Case 6: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check -------------------------------------------------------------------------------------------------------------------- This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in -vm2vm vhost-user/virtio-net packed ring mergeable path and 8 queues when vhost uses the asynchronous enqueue operations with CBDMA channels. +vm2vm vhost-user/virtio-net packed ring mergeable path and 8 queues when vhost uses the asynchronous operations with CBDMA channels. 1. Bind 16 CBDMA channels to vfio-pci, as common step 1. @@ -537,8 +538,8 @@ vm2vm vhost-user/virtio-net packed ring mergeable path and 8 queues when vhost u # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --vdev 'net_vhost0,iface=vhost-net0,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] testpmd>start @@ -595,7 +596,7 @@ vm2vm vhost-user/virtio-net packed ring mergeable path and 8 queues when vhost u Test Case 7: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check ------------------------------------------------------------------------------------------------------------------------ This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in -vm2vm vhost-user/virtio-net packed ring non-mergeable path and 8 queues when vhost uses the asynchronous enqueue operations with CBDMA channels. 
+vm2vm vhost-user/virtio-net packed ring non-mergeable path and 8 queues when vhost uses the asynchronous operations with CBDMA channels. 1. Bind 16 CBDMA channels to vfio-pci, as common step 1. @@ -604,8 +605,8 @@ vm2vm vhost-user/virtio-net packed ring non-mergeable path and 8 queues when vho # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --vdev 'net_vhost0,iface=vhost-net0,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=8,tso=1,dmas=[txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5]' \ --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \ --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7] testpmd>start @@ -662,8 +663,7 @@ vm2vm vhost-user/virtio-net packed ring non-mergeable path and 8 queues when vho Test Case 8: VM2VM virtio-net packed ring mergeable 16 queues CBDMA enabled test with large packet payload valid check ---------------------------------------------------------------------------------------------------------------------- This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in -vm2vm vhost-user/virtio-net packed ring mergeable path and 16 queues when vhost uses the asynchronous enqueue operations with dsa dpdk -and kernel driver. +vm2vm vhost-user/virtio-net packed ring mergeable path and 16 queues when vhost uses the asynchronous operations with CBDMA channels. 1. Bind 16 CBDMA channels to vfio-pci, as common step 1. @@ -672,8 +672,8 @@ and kernel driver. 
	# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-9 -n 4 --file-prefix=vhost \
	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11,txq12,txq13;txq14;txq15]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11,txq12,txq13;txq14;txq15]' \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=16,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=16,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]' \
	--iova=pa -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=16 --rxq=16 \
	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore3@0000:00:04.2,lcore3@0000:00:04.3,lcore4@0000:00:04.4,lcore4@0000:00:04.5,lcore5@0000:00:04.6,lcore5@0000:00:04.7,lcore6@0000:80:04.0,lcore6@0000:80:04.1,lcore7@0000:80:04.2,lcore7@0000:80:04.3,lcore8@0000:80:04.4,lcore8@0000:80:04.5,lcore9@0000:80:04.6,lcore9@0000:80:04.7]
	testpmd>start

@@ -727,8 +727,8 @@ and kernel driver.

 8. Rerun step 6-7 five times.

-Test Case 9: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when set iova=pa
---------------------------------------------------------------------------------------------------------
+Test Case 9: VM2VM virtio-net packed ring CBDMA enable test with tcp traffic when set iova=pa
+---------------------------------------------------------------------------------------------
 This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path
 by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous enqueue operations with CBDMA channels
 and iova as PA mode.

@@ -739,8 +739,8 @@ and iova as PA mode.

	# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \
	-a 0000:00:04.0 -a 0000:00:04.1 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
	--iova=pa -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]
	testpmd>start

@@ -796,7 +796,7 @@
 Test Case 10: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check
 ---------------------------------------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in
-vm2vm vhost-user/virtio-net packed ring mergeable path and 8 queues when vhost uses the asynchronous enqueue operations with CBDMA channels
+vm2vm vhost-user/virtio-net packed ring mergeable path and 8 queues when vhost uses the asynchronous operations with CBDMA channels
 and iova as PA mode.
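PA mode depends on the host hugepage setup (note 3 above recommends 1G guest hugepages because page-by-page mapping may exceed the IOMMU capability), so the suite skips iova=pa runs on 2MB-hugepage hosts through its check_2M_env property. A minimal sketch of such a guard; the property's body is not shown in this patch, so the logic below is an assumption:

    def is_2m_hugepage_env() -> bool:
        # A default hugepage size of 2048 kB means a 2MB-hugepage host,
        # where the PA-mode cases would be skipped (assumed logic).
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("Hugepagesize"):
                    return "2048" in line
        return False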
1. Bind 16 CBDMA channels to vfio-pci, as common step 1.

@@ -806,8 +806,8 @@ and iova as PA mode.

	# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
	-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,tso=1,dmas=[txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5]' \
	--iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
	--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]
	testpmd>start

From patchwork Wed Aug 3 01:36:50 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 114549
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 2/2] tests/vm2vm_virtio_net_perf_cbdma: modify testsuite to test virtio dequeue
Date: Tue, 2 Aug 2022 21:36:50 -0400
Message-Id: <20220803013650.1122116-1-weix.ling@intel.com>
From DPDK-22.07, virtio supports async dequeue for split and packed ring paths, so modify the vm2vm_virtio_net_perf_cbdma testsuite to test the split and packed ring async dequeue feature.

Signed-off-by: Wei Ling
---
 .../TestSuite_vm2vm_virtio_net_perf_cbdma.py | 982 ++++++++++++------
 1 file changed, 638 insertions(+), 344 deletions(-)

diff --git a/tests/TestSuite_vm2vm_virtio_net_perf_cbdma.py b/tests/TestSuite_vm2vm_virtio_net_perf_cbdma.py
index 8dad7be5..75677175 100644
--- a/tests/TestSuite_vm2vm_virtio_net_perf_cbdma.py
+++ b/tests/TestSuite_vm2vm_virtio_net_perf_cbdma.py
@@ -87,43 +87,18 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase):
             60,
         )

-    @staticmethod
-    def generate_dms_param(queues):
-        das_list = []
-        for i in range(queues):
-            das_list.append("txq{}".format(i))
-        das_param = "[{}]".format(";".join(das_list))
-        return das_param
-
-    @staticmethod
-    def generate_lcore_dma_param(cbdma_list, core_list):
-        group_num = int(len(cbdma_list) / len(core_list))
-        lcore_dma_list = []
-        if len(cbdma_list) == 1:
-            for core in core_list:
-                lcore_dma_list.append("lcore{}@{}".format(core, cbdma_list[0]))
-        elif len(core_list) == 1:
-            for cbdma in cbdma_list:
-                lcore_dma_list.append("lcore{}@{}".format(core_list[0], cbdma))
-        else:
-            for cbdma in cbdma_list:
-                core_list_index = int(cbdma_list.index(cbdma) / group_num)
-                lcore_dma_list.append(
-                    "lcore{}@{}".format(core_list[core_list_index], cbdma)
-                )
-        lcore_dma_param = "[{}]".format(",".join(lcore_dma_list))
-        return lcore_dma_param
-
     def bind_cbdma_device_to_kernel(self):
-        self.dut.send_expect("modprobe ioatdma", "# ")
-        self.dut.send_expect(
-            "./usertools/dpdk-devbind.py -u %s" % self.cbdma_str, "# ", 30
-        )
-        self.dut.send_expect(
-            "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" % self.cbdma_str,
-            "# ",
-            60,
-        )
+        if self.cbdma_str:
+            self.dut.send_expect("modprobe ioatdma", "# ")
+            self.dut.send_expect(
+                "./usertools/dpdk-devbind.py -u %s" % self.cbdma_str, "# ", 30
+            )
+            self.dut.send_expect(
+                "./usertools/dpdk-devbind.py --force --bind=ioatdma %s"
+                % self.cbdma_str,
+                "# ",
+                60,
+            )

     @property
     def check_2M_env(self):
@@ -257,8 +232,8 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase):
         out_tx = self.vhost.send_expect("show port xstats 0", "testpmd> ", 20)
         out_rx = self.vhost.send_expect("show port xstats 1", "testpmd> ", 20)

-        tx_info = re.search("tx_size_1523_to_max_packets:\s*(\d*)", out_tx)
-        rx_info = re.search("rx_size_1523_to_max_packets:\s*(\d*)", out_rx)
+        tx_info = re.search("tx_q0_size_1519_max_packets:\s*(\d*)", out_tx)
+        rx_info = re.search("rx_q0_size_1519_max_packets:\s*(\d*)", out_rx)

         self.verify(
             int(rx_info.group(1)) > 0, "Port 1 not receive packet greater than 1522"
@@ -307,11 +282,11 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase):
         self.vm_dut[0].send_expect('echo "%s" > /tmp/payload' % data, "# ")
         # scp this file to vm1
         out = self.vm_dut[1].send_command(
-            "scp root@%s:/tmp/payload /root" % self.virtio_ip1, timeout=5
+            "scp root@%s:/tmp/payload /root" % self.virtio_ip1, timeout=10
        )
         if "Are you sure you want to continue connecting" in out:
-            self.vm_dut[1].send_command("yes", timeout=3)
-            self.vm_dut[1].send_command(self.vm[0].password, timeout=3)
+            self.vm_dut[1].send_command("yes", timeout=10)
+            self.vm_dut[1].send_command(self.vm[0].password, timeout=10)
         # get the file info in vm1, and check it valid
         md5_send = self.vm_dut[0].send_expect("md5sum /tmp/payload", "# ")
         md5_revd = self.vm_dut[1].send_expect("md5sum /root/payload", "# ")
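The large-payload validation hinges on the two md5sum outputs above matching after the scp transfer across the virtio-net link. A standalone sketch of the same idea in plain Python (the paths follow the suite; running it outside the VMs is hypothetical):

    import hashlib

    def md5_of(path: str) -> str:
        # Digest the transferred payload in 1MB chunks.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # sender digest (VM1:/tmp/payload) must equal receiver digest (VM2:/root/payload)
    assert md5_of("/tmp/payload") == md5_of("/root/payload"), "payload corrupted"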
"# ") md5_revd = self.vm_dut[1].send_expect("md5sum /root/payload", "# ") @@ -321,23 +296,24 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): md5_send == md5_revd, "the received file is different with send file" ) - def test_vm2vm_split_ring_iperf_with_tso_and_cbdma_enable(self): + def test_vm2vm_virtiio_net_split_ring_cbdma_enable_test_with_tcp_traffic(self): """ - Test Case 1: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic + Test Case 1: VM2VM virtio-net split ring CBDMA enable test with tcp traffic """ - self.get_cbdma_ports_info_and_bind_to_dpdk(2) - dmas = self.generate_dms_param(1) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:3] + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2) + lcore_dma = "lcore%s@%s," "lcore%s@%s" % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[1], ) - eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas={},dma_ring_size=2048'".format( - dmas - ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas={},dma_ring_size=2048'".format( - dmas + eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]'" + + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]'" ) param = ( " --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1" - + " --lcore-dma={}".format(lcore_dma) + + " --lcore-dma=[%s]" % lcore_dma ) self.start_vhost_testpmd( cores=self.vhost_core_list, @@ -354,53 +330,72 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.get_perf_result() self.verify_xstats_info_on_vhost() - def test_vm2vm_split_ring_with_mergeable_path_8queue_check_large_packet_and_cbdma_enable( + def test_vm2vm_virtio_net_split_ring_mergeable_8_queues_cbdma_enable_test_with_large_packet_payload_valid_check( self, ): """ - Test Case 2: VM2VM split ring vhost-user/virtio-net mergeable 8 queues CBDMA enable test with large packet payload valid check + Test Case 2: VM2VM virtio-net split ring mergeable 8 queues CBDMA enable test with large packet payload valid check """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - dmas = self.generate_dms_param(8) - core1 = self.vhost_core_list[1] - core2 = self.vhost_core_list[2] - core3 = self.vhost_core_list[3] - core4 = self.vhost_core_list[4] - cbdma1 = self.cbdma_list[0] - cbdma2 = self.cbdma_list[1] - cbdma3 = self.cbdma_list[2] - cbdma4 = self.cbdma_list[3] - cbdma5 = self.cbdma_list[4] - cbdma6 = self.cbdma_list[5] - cbdma7 = self.cbdma_list[6] - cbdma8 = self.cbdma_list[7] - cbdma9 = self.cbdma_list[8] - cbdma10 = self.cbdma_list[9] - cbdma11 = self.cbdma_list[10] - cbdma12 = self.cbdma_list[11] - cbdma13 = self.cbdma_list[12] - cbdma14 = self.cbdma_list[13] - cbdma15 = self.cbdma_list[14] - cbdma16 = self.cbdma_list[15] lcore_dma = ( - f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2},lcore{core1}@{cbdma3}," - f"lcore{core1}@{cbdma4},lcore{core1}@{cbdma5},lcore{core1}@{cbdma6}," - f"lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," - f"lcore{core3}@{cbdma9},lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12}," - f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," - f"lcore{core4}@{cbdma16}]" + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + 
self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[1], + self.cbdma_list[2], + self.vhost_core_list[1], + self.cbdma_list[3], + self.vhost_core_list[1], + self.cbdma_list[4], + self.vhost_core_list[1], + self.cbdma_list[5], + self.vhost_core_list[2], + self.cbdma_list[6], + self.vhost_core_list[2], + self.cbdma_list[7], + self.vhost_core_list[3], + self.cbdma_list[8], + self.vhost_core_list[3], + self.cbdma_list[9], + self.vhost_core_list[3], + self.cbdma_list[10], + self.vhost_core_list[3], + self.cbdma_list[11], + self.vhost_core_list[3], + self.cbdma_list[12], + self.vhost_core_list[3], + self.cbdma_list[13], + self.vhost_core_list[3], + self.cbdma_list[14], + self.vhost_core_list[4], + self.cbdma_list[15], + ) ) eal_param = ( - "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas={}'".format( - dmas - ) - + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas={}'".format( - dmas - ) + "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" ) param = ( " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" - + " --lcore-dma={}".format(lcore_dma) + + " --lcore-dma=[%s]" % lcore_dma ) self.start_vhost_testpmd( cores=self.vhost_core_list, @@ -418,25 +413,78 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.start_iperf() self.get_perf_result() - self.logger.info("Quit and relaunch vhost w/ diff CBDMA channels") self.pmdout_vhost_user.execute_cmd("quit", "#") lcore_dma = ( - f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2}," - f"lcore{core1}@{cbdma3},lcore{core1}@{cbdma4}," - f"lcore{core2}@{cbdma1},lcore{core2}@{cbdma3},lcore{core2}@{cbdma5}," - f"lcore{core2}@{cbdma6},lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," - f"lcore{core3}@{cbdma2},lcore{core3}@{cbdma4},lcore{core3}@{cbdma9}," - f"lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12}," - f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," - f"lcore{core4}@{cbdma16}]" + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[1], + self.cbdma_list[2], + self.vhost_core_list[1], + self.cbdma_list[3], + self.vhost_core_list[2], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[2], + self.vhost_core_list[2], + self.cbdma_list[4], + self.vhost_core_list[2], + self.cbdma_list[5], + self.vhost_core_list[2], + self.cbdma_list[6], + self.vhost_core_list[2], + self.cbdma_list[7], + self.vhost_core_list[3], + self.cbdma_list[1], + self.vhost_core_list[3], + self.cbdma_list[3], + self.vhost_core_list[3], + self.cbdma_list[8], + self.vhost_core_list[3], + self.cbdma_list[9], + self.vhost_core_list[3], + self.cbdma_list[10], + self.vhost_core_list[3], + self.cbdma_list[11], + self.vhost_core_list[3], + self.cbdma_list[12], + self.vhost_core_list[3], + self.cbdma_list[13], + self.vhost_core_list[3], + self.cbdma_list[14], + self.vhost_core_list[4], + self.cbdma_list[15], + ) ) eal_param = ( - "--vdev 
'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" - + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" + "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" + + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq1;txq2;txq3;txq4;txq5;txq6;txq7]'" ) param = ( " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" - + " --lcore-dma={}".format(lcore_dma) + + " --lcore-dma=[%s]" % lcore_dma ) self.start_vhost_testpmd( cores=self.vhost_core_list, @@ -451,14 +499,13 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.get_perf_result() if not self.check_2M_env: - self.logger.info("Quit and relaunch vhost w/ iova=pa") eal_param = ( - "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" - + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" + "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" + + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" ) param = ( " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" - + " --lcore-dma={}".format(lcore_dma) + + " --lcore-dma=[%s]" % lcore_dma ) self.pmdout_vhost_user.execute_cmd("quit", "#") self.start_vhost_testpmd( @@ -473,11 +520,10 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.start_iperf() self.get_perf_result() - self.logger.info("Quit and relaunch vhost w/o CBDMA channels") self.pmdout_vhost_user.execute_cmd("quit", "#") eal_param = ( - "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4'" - + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4'" + "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4,tso=1'" + + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4,tso=1'" ) param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=4 --rxq=4" self.start_vhost_testpmd( @@ -492,11 +538,10 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.start_iperf() self.get_perf_result() - self.logger.info("Quit and relaunch vhost w/o CBDMA channels with 1 queue") self.pmdout_vhost_user.execute_cmd("quit", "#") eal_param = ( - "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4'" - + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4'" + "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4,tso=1'" + + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4,tso=1'" ) param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1" self.start_vhost_testpmd( @@ -510,54 +555,73 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.start_iperf() self.get_perf_result() - def test_vm2vm_split_ring_with_non_mergeable_path_8queue_check_large_packet_and_cbdma_enable( + def test_vm2vm_virtio_net_split_ring_with_non_mergeable_8_queues_cbdma_enable_test_with_large_packet_payload_valid_check( self, ): """ - Test Case 3: VM2VM split ring vhost-user/virtio-net non-mergeable 8 queues CBDMA enable test with large packet payload valid check + Test Case 3: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - dmas = self.generate_dms_param(8) - core1 = self.vhost_core_list[1] - core2 = self.vhost_core_list[2] - core3 = self.vhost_core_list[3] - core4 = self.vhost_core_list[4] - cbdma1 = self.cbdma_list[0] - cbdma2 = self.cbdma_list[1] - cbdma3 = self.cbdma_list[2] 
- cbdma4 = self.cbdma_list[3] - cbdma5 = self.cbdma_list[4] - cbdma6 = self.cbdma_list[5] - cbdma7 = self.cbdma_list[6] - cbdma8 = self.cbdma_list[7] - cbdma9 = self.cbdma_list[8] - cbdma10 = self.cbdma_list[9] - cbdma11 = self.cbdma_list[10] - cbdma12 = self.cbdma_list[11] - cbdma13 = self.cbdma_list[12] - cbdma14 = self.cbdma_list[13] - cbdma15 = self.cbdma_list[14] - cbdma16 = self.cbdma_list[15] lcore_dma = ( - f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2},lcore{core1}@{cbdma3}," - f"lcore{core1}@{cbdma4},lcore{core1}@{cbdma5},lcore{core1}@{cbdma6}," - f"lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," - f"lcore{core3}@{cbdma9},lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12}," - f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," - f"lcore{core4}@{cbdma16}]" + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[1], + self.cbdma_list[2], + self.vhost_core_list[1], + self.cbdma_list[3], + self.vhost_core_list[1], + self.cbdma_list[4], + self.vhost_core_list[1], + self.cbdma_list[5], + self.vhost_core_list[2], + self.cbdma_list[6], + self.vhost_core_list[2], + self.cbdma_list[7], + self.vhost_core_list[3], + self.cbdma_list[8], + self.vhost_core_list[3], + self.cbdma_list[9], + self.vhost_core_list[3], + self.cbdma_list[10], + self.vhost_core_list[3], + self.cbdma_list[11], + self.vhost_core_list[3], + self.cbdma_list[12], + self.vhost_core_list[3], + self.cbdma_list[13], + self.vhost_core_list[3], + self.cbdma_list[14], + self.vhost_core_list[4], + self.cbdma_list[15], + ) ) eal_param = ( - "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas={}'".format( - dmas - ) - + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas={}'".format( - dmas - ) + "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" ) param = ( " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" - + " --lcore-dma={}".format(lcore_dma) + + " --lcore-dma=[%s]" % lcore_dma ) self.start_vhost_testpmd( cores=self.vhost_core_list, @@ -575,25 +639,78 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.start_iperf() self.get_perf_result() - self.logger.info("Quit and relaunch vhost w/ diff CBDMA channels") self.pmdout_vhost_user.execute_cmd("quit", "#") lcore_dma = ( - f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2}," - f"lcore{core1}@{cbdma3},lcore{core1}@{cbdma4}," - f"lcore{core2}@{cbdma1},lcore{core2}@{cbdma3},lcore{core2}@{cbdma5}," - f"lcore{core2}@{cbdma6},lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," - f"lcore{core3}@{cbdma2},lcore{core3}@{cbdma4},lcore{core3}@{cbdma9}," - f"lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12}," - f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," - f"lcore{core4}@{cbdma16}]" + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + 
"lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[1], + self.cbdma_list[2], + self.vhost_core_list[1], + self.cbdma_list[3], + self.vhost_core_list[2], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[2], + self.vhost_core_list[2], + self.cbdma_list[4], + self.vhost_core_list[2], + self.cbdma_list[5], + self.vhost_core_list[2], + self.cbdma_list[6], + self.vhost_core_list[2], + self.cbdma_list[7], + self.vhost_core_list[3], + self.cbdma_list[1], + self.vhost_core_list[3], + self.cbdma_list[3], + self.vhost_core_list[3], + self.cbdma_list[8], + self.vhost_core_list[3], + self.cbdma_list[9], + self.vhost_core_list[3], + self.cbdma_list[10], + self.vhost_core_list[3], + self.cbdma_list[11], + self.vhost_core_list[3], + self.cbdma_list[12], + self.vhost_core_list[3], + self.cbdma_list[13], + self.vhost_core_list[3], + self.cbdma_list[14], + self.vhost_core_list[4], + self.cbdma_list[15], + ) ) eal_param = ( - "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" - + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]'" + "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]'" + + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]'" ) param = ( " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" - + " --lcore-dma={}".format(lcore_dma) + + " --lcore-dma=[%s]" % lcore_dma ) self.start_vhost_testpmd( cores=self.vhost_core_list, @@ -607,11 +724,10 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.start_iperf() self.get_perf_result() - self.logger.info("Quit and relaunch vhost w/o CBDMA channels") self.pmdout_vhost_user.execute_cmd("quit", "#") eal_param = ( - "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8'" - + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'" + "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1'" + + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1'" ) param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" self.start_vhost_testpmd( @@ -627,11 +743,10 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.start_iperf() self.get_perf_result() - self.logger.info("Quit and relaunch vhost w/o CBDMA channels with 1 queue") self.pmdout_vhost_user.execute_cmd("quit", "#") eal_param = ( - "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8'" - + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8'" + "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1'" + + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1'" ) param = " --nb-cores=4 --txd=1024 --rxd=1024 --txq=1 --rxq=1" self.start_vhost_testpmd( @@ -646,28 +761,73 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.start_iperf() self.get_perf_result() - def test_vm2vm_split_ring_with_mergeable_path_16queue_check_large_packet_and_cbdma_enable( + def test_vm2vm_virtio_net_split_ring_mergeable_16_queues_cbdma_enable_test_with_large_packet_payload_valid_check( self, ): """ - Test Case 4: VM2VM split ring vhost-user/virtio-net mergeable 16 queues CBDMA enable test with large packet payload valid check + Test Case 4: VM2VM virtio-net split ring mergeable 16 queues CBDMA enable test with large packet payload valid check """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - dmas = 
-        dmas = self.generate_dms_param(16)
-        lcore_dma = self.generate_lcore_dma_param(
-            cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:9]
+        lcore_dma = (
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s"
+            % (
+                self.vhost_core_list[1],
+                self.cbdma_list[0],
+                self.vhost_core_list[1],
+                self.cbdma_list[1],
+                self.vhost_core_list[2],
+                self.cbdma_list[2],
+                self.vhost_core_list[2],
+                self.cbdma_list[3],
+                self.vhost_core_list[3],
+                self.cbdma_list[4],
+                self.vhost_core_list[4],
+                self.cbdma_list[5],
+                self.vhost_core_list[4],
+                self.cbdma_list[6],
+                self.vhost_core_list[4],
+                self.cbdma_list[7],
+                self.vhost_core_list[5],
+                self.cbdma_list[8],
+                self.vhost_core_list[5],
+                self.cbdma_list[9],
+                self.vhost_core_list[6],
+                self.cbdma_list[10],
+                self.vhost_core_list[6],
+                self.cbdma_list[11],
+                self.vhost_core_list[7],
+                self.cbdma_list[12],
+                self.vhost_core_list[7],
+                self.cbdma_list[13],
+                self.vhost_core_list[8],
+                self.cbdma_list[14],
+                self.vhost_core_list[8],
+                self.cbdma_list[15],
+            )
         )
         eal_param = (
-            "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16,dmas={}'".format(
-                dmas
-            )
-            + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16,dmas={}'".format(
-                dmas
-            )
+            "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]'"
+            + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]'"
         )
+
         param = (
             " --nb-cores=8 --txd=1024 --rxd=1024 --txq=16 --rxq=16"
-            + " --lcore-dma={}".format(lcore_dma)
+            + " --lcore-dma=[%s]" % lcore_dma
         )
         self.start_vhost_testpmd(
             cores=self.vhost_core_list,
@@ -685,21 +845,24 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase):
         self.start_iperf()
         self.get_perf_result()

-    def test_vm2vm_packed_ring_iperf_with_tso_and_cbdma_enable(self):
+    def test_vm2vm_virtio_net_packed_ring_cbdma_enable_test_with_tcp_traffic(self):
         """
-        Test Case 5: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic
+        Test Case 5: VM2VM virtio-net packed ring CBDMA enable test with tcp traffic
         """
-        self.get_cbdma_ports_info_and_bind_to_dpdk(2)
-        dmas = self.generate_dms_param(1)
-        lcore_dma = self.generate_lcore_dma_param(
-            cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:3]
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2)
+        lcore_dma = "lcore%s@%s," "lcore%s@%s" % (
+            self.vhost_core_list[1],
+            self.cbdma_list[0],
+            self.vhost_core_list[2],
+            self.cbdma_list[1],
+        )
+        eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]'"
+            + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]'"
         )
-        eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas={}'".format(
-            dmas
-        ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas={}'".format(dmas)
         param = (
             " --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1"
-            + " --lcore-dma={}".format(lcore_dma)
+            + " --lcore-dma=[%s]" % lcore_dma
         )
         self.start_vhost_testpmd(
             cores=self.vhost_core_list,
@@ -716,46 +879,72 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase):
         self.get_perf_result()
         self.verify_xstats_info_on_vhost()

-    def test_vm2vm_packed_ring_with_mergeable_path_8queue_check_large_packet_and_cbdma_enable(
+    def test_vm2vm_virtio_net_packed_ring_mergeable_8_queues_cbdma_enable_test_with_large_packet_payload_valid_check(
         self,
     ):
         """
         Test Case 6: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check
         """
-        self.get_cbdma_ports_info_and_bind_to_dpdk(16, allow_diff_socket=True)
-        dmas = self.generate_dms_param(7)
-        core1 = self.vhost_core_list[1]
-        core2 = self.vhost_core_list[2]
-        core3 = self.vhost_core_list[3]
-        core4 = self.vhost_core_list[4]
-        cbdma1 = self.cbdma_list[0]
-        cbdma2 = self.cbdma_list[1]
-        cbdma3 = self.cbdma_list[2]
-        cbdma4 = self.cbdma_list[3]
-        cbdma5 = self.cbdma_list[4]
-        cbdma6 = self.cbdma_list[5]
-        cbdma7 = self.cbdma_list[6]
-        cbdma8 = self.cbdma_list[7]
-        cbdma9 = self.cbdma_list[8]
-        cbdma10 = self.cbdma_list[9]
-        cbdma11 = self.cbdma_list[10]
-        cbdma12 = self.cbdma_list[11]
-        cbdma13 = self.cbdma_list[12]
-        cbdma14 = self.cbdma_list[13]
-        cbdma15 = self.cbdma_list[14]
-        cbdma16 = self.cbdma_list[15]
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True)
         lcore_dma = (
-            f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2},lcore{core1}@{cbdma3},lcore{core1}@{cbdma4},"
-            f"lcore{core2}@{cbdma1},lcore{core2}@{cbdma3},lcore{core2}@{cbdma5},lcore{core2}@{cbdma6},lcore{core2}@{cbdma7},lcore{core2}@{cbdma8},"
-            f"lcore{core3}@{cbdma2},lcore{core3}@{cbdma4},lcore{core3}@{cbdma9},lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12},lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15},"
-            f"lcore{core4}@{cbdma16}]"
-        )
-        eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas={}'".format(
-            dmas
-        ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas={}'".format(dmas)
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s"
+            % (
+                self.vhost_core_list[1],
+                self.cbdma_list[0],
+                self.vhost_core_list[1],
+                self.cbdma_list[1],
+                self.vhost_core_list[1],
+                self.cbdma_list[2],
+                self.vhost_core_list[1],
+                self.cbdma_list[3],
+                self.vhost_core_list[1],
+                self.cbdma_list[4],
+                self.vhost_core_list[1],
+                self.cbdma_list[5],
+                self.vhost_core_list[2],
+                self.cbdma_list[6],
+                self.vhost_core_list[2],
+                self.cbdma_list[7],
+                self.vhost_core_list[3],
+                self.cbdma_list[8],
+                self.vhost_core_list[3],
+                self.cbdma_list[9],
+                self.vhost_core_list[3],
+                self.cbdma_list[10],
+                self.vhost_core_list[3],
+                self.cbdma_list[11],
+                self.vhost_core_list[3],
+                self.cbdma_list[12],
+                self.vhost_core_list[3],
+                self.cbdma_list[13],
+                self.vhost_core_list[3],
+                self.cbdma_list[14],
+                self.vhost_core_list[4],
+                self.cbdma_list[15],
+            )
+        )
+        eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net0,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'"
+            + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'"
+        )
         param = (
             " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8"
-            + " --lcore-dma={}".format(lcore_dma)
+            + " --lcore-dma=[%s]" % lcore_dma
         )
         self.start_vhost_testpmd(
             cores=self.vhost_core_list,
@@ -764,7 +953,6 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase):
             param=param,
             iova_mode="va",
         )
"disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on" self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" self.start_vms(server_mode=False, vm_queue=8) self.config_vm_ip() @@ -775,48 +963,72 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.start_iperf() self.get_perf_result() - def test_vm2vm_packed_ring_with_non_mergeable_path_8queue_check_large_packet_and_cbdma_enable( + def test_vm2vm_virtio_net_packed_ring_non_mergeable_8_queues_cbdma_enable_test_with_large_packet_payload_valid_check( self, ): """ Test Case 7: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check """ - self.get_cbdma_ports_info_and_bind_to_dpdk(16, allow_diff_socket=True) - dmas = self.generate_dms_param(8) - core1 = self.vhost_core_list[1] - core2 = self.vhost_core_list[2] - core3 = self.vhost_core_list[3] - core4 = self.vhost_core_list[4] - cbdma1 = self.cbdma_list[0] - cbdma2 = self.cbdma_list[1] - cbdma3 = self.cbdma_list[2] - cbdma4 = self.cbdma_list[3] - cbdma5 = self.cbdma_list[4] - cbdma6 = self.cbdma_list[5] - cbdma7 = self.cbdma_list[6] - cbdma8 = self.cbdma_list[7] - cbdma9 = self.cbdma_list[8] - cbdma10 = self.cbdma_list[9] - cbdma11 = self.cbdma_list[10] - cbdma12 = self.cbdma_list[11] - cbdma13 = self.cbdma_list[12] - cbdma14 = self.cbdma_list[13] - cbdma15 = self.cbdma_list[14] - cbdma16 = self.cbdma_list[15] + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) lcore_dma = ( - f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2},lcore{core1}@{cbdma3}," - f"lcore{core1}@{cbdma4},lcore{core1}@{cbdma5},lcore{core1}@{cbdma6}," - f"lcore{core2}@{cbdma7},lcore{core2}@{cbdma8}," - f"lcore{core3}@{cbdma9},lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12}," - f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15}," - f"lcore{core4}@{cbdma16}]" - ) - eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas={}'".format( - dmas - ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas={}'".format(dmas) + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[1], + self.cbdma_list[2], + self.vhost_core_list[1], + self.cbdma_list[3], + self.vhost_core_list[1], + self.cbdma_list[4], + self.vhost_core_list[1], + self.cbdma_list[5], + self.vhost_core_list[2], + self.cbdma_list[6], + self.vhost_core_list[2], + self.cbdma_list[7], + self.vhost_core_list[3], + self.cbdma_list[8], + self.vhost_core_list[3], + self.cbdma_list[9], + self.vhost_core_list[3], + self.cbdma_list[10], + self.vhost_core_list[3], + self.cbdma_list[11], + self.vhost_core_list[3], + self.cbdma_list[12], + self.vhost_core_list[3], + self.cbdma_list[13], + self.vhost_core_list[3], + self.cbdma_list[14], + self.vhost_core_list[4], + self.cbdma_list[15], + ) + ) + eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,tso=1,dmas=[txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5]'" + ) param = ( " --nb-cores=4 
--txd=1024 --rxd=1024 --txq=8 --rxq=8" - + " --lcore-dma={}".format(lcore_dma) + + " --lcore-dma=[%s]" % lcore_dma ) self.start_vhost_testpmd( cores=self.vhost_core_list, @@ -835,23 +1047,72 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.start_iperf() self.get_perf_result() - def test_vm2vm_packed_ring_with_mergeable_path_16queue_check_large_packet_and_cbdma_enable( + def test_vm2vm_virtio_net_packed_ring_mergeable_16_queues_cbdma_enable_test_with_large_packet_payload_check( self, ): """ Test Case 8: VM2VM virtio-net packed ring mergeable 16 queues CBDMA enabled test with large packet payload valid check """ - self.get_cbdma_ports_info_and_bind_to_dpdk(16, allow_diff_socket=True) - dmas = self.generate_dms_param(16) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:9] + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[2], + self.cbdma_list[2], + self.vhost_core_list[2], + self.cbdma_list[3], + self.vhost_core_list[3], + self.cbdma_list[4], + self.vhost_core_list[4], + self.cbdma_list[5], + self.vhost_core_list[4], + self.cbdma_list[6], + self.vhost_core_list[4], + self.cbdma_list[7], + self.vhost_core_list[5], + self.cbdma_list[8], + self.vhost_core_list[5], + self.cbdma_list[9], + self.vhost_core_list[6], + self.cbdma_list[10], + self.vhost_core_list[6], + self.cbdma_list[11], + self.vhost_core_list[7], + self.cbdma_list[12], + self.vhost_core_list[7], + self.cbdma_list[13], + self.vhost_core_list[8], + self.cbdma_list[14], + self.vhost_core_list[8], + self.cbdma_list[15], + ) + ) + eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=16,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11,txq12,txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]'" + + " --vdev 'net_vhost1,iface=vhost-net1,queues=16,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11,txq12,txq13;txq14;txq15;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7;rxq8;rxq9;rxq10;rxq11;rxq12;rxq13;rxq14;rxq15]'" ) - eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=16,dmas={}'".format( - dmas - ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=16,dmas={}'".format(dmas) param = ( " --nb-cores=8 --txd=1024 --rxd=1024 --txq=16 --rxq=16" - + " --lcore-dma={}".format(lcore_dma) + + " --lcore-dma=[%s]" % lcore_dma ) self.start_vhost_testpmd( cores=self.vhost_core_list, @@ -870,96 +1131,129 @@ class TestVM2VMVirtioNetPerfCbdma(TestCase): self.start_iperf() self.get_perf_result() - def test_vm2vm_packed_ring_iperf_with_tso_when_set_ivoa_pa_and_cbdma_enable(self): + def test_vm2vm_virtio_net_packed_ring_cbdma_enable_test_with_tcp_traffic_when_set_iova_pa( + self, + ): """ - Test Case 9: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when set iova=pa + Test Case 9: VM2VM virtio-net packed ring CBDMA enable test with tcp traffic when set iova=pa """ - self.get_cbdma_ports_info_and_bind_to_dpdk(2) - dmas = self.generate_dms_param(1) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, 
-            cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:3]
-        )
-        eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas={}'".format(
-            dmas
-        ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas={}'".format(dmas)
-        param = (
-            " --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1"
-            + " --lcore-dma={}".format(lcore_dma)
-        )
-        self.start_vhost_testpmd(
-            cores=self.vhost_core_list,
-            ports=self.cbdma_list,
-            eal_param=eal_param,
-            param=param,
-            iova_mode="pa",
-        )
-        self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on"
-        self.start_vms(server_mode=False, vm_queue=1)
-        self.config_vm_ip()
-        self.check_ping_between_vms()
-        self.check_scp_file_valid_between_vms()
-        self.start_iperf()
-        self.get_perf_result()
-        self.verify_xstats_info_on_vhost()
+        if not self.check_2M_env:
+            self.get_cbdma_ports_info_and_bind_to_dpdk(2)
+            lcore_dma = "lcore%s@%s," "lcore%s@%s" % (
+                self.vhost_core_list[1],
+                self.cbdma_list[0],
+                self.vhost_core_list[2],
+                self.cbdma_list[1],
+            )
+            eal_param = (
+                "--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]'"
+                + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]'"
+            )
+            param = (
+                " --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1"
+                + " --lcore-dma=[%s]" % lcore_dma
+            )
+            self.start_vhost_testpmd(
+                cores=self.vhost_core_list,
+                ports=self.cbdma_list,
+                eal_param=eal_param,
+                param=param,
+                iova_mode="pa",
+            )
+            self.vm_args = "disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on"
+            self.start_vms(server_mode=False, vm_queue=1)
+            self.config_vm_ip()
+            self.check_ping_between_vms()
+            self.check_scp_file_valid_between_vms()
+            self.start_iperf()
+            self.get_perf_result()
+            self.verify_xstats_info_on_vhost()

-    def test_vm2vm_packed_ring_with_mergeable_path_8queue_check_large_packet_when_set_ivoa_pa_and_cbdma_enable(
+    def test_vm2vm_virtio_net_packed_ring_mergeable_8_queues_cbdma_enable_and_pa_mode_test_with_large_packet_payload_valid_check(
         self,
     ):
         """
         Test Case 10: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check
         """
-        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True)
-        dmas = self.generate_dms_param(7)
-        core1 = self.vhost_core_list[1]
-        core2 = self.vhost_core_list[2]
-        core3 = self.vhost_core_list[3]
-        core4 = self.vhost_core_list[4]
-        cbdma1 = self.cbdma_list[0]
-        cbdma2 = self.cbdma_list[1]
-        cbdma3 = self.cbdma_list[2]
-        cbdma4 = self.cbdma_list[3]
-        cbdma5 = self.cbdma_list[4]
-        cbdma6 = self.cbdma_list[5]
-        cbdma7 = self.cbdma_list[6]
-        cbdma8 = self.cbdma_list[7]
-        cbdma9 = self.cbdma_list[8]
-        cbdma10 = self.cbdma_list[9]
-        cbdma11 = self.cbdma_list[10]
-        cbdma12 = self.cbdma_list[11]
-        cbdma13 = self.cbdma_list[12]
-        cbdma14 = self.cbdma_list[13]
-        cbdma15 = self.cbdma_list[14]
-        cbdma16 = self.cbdma_list[15]
-        lcore_dma = (
-            f"[lcore{core1}@{cbdma1},lcore{core1}@{cbdma2},lcore{core1}@{cbdma3},"
-            f"lcore{core1}@{cbdma4},lcore{core1}@{cbdma5},lcore{core1}@{cbdma6},"
-            f"lcore{core2}@{cbdma7},lcore{core2}@{cbdma8},"
-            f"lcore{core3}@{cbdma9},lcore{core3}@{cbdma10},lcore{core3}@{cbdma11},lcore{core3}@{cbdma12},"
-            f"lcore{core3}@{cbdma13},lcore{core3}@{cbdma14},lcore{core3}@{cbdma15},"
-            f"lcore{core4}@{cbdma16}]"
-        )
-        eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas={}'".format(
-            dmas
-        ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas={}'".format(dmas)
-        param = (
--rxd=1024 --txq=8 --rxq=8" - + " --lcore-dma={}".format(lcore_dma) - ) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - ports=self.cbdma_list, - eal_param=eal_param, - param=param, - iova_mode="pa", - ) - self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" - self.start_vms(server_mode=False, vm_queue=8) - self.config_vm_ip() - self.check_ping_between_vms() - for _ in range(1): - self.check_scp_file_valid_between_vms() - self.start_iperf() - self.get_perf_result() + if not self.check_2M_env: + self.get_cbdma_ports_info_and_bind_to_dpdk( + cbdma_num=16, allow_diff_socket=True + ) + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[1], + self.cbdma_list[2], + self.vhost_core_list[1], + self.cbdma_list[3], + self.vhost_core_list[1], + self.cbdma_list[4], + self.vhost_core_list[1], + self.cbdma_list[5], + self.vhost_core_list[2], + self.cbdma_list[6], + self.vhost_core_list[2], + self.cbdma_list[7], + self.vhost_core_list[3], + self.cbdma_list[8], + self.vhost_core_list[3], + self.cbdma_list[9], + self.vhost_core_list[3], + self.cbdma_list[10], + self.vhost_core_list[3], + self.cbdma_list[11], + self.vhost_core_list[3], + self.cbdma_list[12], + self.vhost_core_list[3], + self.cbdma_list[13], + self.vhost_core_list[3], + self.cbdma_list[14], + self.vhost_core_list[4], + self.cbdma_list[15], + ) + ) + eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,tso=1,dmas=[txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5]'" + ) + param = ( + " --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8" + + " --lcore-dma=[%s]" % lcore_dma + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + ports=self.cbdma_list, + eal_param=eal_param, + param=param, + iova_mode="pa", + ) + self.vm_args = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" + self.start_vms(server_mode=False, vm_queue=8) + self.config_vm_ip() + self.check_ping_between_vms() + for _ in range(1): + self.check_scp_file_valid_between_vms() + self.start_iperf() + self.get_perf_result() def stop_all_apps(self): for i in range(len(self.vm)):
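
For readers tracing the --lcore-dma mappings above: each element binds one vhost worker lcore to one CBDMA device in the form lcore<id>@<DMA BDF>, and one core may appear with several devices. A minimal sketch of how such a value can be produced from (core, device) pairs follows; it is illustrative only and not part of this patch, and the helper name and the BDF addresses are placeholders::

    # Hypothetical illustration, not code from this suite: render the value
    # passed to testpmd's --lcore-dma from (lcore, DMA device) pairs.
    def build_lcore_dma(pairs):
        return ",".join("lcore%s@%s" % (core, dma) for core, dma in pairs)

    # Example: one worker core driving six CBDMA channels, mirroring the
    # first core's layout in the 8-queue cases above.
    pairs = [(3, "0000:00:04.%d" % i) for i in range(6)]
    print("--lcore-dma=[%s]" % build_lcore_dma(pairs))

The suite itself inlines this expansion with literal "lcore%s@%s" format strings so that each test case's core-to-channel layout stays explicit in the test source.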