From patchwork Wed Aug 17 08:25:34 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 115204
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V5 1/2] test_plans/vm2vm_virtio_pmd_cbdma_test_plan: modify testplan to test virtio dequeue
Date: Wed, 17 Aug 2022 04:25:34 -0400
Message-Id: <20220817082534.4018763-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: test suite reviews and discussions

From DPDK-22.07, virtio supports async dequeue for the split and packed ring
paths, so modify the vm2vm_virtio_pmd_cbdma test plan to test the split and
packed ring async dequeue feature.

Signed-off-by: Wei Ling
---
 .../vm2vm_virtio_pmd_cbdma_test_plan.rst | 408 +++++++++++-------
 1 file changed, 264 insertions(+), 144 deletions(-)

diff --git a/test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst b/test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst
index a491bd40..d8fabe3f 100644
--- a/test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst
+++ b/test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst
@@ -1,5 +1,5 @@
 .. SPDX-License-Identifier: BSD-3-Clause
-   Copyright(c) 2022 Intel Corporation
+   Copyright(c) 2021 Intel Corporation

 ================================================
 vm2vm vhost-user/virtio-pmd with cbdma test plan
 ================================================
@@ -10,16 +10,16 @@ Description
 ===========

 Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU
 and it is implemented in an asynchronous way. In addition, vhost supports M:N mapping
 between vrings and DMA virtual channels.
Specifically, one vring can use multiple different DMA -channels and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue operation with CBDMA channels is supported -in both split and packed ring. +channels and one DMA channel can be shared by multiple vrings at the same time. From DPDK22.07, Vhost enqueue and dequeue operation with +CBDMA channels is supported in both split and packed ring. -This document provides the test plan for testing some basic functions with CBDMA device in vm2vm vhost-user/virtio-pmd topology environment. -1. vm2vm mergeable, normal path test with virtio 1.0 and virtio 1.1 -2. vm2vm mergeable path test with virtio 1.0 and dynamic change queue number. +This document provides the test plan for testing some basic functions with CBDMA channels in vm2vm vhost-user/virtio-pmd topology environment. +1. vm2vm mergeable, non-mergebale path test with virtio 1.0 and virtio1.1 and check virtio-pmd tx chain packets in mergeable path. +2. dynamic change queue number. -Note: +..Note: 1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > 5.1, and packed ring multi-queues not support reconnect in qemu yet. -2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test. +2.For split virtqueue virtio-net with multi-queues server mode test, better to use qemu version >= 5.2.0, dut to qemu(v4.2.0~v5.1.0) exist split ring multi-queues reconnection issue. 3.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage. 4.DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd. @@ -33,34 +33,34 @@ Prerequisites Topology -------- - Test flow: Virtio-pmd-->Vhost-user-->Testpmd-->Vhost-user-->Virtio-pmd + Test flow: Virtio-pmd-->Vhost-user-->Testpmd-->Vhost-user-->Virtio-pmd Software -------- - Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz + Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz General set up -------------- 1. Compile DPDK:: - # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library= - # ninja -C -j 110 - For example: - CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc - ninja -C x86_64-native-linuxapp-gcc -j 110 + # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static + # ninja -C -j 110 + For example: + CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc + ninja -C x86_64-native-linuxapp-gcc -j 110 2. 
Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID:: - # ./usertools/dpdk-devbind.py -s + # ./usertools/dpdk-devbind.py -s - Network devices using kernel driver - =================================== - 0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci + Network devices using kernel driver + =================================== + 0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci - DMA devices using kernel driver - =============================== - 0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci - 0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci + DMA devices using kernel driver + =============================== + 0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci + 0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci Test case ========= @@ -69,12 +69,12 @@ Common steps ------------ 1. Bind 1 NIC port and CBDMA channels to vfio-pci:: - # ./usertools/dpdk-devbind.py -b vfio-pci - # ./usertools/dpdk-devbind.py -b vfio-pci + # ./usertools/dpdk-devbind.py -b vfio-pci + # ./usertools/dpdk-devbind.py -b vfio-pci - For example, Bind 1 NIC port and 2 CBDMA channels:: - # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0 - # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1 + For example, Bind 1 NIC port and 2 CBDMA channels:: + # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0 + # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1 2. On VM1 and VM2, bind virtio device(for example,0000:00:05.0) with vfio-pci driver:: @@ -83,23 +83,22 @@ Common steps echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode # ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 -Test Case 1: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable with server mode stable test ----------------------------------------------------------------------------------------------------------- -This case uses testpmd and QEMU to test split ring mergeable path with 8 queues and CBDMA enable with server mode, -In VM, use testpmd to send imix packets, and relaunch vhost-user 10 times to test stable. +Test Case 1: VM2VM virtio-pmd split ring mergeable path dynamic queue size with cbdma enable and server mode +------------------------------------------------------------------------------------------------------------ +This case tests split ring mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with CBDMA channels, +check that it can work normally after dynamically changing queue number, reconnection has also been tested. 1. Bind 16 CBDMA channels to vfio-pci, as common step 1. -2. Launch the testpmd with 2 vhost port and 8 queues by below commands:: +2. 
Launch the testpmd with 2 vhost ports below commands:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,\ - lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7] + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7] testpmd> start 3. Launch VM1 and VM2 using qemu:: @@ -113,8 +112,7 @@ In VM, use testpmd to send imix packets, and relaunch vhost-user 10 times to tes -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ -chardev socket,id=char0,path=./vhost-net0,server \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ - mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -125,69 +123,86 @@ In VM, use testpmd to send imix packets, and relaunch vhost-user 10 times to tes -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1,server \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,\ - vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 -4. On VM1 and VM2, bind virtio device with vfio-pci driver, as common step 2. +4. On VM1 and VM2, bind virtio device with vfio-pci driver:: + + modprobe vfio + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 5. 
Launch testpmd in VM1:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \ - --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000 testpmd> set fwd mac testpmd> start -6. Launch testpmd in VM2, sent imix pkts from VM2:: +6. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 4 queues (queue0 to queue3) have packets rx/tx:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \ - --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000 testpmd> set fwd mac testpmd> set txpkts 64,256,512,1024,2000,64,256,512,1024,2000 - testpmd> start tx_first 1 + testpmd> start tx_first 32 + testpmd> show port stats all -7. Check imix packets can looped between two VMs and 8 queues all have packets rx/tx:: +7. Check vhost use the asynchronous data path(funtion like virtio_dev_rx_async_xxx/virtio_dev_tx_async_xxx):: - testpmd> show port stats all - testpmd> stop + perf top -8. Relaunch and start vhost side testpmd with below cmd:: +8. On host, dynamic change queue numbers:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,\ - lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7] + testpmd> stop + testpmd> port stop all + testpmd> port config all rxq 8 + testpmd> port config all txq 8 + testpmd> port start all testpmd> start -9. Send pkts by testpmd in VM2, check imix packets can looped between two VMs and 8 queues all have packets rx/tx:: +9. Send packets by testpmd in VM2:: testpmd> stop - testpmd> start tx_first 1 + testpmd> start tx_first 32 testpmd> show port stats all - testpmd> stop -10. Rerun step 7-8 for 10 times. +10. Check vhost testpmd RX/TX can work normally, packets can looped between two VMs and both 8 queues can RX/TX traffic. -Test Case 2: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDMA enable with server mode test -------------------------------------------------------------------------------------------------------------- -This case uses testpmd and QEMU to test split ring mergeable path and CBDMA enable with server mode, -In VM, use testpmd to send imix packets, and then dynamic queue size from 4 to 8 to test it works well or not. +11. 
Rerun step 7. + +12. Relaunch and start vhost side testpmd with 8 queues:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7] + testpmd> start + +13. Send packets by testpmd in VM2, check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx:: + + testpmd> stop + testpmd> start tx_first 32 + testpmd> show port stats all + +14. Rerun step 12-13 for 5 times. + +Test Case 2: VM2VM virtio-pmd split ring non-mergeable path dynamic queue size with cbdma enable and server mode +---------------------------------------------------------------------------------------------------------------- +This case tests split ring non-mergeable path in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with CBDMA channels, +check that it can work normally after dynamically changing queue number, reconnection has also been tested. 1. Bind 16 CBDMA channels to vfio-pci, as common step 1. 2. 
Launch the testpmd with 2 vhost ports below commands:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3]' \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \ - --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,\ - lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7] + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7] testpmd> start 3. Launch VM1 and VM2 using qemu:: @@ -201,8 +216,7 @@ In VM, use testpmd to send imix packets, and then dynamic queue size from 4 to 8 -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ -chardev socket,id=char0,path=./vhost-net0,server \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ - mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -213,72 +227,69 @@ In VM, use testpmd to send imix packets, and then dynamic queue size from 4 to 8 -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ -chardev socket,id=char0,path=./vhost-net1,server \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,\ - vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + +4. On VM1 and VM2, bind virtio device with vfio-pci driver:: -4. On VM1 and VM2, bind virtio device with vfio-pci driver, as common step 2. 
+ modprobe vfio + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 5. Launch testpmd in VM1:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \ - --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd mac testpmd> start -6. Launch testpmd in VM2, sent imix pkts from VM2:: +6. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 4 queues (queue0 to queue3) have packets rx/tx:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \ - --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd mac - testpmd> set txpkts 64,256,512,1024,2000,64,256,512,1024,2000 - testpmd> start tx_first 1 + testpmd> set txpkts 64,256,512 + testpmd> start tx_first 32 + testpmd> show port stats all -7. Check imix packets can looped between two VMs and 4 queues (queue0 to queue3) have packets rx/tx:: +7. Check vhost use the asynchronous data path(funtion like virtio_dev_rx_async_xxx/virtio_dev_tx_async_xxx):: - testpmd> show port stats all - testpmd> stop + perf top -8. Relaunch and start vhost side testpmd with 8 queues:: +8. On VM1 and VM2, dynamic change queue numbers at virtio-pmd side from 8 queues to 4 queues:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,\ - lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7] + testpmd> stop + testpmd> port stop all + testpmd> port config all rxq 4 + testpmd> port config all txq 4 + testpmd> port start all testpmd> start -9. Send pkts by testpmd in VM2, check imix packets can looped between two VMs and 8 queues all have packets rx/tx:: +9. Send packets by testpmd in VM2, check Check virtio-pmd RX/TX can work normally and imix packets can looped between two VMs for 1 mins:: testpmd> stop - testpmd> start tx_first 1 + testpmd> start tx_first 32 testpmd> show port stats all - testpmd> stop -10. Rerun step 7-8 for 10 times. +10. Rerun step 7. 
-Test Case 3: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable test ------------------------------------------------------------------------------------ -This case uses testpmd and QEMU to test packed ring mergeable path with 8 queues and CBDMA enable, -In VM, use testpmd to send imix packets, and then quit VM1 and change VM1 from packed ring path to splirt ring path to test. +11. Stop testpmd in VM2, and check that 4 queues can RX/TX traffic. + +Test Case 3: VM2VM virtio-pmd packed ring mergeable path dynamic queue size with cbdma enable and server mode +------------------------------------------------------------------------------------------------------------- +This case tests packed ring mergeable path with virtio1.1 and server mode in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with CBDMA channels,check that it can work normally after dynamically changing queue number. 1. Bind 16 CBDMA channels to vfio-pci, as common step 1. -2. Launch the testpmd with 2 vhost port and 8 queues by below commands:: +2. Launch the testpmd with 2 vhost ports below commands:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \ - -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ - --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,\ - lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7] + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7] testpmd> start -3. Launch VM1 and VM2 with qemu:: +3. 
Launch VM1 and VM2 using qemu:: taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -287,10 +298,9 @@ In VM, use testpmd to send imix packets, and then quit VM1 and change VM1 from p -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ + -chardev socket,id=char0,path=./vhost-net0,server \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ - mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ @@ -299,62 +309,172 @@ In VM, use testpmd to send imix packets, and then quit VM1 and change VM1 from p -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ + -chardev socket,id=char0,path=./vhost-net1,server \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,\ - vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 -4. On VM1 and VM2, bind virtio device with vfio-pci driver, as common step 2. +4. On VM1 and VM2, bind virtio device with vfio-pci driver:: + + modprobe vfio + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 5. Launch testpmd in VM1:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \ - --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000 - testpmd> set fwd mac + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000 + testpmd> set mac fwd testpmd> start -6. Launch testpmd in VM2, sent imix pkts from VM2:: +6. 
Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 4 queues (queue0 to queue3) have packets rx/tx:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \ - --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000 - testpmd> set fwd mac + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000 + testpmd> set mac fwd testpmd> set txpkts 64,256,512,1024,2000,64,256,512,1024,2000 - testpmd> start tx_first 1 - -7. Check imix packets can looped between two VMs and 8 queues all have packets rx/tx:: - + testpmd> start tx_first 32 testpmd> show port stats all testpmd> stop -8. Quit VM2 and relaunch VM2 with split ring:: +7. Quit VM2 and relaunch VM2 with split ring:: - taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\ - mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 -9. Bind virtio device with vfio-pci driver:: +8. Bind virtio device with vfio-pci driver:: modprobe vfio modprobe vfio-pci echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode # ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 -10. Launch testpmd in VM2 and send imix pkts from VM2:: +9. Launch testpmd in VM2 and send imix pkts from VM2:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \ --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000 testpmd> set fwd mac testpmd> set txpkts 64,256,512,1024,2000,64,256,512,1024,2000 -11. Check imix packets can looped between two VMs and 8 queues all have packets rx/tx:: +10. Check imix packets can looped between two VMs and 4 queues all have packets rx/tx:: + + testpmd> show port stats all + testpmd> stop + testpmd> start + +11. Check vhost use the asynchronous data path(funtion like virtio_dev_rx_async_xxx/virtio_dev_tx_async_xxx):: + + perf top +12. On host, dynamic change queue numbers:: + + testpmd> stop + testpmd> port stop all + testpmd> port config all rxq 8 + testpmd> port config all txq 8 + testpmd> port start all + testpmd> start + +13. 
Send packets by testpmd in VM2:: + + testpmd> stop + testpmd> start tx_first 32 testpmd> show port stats all + +14. Check vhost testpmd RX/TX can work normally, packets can looped between two VMs and both 8 queues can RX/TX traffic. + +15. Rerun step 11. + +Test Case 4: VM2VM virtio-pmd packed ring non-mergeable path dynamic queue size with cbdma enable and server mode +----------------------------------------------------------------------------------------------------------------- +This case tests packed ring non-mergeable path with virtio1.1 and server mode in VM2VM vhost-user/virtio-pmd topology when vhost uses the asynchronous operations with CBDMA channels,check that it can work normally after dynamically changing queue number. + +1. Bind 16 CBDMA channels to vfio-pci, as common step 1. + +2. Launch the testpmd with 2 vhost ports below commands:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \ + --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \ + --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7] + testpmd> start + +3. 
Launch VM1 and VM2 using qemu:: + + taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \ + -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 + + taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 + +4. On VM1 and VM2, bind virtio device with vfio-pci driver:: + + modprobe vfio + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0 + +5. Launch testpmd in VM1:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=4 --rxq=4 --txd=1024 --rxd=1024 + testpmd> set mac fwd + testpmd> start + +6. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=4 --rxq=4 --txd=1024 --rxd=1024 + testpmd> set mac fwd + testpmd> set txpkts 64,256,512 + testpmd> start tx_first 32 + testpmd> show port stats all + +7. Check vhost use the asynchronous data path(funtion like virtio_dev_rx_async_xxx/virtio_dev_tx_async_xxx):: + + perf top + +8. On VM2, stop the testpmd, check that both 4 queues have packets rx/tx:: + + testpmd> stop + +9. On VM1 and VM2, dynamic change queue numbers at virtio-pmd side from 4 queues to 8 queues:: + + testpmd> stop + testpmd> port stop all + testpmd> port config all rxq 8 + testpmd> port config all txq 8 + testpmd> port start all + testpmd> start + +10. Send packets by testpmd in VM2, check Check virtio-pmd RX/TX can work normally and imix packets can looped between two VMs for 1 mins:: + testpmd> stop + testpmd> start tx_first 32 + testpmd> show port stats all + +11. Rerun step 7. + +12. Stop testpmd in VM2, and check that 4 queues can RX/TX traffic. 
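The --lcore-dma lists repeated throughout the test cases above all follow the same pattern: 16 CBDMA channels spread evenly across the 4 vhost worker lcores (lcore2 to lcore5). As a rough illustration only (the helper name and the hard-coded core/channel lists below are illustrative, not part of this test plan), such a string can be generated instead of typed by hand::

    # Sketch: build "--lcore-dma=[lcore2@0000:00:04.0,...,lcore5@0000:80:04.7]"
    # by assigning the CBDMA channels evenly to the worker lcores in order.
    def build_lcore_dma(core_list, cbdma_list):
        per_core = max(1, len(cbdma_list) // len(core_list))
        pairs = []
        for i, dma in enumerate(cbdma_list):
            core = core_list[min(i // per_core, len(core_list) - 1)]
            pairs.append("lcore{}@{}".format(core, dma))
        return "[" + ",".join(pairs) + "]"

    cores = [2, 3, 4, 5]
    cbdmas = ["0000:00:04.%d" % i for i in range(8)] + ["0000:80:04.%d" % i for i in range(8)]
    print("--lcore-dma=" + build_lcore_dma(cores, cbdmas))

This reproduces the mapping used in the commands above: four channels per lcore, in PCI address order.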
From patchwork Wed Aug 17 08:25:46 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 115205
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V5 2/2] tests/vm2vm_virtio_pmd_cbdma: modify testsuite to test virtio dequeue
Date: Wed, 17 Aug 2022 04:25:46 -0400
Message-Id: <20220817082546.4018827-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: test suite reviews and discussions

From DPDK-22.07, virtio supports async dequeue for the split and packed ring
paths, so modify the vm2vm_virtio_pmd_cbdma test suite to test the split and
packed ring async dequeue feature.
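The test cases in this suite verify that vhost really takes the asynchronous data path by sampling perf top on the host and checking that the async rx/tx symbols show up in the profile. A minimal standalone sketch of that check (the function name, log file and sampling time below are illustrative assumptions, not the suite's API; it assumes perf is installed on the host)::

    import subprocess
    import time

    def async_path_is_active(func_names=("virtio_dev_rx_async", "virtio_dev_tx_async"),
                             sample_seconds=10, log_path="perf_top.log"):
        # Sample perf top for a while, then check that every expected async
        # vhost symbol appears in the captured output.
        with open(log_path, "w") as log:
            # --stdio makes perf emit plain text suitable for redirection.
            proc = subprocess.Popen(["perf", "top", "--stdio"],
                                    stdout=log, stderr=subprocess.DEVNULL)
            time.sleep(sample_seconds)
            proc.terminate()
            proc.wait()
        with open(log_path) as log:
            return all(name in log.read() for name in func_names)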
Signed-off-by: Wei Ling Acked-by: Xingguang He Tested-by: Chenyu Huang Acked-by: Lijuan Tu --- tests/TestSuite_vm2vm_virtio_pmd_cbdma.py | 516 ++++++++++++++++------ 1 file changed, 390 insertions(+), 126 deletions(-) diff --git a/tests/TestSuite_vm2vm_virtio_pmd_cbdma.py b/tests/TestSuite_vm2vm_virtio_pmd_cbdma.py index b926534e..b00d7b04 100644 --- a/tests/TestSuite_vm2vm_virtio_pmd_cbdma.py +++ b/tests/TestSuite_vm2vm_virtio_pmd_cbdma.py @@ -20,7 +20,7 @@ from framework.test_case import TestCase from framework.virt_common import VM -class TestVM2VMVirtioPmdCbdma(TestCase): +class TestVM2VMVirtioPmdCBDMA(TestCase): def set_up_all(self): self.dut_ports = self.dut.get_ports() self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) @@ -48,6 +48,7 @@ class TestVM2VMVirtioPmdCbdma(TestCase): self.result_table_create(self.table_header) self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") + self.dut.send_expect("killall -s INT perf", "#") self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") self.vm_num = 2 self.vm_dut = [] @@ -91,33 +92,6 @@ class TestVM2VMVirtioPmdCbdma(TestCase): 60, ) - @staticmethod - def generate_dms_param(queues): - das_list = [] - for i in range(queues): - das_list.append("txq{}".format(i)) - das_param = "[{}]".format(";".join(das_list)) - return das_param - - @staticmethod - def generate_lcore_dma_param(cbdma_list, core_list): - group_num = int(len(cbdma_list) / len(core_list)) - lcore_dma_list = [] - if len(cbdma_list) == 1: - for core in core_list: - lcore_dma_list.append("lcore{}@{}".format(core, cbdma_list[0])) - elif len(core_list) == 1: - for cbdma in cbdma_list: - lcore_dma_list.append("lcore{}@{}".format(core_list[0], cbdma)) - else: - for cbdma in cbdma_list: - core_list_index = int(cbdma_list.index(cbdma) / group_num) - lcore_dma_list.append( - "lcore{}@{}".format(core_list[core_list_index], cbdma) - ) - lcore_dma_param = "[{}]".format(",".join(lcore_dma_list)) - return lcore_dma_param - def start_vhost_testpmd(self, cores, ports, prefix, eal_param, param): """ launch the testpmd with different parameters @@ -130,6 +104,7 @@ class TestVM2VMVirtioPmdCbdma(TestCase): def start_vms( self, vm_queue, + mergeable=True, packed=False, server_mode=True, restart_vm1=False, @@ -152,14 +127,22 @@ class TestVM2VMVirtioPmdCbdma(TestCase): else: vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i + ",server" vm_params["opt_mac"] = "52:54:00:00:00:0%d" % (i + 1) - if not packed: + if mergeable: + mrg_rxbuf = "on" + else: + mrg_rxbuf = "off" + if packed: vm_params[ "opt_settings" - ] = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on" + ] = "disable-modern=false,mrg_rxbuf={},mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on".format( + mrg_rxbuf + ) else: vm_params[ "opt_settings" - ] = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on" + ] = "disable-modern=false,mrg_rxbuf={},mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on".format( + mrg_rxbuf + ) vm_info.set_vm_device(**vm_params) time.sleep(3) try: @@ -172,32 +155,37 @@ class TestVM2VMVirtioPmdCbdma(TestCase): self.vm_dut.append(vm_dut) self.vm.append(vm_info) - def start_vm0_testpmd(self): - param = "--tx-offloads=0x00 
--enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000" - self.vm0_pmd.start_testpmd(cores="default", param=param) - self.vm0_pmd.execute_cmd("set fwd mac") - self.vm0_pmd.execute_cmd("start") - - def start_vm1_testpmd(self, resend=False): - param = "--tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000" - if not resend: - self.vm1_pmd.start_testpmd(cores="default", param=param) - self.vm1_pmd.execute_cmd("set fwd mac") - self.vm1_pmd.execute_cmd( - "set txpkts 64,256,512,1024,2000,64,256,512,1024,2000" + def start_vm_testpmd(self, vm_pmd, queues, mergeable=True): + if mergeable: + param = "--enable-hw-vlan-strip --txq={} --rxq={} --txd=1024 --rxd=1024 --max-pkt-len=9600 --tx-offloads=0x00 --rx-offloads=0x00002000".format( + queues, queues ) - self.vm1_pmd.execute_cmd("start tx_first 1") else: - self.vm1_pmd.execute_cmd("stop") - self.vm0_pmd.execute_cmd("start") - self.vm0_pmd.execute_cmd("clear port stats all") - self.vhost_user_pmd.execute_cmd("clear port stats all") - self.vm1_pmd.execute_cmd("clear port stats all") - self.vm1_pmd.execute_cmd("start tx_first 1") + param = "--enable-hw-vlan-strip --txq={} --rxq={} --txd=1024 --rxd=1024 --tx-offloads=0x00".format( + queues, queues + ) + vm_pmd.start_testpmd(cores="default", param=param) + vm_pmd.execute_cmd("set fwd mac") + + def send_big_imix_packets_from_vm1(self): + self.vm1_pmd.execute_cmd("set txpkts 64,256,512,1024,2000,64,256,512,1024,2000") + self.vm1_pmd.execute_cmd("start tx_first 32") + self.vm1_pmd.execute_cmd("show port stats all") + + def send_small_imix_packets_from_vm1(self): + self.vm1_pmd.execute_cmd("set txpkts 64,256,512") + self.vm1_pmd.execute_cmd("start tx_first 32") + self.vm1_pmd.execute_cmd("show port stats all") + + def send_64b_packets_from_vm1(self): + self.vm1_pmd.execute_cmd("stop") + self.vm1_pmd.execute_cmd("start tx_first 32") + self.vm1_pmd.execute_cmd("show port stats all") def check_packets_of_each_queue(self, vm_pmd, queues): vm_pmd.execute_cmd("show port stats all") out = vm_pmd.execute_cmd("stop") + self.logger.info(out) for queue in range(queues): reg = "Queue= %d" % queue index = out.find(reg) @@ -211,28 +199,94 @@ class TestVM2VMVirtioPmdCbdma(TestCase): + "rx-packets: {}, tx-packets: {}".format(rx_packets, tx_packets), ) - def test_vm2vm_virtio_pmd_split_ring_mergeable_path_8_queues_cbdma_enable_with_server_mode_stable_test( + def dynamic_change_queue_size(self, dut_pmd, queues): + dut_pmd.execute_cmd("stop") + dut_pmd.execute_cmd("port stop all") + dut_pmd.execute_cmd("port config all rxq {}".format(queues)) + dut_pmd.execute_cmd("port config all txq {}".format(queues)) + dut_pmd.execute_cmd("port start all") + dut_pmd.execute_cmd("start") + + def get_and_verify_func_name_of_perf_top(self, func_name_list): + self.dut.send_expect("rm -fr perf_top.log", "# ", 120) + self.dut.send_expect("perf top > perf_top.log", "", 120) + time.sleep(10) + self.dut.send_expect("^C", "#") + out = self.dut.send_expect("cat perf_top.log", "# ", 120) + self.logger.info(out) + for func_name in func_name_list: + self.verify( + func_name in out, + "the func_name {} is not in the perf top output".format(func_name), + ) + + def test_vm2vm_virtio_pmd_split_ring_mergeable_path_dynamic_queue_size_with_cbdma_enable_and_server_mode( self, ): """ - Test Case 1: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable with server mode stable test + Test Case 1: VM2VM virtio-pmd split ring mergeable 
path dynamic queue size with cbdma enable and server mode """ + self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"] self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True) - dmas = self.generate_dms_param(8) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:] + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[1], + self.cbdma_list[2], + self.vhost_core_list[1], + self.cbdma_list[3], + self.vhost_core_list[2], + self.cbdma_list[4], + self.vhost_core_list[2], + self.cbdma_list[5], + self.vhost_core_list[2], + self.cbdma_list[6], + self.vhost_core_list[2], + self.cbdma_list[7], + self.vhost_core_list[3], + self.cbdma_list[8], + self.vhost_core_list[3], + self.cbdma_list[9], + self.vhost_core_list[3], + self.cbdma_list[10], + self.vhost_core_list[3], + self.cbdma_list[11], + self.vhost_core_list[4], + self.cbdma_list[12], + self.vhost_core_list[4], + self.cbdma_list[13], + self.vhost_core_list[4], + self.cbdma_list[14], + self.vhost_core_list[4], + self.cbdma_list[15], + ) ) eal_param = ( - "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas={}'".format( - dmas - ) - + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas={}'".format( - dmas - ) + "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]'" + + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]'" ) param = ( - "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8" - + " --lcore-dma={}".format(lcore_dma) + "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4" + + " --lcore-dma=[%s]" % lcore_dma ) self.start_vhost_testpmd( cores=self.vhost_core_list, @@ -241,16 +295,35 @@ class TestVM2VMVirtioPmdCbdma(TestCase): eal_param=eal_param, param=param, ) - self.start_vms(vm_queue=8, packed=False, server_mode=True) + self.start_vms(vm_queue=8, mergeable=True, packed=False, server_mode=True) self.vm0_pmd = PmdOutput(self.vm_dut[0]) self.vm1_pmd = PmdOutput(self.vm_dut[1]) - self.start_vm0_testpmd() - self.start_vm1_testpmd(resend=False) + self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=8, mergeable=True) + self.vm0_pmd.execute_cmd("start") + self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=True) + self.send_big_imix_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) + self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4) + self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4) + + self.dynamic_change_queue_size(dut_pmd=self.vhost_user_pmd, queues=8) + self.vm0_pmd.execute_cmd("start") + self.send_64b_packets_from_vm1() + self.get_and_verify_func_name_of_perf_top(self.check_path) self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8) self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8) - for _ in range(10): + + for _ in range(5): self.logger.info("Quit and relaunch vhost side testpmd") - self.vhost_user_pmd.execute_cmd("quit", "#") + self.vhost_user_pmd.quit() + eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + + " --vdev 
'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'"
+        )
+        param = (
+            "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
+            + " --lcore-dma=[%s]" % lcore_dma
+        )
         self.start_vhost_testpmd(
             cores=self.vhost_core_list,
             ports=self.cbdma_list,
@@ -258,32 +331,78 @@ class TestVM2VMVirtioPmdCbdma(TestCase):
             eal_param=eal_param,
             param=param,
         )
-        self.start_vm1_testpmd(resend=True)
+        self.vm0_pmd.execute_cmd("start")
+        self.send_64b_packets_from_vm1()
         self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8)
         self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8)
 
-    def test_vm2vm_virtio_pmd_split_ring_mergeable_path_dynamic_queue_size_cbdma_enable_with_server_mode_test(
+    def test_vm2vm_virtio_pmd_split_ring_non_mergeable_path_dynamic_queue_size_with_cbdma_enable_and_server_mode(
         self,
     ):
         """
-        Test Case 2: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDMA enable with server mode test
+        Test Case 2: VM2VM virtio-pmd split ring non-mergeable path dynamic queue size with cbdma enable and server mode
         """
+        self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"]
         self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True)
-        dmas = self.generate_dms_param(4)
-        lcore_dma = self.generate_lcore_dma_param(
-            cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:]
+        lcore_dma = (
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s"
+            % (
+                self.vhost_core_list[1],
+                self.cbdma_list[0],
+                self.vhost_core_list[1],
+                self.cbdma_list[1],
+                self.vhost_core_list[1],
+                self.cbdma_list[2],
+                self.vhost_core_list[1],
+                self.cbdma_list[3],
+                self.vhost_core_list[2],
+                self.cbdma_list[4],
+                self.vhost_core_list[2],
+                self.cbdma_list[5],
+                self.vhost_core_list[2],
+                self.cbdma_list[6],
+                self.vhost_core_list[2],
+                self.cbdma_list[7],
+                self.vhost_core_list[3],
+                self.cbdma_list[8],
+                self.vhost_core_list[3],
+                self.cbdma_list[9],
+                self.vhost_core_list[3],
+                self.cbdma_list[10],
+                self.vhost_core_list[3],
+                self.cbdma_list[11],
+                self.vhost_core_list[4],
+                self.cbdma_list[12],
+                self.vhost_core_list[4],
+                self.cbdma_list[13],
+                self.vhost_core_list[4],
+                self.cbdma_list[14],
+                self.vhost_core_list[4],
+                self.cbdma_list[15],
+            )
         )
         eal_param = (
-            "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas={}'".format(
-                dmas
-            )
-            + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas={}'".format(
-                dmas
-            )
+            "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]'"
+            + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]'"
         )
         param = (
-            " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4"
-            + " --lcore-dma={}".format(lcore_dma)
+            "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4"
+            + " --lcore-dma=[%s]" % lcore_dma
         )
         self.start_vhost_testpmd(
             cores=self.vhost_core_list,
@@ -292,54 +411,91 @@ class TestVM2VMVirtioPmdCbdma(TestCase):
             eal_param=eal_param,
             param=param,
         )
-        self.start_vms(vm_queue=8, packed=False, server_mode=True)
+        self.start_vms(vm_queue=8, mergeable=False, packed=False, server_mode=True)
         self.vm0_pmd = PmdOutput(self.vm_dut[0])
         self.vm1_pmd = PmdOutput(self.vm_dut[1])
-        self.start_vm0_testpmd()
-        self.start_vm1_testpmd(resend=False)
+        self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=8, mergeable=False)
+        self.vm0_pmd.execute_cmd("start")
+        self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=False)
+        self.send_small_imix_packets_from_vm1()
+        self.get_and_verify_func_name_of_perf_top(self.check_path)
+        self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4)
+        self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4)
+
+        self.dynamic_change_queue_size(dut_pmd=self.vm0_pmd, queues=4)
+        self.dynamic_change_queue_size(dut_pmd=self.vm1_pmd, queues=4)
+        self.send_64b_packets_from_vm1()
+        self.get_and_verify_func_name_of_perf_top(self.check_path)
         self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4)
         self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4)
-        for _ in range(10):
-            self.logger.info("Quit and relaunch vhost side testpmd with 8 queues")
-            self.vhost_user_pmd.execute_cmd("quit", "#")
-            dmas = self.generate_dms_param(8)
-            eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas={}'".format(
-                dmas
-            ) + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas={}'".format(
-                dmas
-            )
-            param = (
-                " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
-                + " --lcore-dma={}".format(lcore_dma)
-            )
-            self.start_vhost_testpmd(
-                cores=self.vhost_core_list,
-                ports=self.cbdma_list,
-                prefix="vhost",
-                eal_param=eal_param,
-                param=param,
-            )
-            self.start_vm1_testpmd(resend=True)
-            self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8)
-            self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8)
 
-    def test_vm2vm_virtio_pmd_packed_ring_mergeable_path_8_queues_cbdma_enable_test(
+    def test_vm2vm_virtio_pmd_packed_ring_mergeable_path_dynamic_queue_size_with_cbdma_enable_and_server_mode(
         self,
     ):
         """
-        Test Case 3: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable test
+        Test Case 3: VM2VM virtio-pmd packed ring mergeable path dynamic queue size with cbdma enable and server mode
         """
+        self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"]
         self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True)
-        dmas = self.generate_dms_param(8)
-        lcore_dma = self.generate_lcore_dma_param(
-            cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:]
+        lcore_dma = (
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s"
+            % (
+                self.vhost_core_list[1],
+                self.cbdma_list[0],
+                self.vhost_core_list[1],
+                self.cbdma_list[1],
+                self.vhost_core_list[1],
+                self.cbdma_list[2],
+                self.vhost_core_list[1],
+                self.cbdma_list[3],
+                self.vhost_core_list[2],
+                self.cbdma_list[4],
+                self.vhost_core_list[2],
+                self.cbdma_list[5],
+                self.vhost_core_list[2],
+                self.cbdma_list[6],
+                self.vhost_core_list[2],
+                self.cbdma_list[7],
+                self.vhost_core_list[3],
+                self.cbdma_list[8],
+                self.vhost_core_list[3],
+                self.cbdma_list[9],
+                self.vhost_core_list[3],
+                self.cbdma_list[10],
+                self.vhost_core_list[3],
+                self.cbdma_list[11],
+                self.vhost_core_list[4],
+                self.cbdma_list[12],
+                self.vhost_core_list[4],
+                self.cbdma_list[13],
+                self.vhost_core_list[4],
+                self.cbdma_list[14],
+                self.vhost_core_list[4],
+                self.cbdma_list[15],
+            )
+        )
+        eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]'"
+            + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]'"
         )
-        eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas={}'".format(
-            dmas
-        ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas={}'".format(dmas)
         param = (
-            " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
-            + " --lcore-dma={}".format(lcore_dma)
+            "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4"
+            + " --lcore-dma=[%s]" % lcore_dma
         )
         self.start_vhost_testpmd(
            cores=self.vhost_core_list,
@@ -348,22 +504,130 @@ class TestVM2VMVirtioPmdCbdma(TestCase):
             eal_param=eal_param,
             param=param,
         )
-        self.start_vms(vm_queue=8, packed=True, server_mode=False)
+        self.start_vms(vm_queue=8, mergeable=True, packed=True, server_mode=True)
         self.vm0_pmd = PmdOutput(self.vm_dut[0])
         self.vm1_pmd = PmdOutput(self.vm_dut[1])
-        self.start_vm0_testpmd()
-        self.start_vm1_testpmd(resend=False)
-        self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8)
-        self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8)
+        self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=8, mergeable=True)
+        self.vm0_pmd.execute_cmd("start")
+        self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8, mergeable=True)
+        self.send_big_imix_packets_from_vm1()
+        self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4)
+        self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4)
+        self.logger.info("Quit and relaunch VM2 with split ring")
         self.vm1_pmd.execute_cmd("quit", "#")
         self.vm[1].stop()
         self.vm_dut.remove(self.vm_dut[1])
         self.vm.remove(self.vm[1])
-        self.start_vms(vm_queue=8, packed=False, restart_vm1=True, server_mode=False)
+        self.start_vms(
+            vm_queue=8, mergeable=True, packed=False, restart_vm1=True, server_mode=True
+        )
         self.vm1_pmd = PmdOutput(self.vm_dut[1])
         self.vm0_pmd.execute_cmd("start")
-        self.start_vm1_testpmd(resend=False)
+        self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=8)
+        self.send_big_imix_packets_from_vm1()
+        self.get_and_verify_func_name_of_perf_top(self.check_path)
+        self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4)
+        self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4)
+
+        self.dynamic_change_queue_size(dut_pmd=self.vhost_user_pmd, queues=8)
+        self.vm0_pmd.execute_cmd("start")
+        self.send_64b_packets_from_vm1()
+        self.get_and_verify_func_name_of_perf_top(self.check_path)
+        self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8)
+        self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8)
+
+    def test_vm2vm_virtio_pmd_packed_ring_non_mergeable_path_dynamic_queue_size_with_cbdma_enable_and_server_mode(
+        self,
+    ):
+        """
+        Test Case 4: VM2VM virtio-pmd packed ring non-mergeable path dynamic queue size with cbdma enable and server mode
+        """
+        self.check_path = ["virtio_dev_rx_async", "virtio_dev_tx_async"]
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True)
+        lcore_dma = (
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s"
+            % (
+                self.vhost_core_list[1],
+                self.cbdma_list[0],
+                self.vhost_core_list[1],
+                self.cbdma_list[1],
+                self.vhost_core_list[1],
+                self.cbdma_list[2],
+                self.vhost_core_list[1],
+                self.cbdma_list[3],
+                self.vhost_core_list[2],
+                self.cbdma_list[4],
+                self.vhost_core_list[2],
+                self.cbdma_list[5],
+                self.vhost_core_list[2],
+                self.cbdma_list[6],
+                self.vhost_core_list[2],
+                self.cbdma_list[7],
+                self.vhost_core_list[3],
+                self.cbdma_list[8],
+                self.vhost_core_list[3],
+                self.cbdma_list[9],
+                self.vhost_core_list[3],
+                self.cbdma_list[10],
+                self.vhost_core_list[3],
+                self.cbdma_list[11],
+                self.vhost_core_list[4],
+                self.cbdma_list[12],
+                self.vhost_core_list[4],
+                self.cbdma_list[13],
+                self.vhost_core_list[4],
+                self.cbdma_list[14],
+                self.vhost_core_list[4],
+                self.cbdma_list[15],
+            )
+        )
+        eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'"
+            + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'"
+        )
+        param = (
+            "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
+            + " --lcore-dma=[%s]" % lcore_dma
+        )
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            ports=self.cbdma_list,
+            prefix="vhost",
+            eal_param=eal_param,
+            param=param,
+        )
+        self.start_vms(vm_queue=8, mergeable=False, packed=True, server_mode=True)
+        self.vm0_pmd = PmdOutput(self.vm_dut[0])
+        self.vm1_pmd = PmdOutput(self.vm_dut[1])
+        self.start_vm_testpmd(vm_pmd=self.vm0_pmd, queues=4, mergeable=False)
+        self.vm0_pmd.execute_cmd("start")
+        self.start_vm_testpmd(vm_pmd=self.vm1_pmd, queues=4, mergeable=False)
+        self.send_small_imix_packets_from_vm1()
+        self.get_and_verify_func_name_of_perf_top(self.check_path)
+        self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4)
+        self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4)
+
+        self.dynamic_change_queue_size(self.vm0_pmd, queues=8)
+        self.dynamic_change_queue_size(self.vm1_pmd, queues=8)
+        self.vm0_pmd.execute_cmd("start")
+        self.send_64b_packets_from_vm1()
+        self.get_and_verify_func_name_of_perf_top(self.check_path)
         self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8)
         self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8)
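
Aside for readers of the diff (not part of the patch): each test case above writes out the "--lcore-dma" mapping pair by pair, binding 16 CBDMA devices to four vhost worker cores, four channels per core. A minimal sketch of a helper that would produce the same comma-separated string is shown below; the function name, core IDs and BDF addresses in the example are hypothetical and are not defined by this patch or by the DTS framework.

    # Illustrative sketch only: rebuild the hand-written "--lcore-dma" string,
    # assigning DMA devices to cores in consecutive blocks of `dmas_per_core`.
    def build_lcore_dma(core_list, cbdma_list, dmas_per_core=4):
        pairs = []
        for i, dma in enumerate(cbdma_list):
            core = core_list[i // dmas_per_core]   # 4 DMA channels per core
            pairs.append("lcore%s@%s" % (core, dma))
        return ",".join(pairs)

    # Example with made-up values:
    # build_lcore_dma(["29", "30"], ["0000:00:04.0", "0000:00:04.1",
    #                                "0000:00:04.2", "0000:00:04.3",
    #                                "0000:00:04.4", "0000:00:04.5",
    #                                "0000:00:04.6", "0000:00:04.7"])
    # -> "lcore29@0000:00:04.0,...,lcore30@0000:00:04.7"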