From patchwork Fri Apr 22 09:47:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 110124 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 05314A0032; Fri, 22 Apr 2022 11:47:27 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id F37B24067C; Fri, 22 Apr 2022 11:47:26 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id C3E3C40042 for ; Fri, 22 Apr 2022 11:47:25 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650620846; x=1682156846; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=fcNVOiOKOM/fJeKKmVvHVrdDUd0jLny1+gDO/RV3i5Y=; b=kPOd1PdHjXSG1FVw8XQgi590VyGckr31oK0G49iwwzAm9cGqasqs2/HB i6oVlzzgQqLsESihs7MyRYAr0eW9fsIKtVPPRuyX35T6efznBcu5U6CsI 4g016c0QPVQSDOua/uslqLZ801/8aQXkmD/LzMmYXsTwtgvoo9MQBu5ab +RcdwPCZk1CuUaAdFoLD+ErzS8iJHz4gvMiPunni3xEU5f+t1GP9lKpIQ zIogFHp19GQbGVQjAf7m93l2NcKzmgUSjnBFUDYAcy4Ev2idDokqLlO0/ SjjweTWkEg57uHirB7/bFzo8lgQ2NFkfHlR+/KbU72z+W39ruX1NkZ/Jg w==; X-IronPort-AV: E=McAfee;i="6400,9594,10324"; a="264409610" X-IronPort-AV: E=Sophos;i="5.90,281,1643702400"; d="scan'208";a="264409610" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2022 02:47:24 -0700 X-IronPort-AV: E=Sophos;i="5.90,281,1643702400"; d="scan'208";a="728441823" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2022 02:47:23 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 1/3] test_plans/index: add vm2vm_virtio_user_cbdma Date: Fri, 22 Apr 2022 17:47:19 +0800 Message-Id: <20220422094719.1564349-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), add new testsuite vm2vm_virtio_user_cbdma_test_plan for coverage the vm2vm split ring and packed ring path test with cbdma. 1) Add new testplan vm2vm_virtio_user_cbdma_test_plan into test_plans/index.rst. Signed-off-by: Wei Ling --- test_plans/index.rst | 1 + 1 file changed, 1 insertion(+) diff --git a/test_plans/index.rst b/test_plans/index.rst index f8118d14..05794f98 100644 --- a/test_plans/index.rst +++ b/test_plans/index.rst @@ -297,6 +297,7 @@ The following are the test plans for the DPDK DTS automated test system. 
port_control_test_plan port_representor_test_plan vm2vm_virtio_user_test_plan + vm2vm_virtio_user_cbdma_test_plan vmdq_dcb_test_plan acl_test_plan power_negative_test_plan From patchwork Fri Apr 22 09:47:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 110125 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2C045A0032; Fri, 22 Apr 2022 11:47:39 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 23064410E7; Fri, 22 Apr 2022 11:47:39 +0200 (CEST) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by mails.dpdk.org (Postfix) with ESMTP id 2C6F940042 for ; Fri, 22 Apr 2022 11:47:37 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650620857; x=1682156857; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=p/8li4GVlDtz0fS+Jy0UqjwzFOASeSxiRtykClMmeIE=; b=eL4Cy2ht7v1pYBr3XjCY/oENcAMK6EXb1XRihdOlM1j85duRl5aA2bA8 E6v4dnAsyR0Hn2iTBPg0G035HmpdhojsPwvqOCHT6vDLuDBhxkcxYue8m 2HbYWQUmEs8mh7bMBZbG0pl5DuLQ0A0LbhKjCqjql16jbx/SsJESjSkhd 2DHGp5eJgI3mU49NxajKRDQ5eSlnyQpRNCEudZk6VUW3KhYpiYmWI5NAZ 4Tow8cCAD+38IGhuieZ1AT5+dJavAb92mINn4dtOOB6SvznaPJigW2bYy qR2Bic51yiyk9clxZE2KQPhRWDs5Eu/7nSeh8aN23hpCDvT8uihMRu4l6 A==; X-IronPort-AV: E=McAfee;i="6400,9594,10324"; a="262231532" X-IronPort-AV: E=Sophos;i="5.90,281,1643702400"; d="scan'208";a="262231532" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2022 02:47:36 -0700 X-IronPort-AV: E=Sophos;i="5.90,281,1643702400"; d="scan'208";a="728441867" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2022 02:47:34 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 2/3] test_plans/vm2vm_virtio_user_cbdma_test_plan: add vm2vm_virtio_user_cbdma testplan Date: Fri, 22 Apr 2022 17:47:30 +0800 Message-Id: <20220422094730.1564407-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), add new testsuite vm2vm_virtio_user_cbdma for coverage the vm2vm split ring and packed ring path test with cbdma. 1) Add new testplan test_plans/vm2vm_virtio_user_cbdma_test_plan into test_plans. Signed-off-by: Wei Ling --- .../vm2vm_virtio_user_cbdma_test_plan.rst | 1078 +++++++++++++++++ 1 file changed, 1078 insertions(+) create mode 100644 test_plans/vm2vm_virtio_user_cbdma_test_plan.rst diff --git a/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst b/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst new file mode 100644 index 00000000..b7afbdec --- /dev/null +++ b/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst @@ -0,0 +1,1078 @@ +.. Copyright (c) <2022>, Intel Corporation + All rights reserved. 
+ + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions + are met: + + - Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + + - Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided with the + distribution. + + - Neither the name of Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived + from this software without specific prior written permission. + + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS + FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE + COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, + INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES + (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR + SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED + OF THE POSSIBILITY OF SUCH DAMAGE. + +================================================= +vm2vm vhost-user/virtio-user test plan +================================================= + +Description +=========== +Test indirect descriptor feature. +For example, the split ring mergeable inorder path use non-indirect descriptor, +the 2000,2000,2000,2000 chain packets will need 4 consequent ring, still need one ring put header. +the split ring mergeable path use indirect descriptor, +the 2000,2000,2000,2000 chain packets will only occupy one ring. + +This test plan test several features in VM2VM topo: +1. Split ring and packed ring vm2vm test when vhost enqueue operation with multi-CBDMA channels. +2. payload check +3. iova=pa mode (When DMA devices are bound to vfio driver, VA mode is the default and recommended. +For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.) + +For more about dpdk-testpmd sample, please refer to the DPDK docments: +https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html + +For virtio-user vdev parameter, you can refer to the DPDK docments: +https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage. + +Prerequisites +============= + +Topology +-------- +Test flow: Virtio-user-->Vhost-user-->Testpmd-->Vhost-user-->Virtio-user + +Hardware +-------- +Supportted NICs: ALL + +Software +-------- +Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz + +General set up +-------------- +1. Compile DPDK:: + + # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library= + # ninja -C -j 110 + +2. 
Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID:: + + # ./usertools/dpdk-devbind.py -s + + Network devices using kernel driver + =================================== + 0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci + + DMA devices using kernel driver + =============================== + 0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci + 0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci + +Test case +========= + +Common steps +------------ +1. Bind 1 NIC port and CBDMA channels to vfio-pci:: + + # ./usertools/dpdk-devbind.py -b vfio-pci + # ./usertools/dpdk-devbind.py -b vfio-pci + + For example, 2 CBDMA channels:: + # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1 + +2. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 \ + -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' + +Test Case 1: split virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enable +-------------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with split ring non-mergeable path with multi-queues and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 2 CBDMA channel to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1] + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ + --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096 \ + -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. Launch virtio-user0 and send packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ + --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096 \ + -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set burst 1 + testpmd>set txpkts 64,128,256,512 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set txpkts 64 + testpmd>start tx_first 1 + testpmd>stop + +6. Start vhost testpmd, check virtio-user1 RX-packets is 566 and RX-bytes is 486016, 54 packets with 960 length and 512 packets with 64 length in pdump-virtio-rx.pcap. + +7. Quit vhost testpmd:: + + testpmd>quit + +8. Clear virtio-user1 port stats, execute below command:: + + testpmd>stop + testpmd>clear port stats all + testpmd>start + +9. 
Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1] + +10. Rerun step 3. + +11. Virtio-user0 send packets:: + + testpmd>set burst 1 + testpmd>set txpkts 64,128,256,512 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set txpkts 64 + testpmd>start tx_first 1 + testpmd>stop + +12. Rerun step 6. + +Test Case 2: split virtqueue vm2vm mergeable path multi-queues payload check with cbdma enable +---------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with split ring mergeable path with multi-queues and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 1 CBDMA channel to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0] + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. Launch virtio-user0 and send packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + +6. Start vhost testpmd, check 502 packets and 279232 bytes received by virtio-user1 and 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap. + +7. Quit vhost testpmd:: + + testpmd>quit + +8. Clear virtio-user1 port stats, execute below command:: + + testpmd>stop + testpmd>clear port stats all + testpmd>start + +9. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0] + +10. Rerun step 3. + +11. 
Virtio-user0 send packets:: + + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + +12. Rerun step 6. + +Test Case 3: split virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enable +---------------------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with split ring inorder non-mergeable path with multi-queues and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 5 CBDMA channel to vfio-pci, as common step 1. + +1. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4] + +2. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start + +3. Start pdump to capture virtio-user1 packets, as common step 2. + +4. Launch virtio-user0 and send packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + +5. Start vhost testpmd, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap. + +6. Quit vhost testpmd:: + + testpmd>quit + +7. Clear virtio-user1 port stats, execute below command:: + + testpmd>stop + testpmd>clear port stats all + testpmd>start + +8. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4] + +9. Rerun step 3. + +10. Virtio-user0 send packets:: + + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + +11. Rerun step 5. 
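+
+Note: the expected RX counters in the payload-check steps can be reproduced from
+the testpmd settings above. ``start tx_first N`` injects N bursts on each of the
+2 queues, so ``set burst 32`` plus ``start tx_first 7`` yields 7 * 32 * 2 = 448
+packets of 64 bytes (28672 bytes), while the chained ``64,256,2000,64,256,2000``
+packets (4640 bytes each) are only delivered on mergeable paths. A rough
+stand-alone illustration of this arithmetic (not part of the test steps; the
+helper name is made up)::
+
+    # illustration only: derive the RX counters checked in the steps above
+    def expected_rx(bursts, burst_size, pkt_len, queues=2):
+        pkts = bursts * burst_size * queues
+        return pkts, pkts * pkt_len
+
+    # non-mergeable paths only receive the 448 64-byte packets
+    print(expected_rx(bursts=7, burst_size=32, pkt_len=64))    # (448, 28672)
+
+    # mergeable paths additionally receive 54 chained 4640-byte packets
+    p1, b1 = expected_rx(7, 32, 64)
+    p2, b2 = expected_rx(27, 1, 64 + 256 + 2000 + 64 + 256 + 2000)
+    print(p1 + p2, b1 + b2)                                    # 502 279232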
+ +Test Case 4: split virtqueue vm2vm vectorized path multi-queues payload check with cbdma enable +----------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with split ring vectorized path with multi-queues and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 8 CBDMA channel to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. Launch virtio-user0 and send packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +6. Start vhost testpmd, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap. + +7. Quit vhost testpmd:: + + testpmd>quit + +8. Clear virtio-user1 port stats, execute below command:: + + testpmd>stop + testpmd>clear port stats all + testpmd>start + +9. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + +10. Rerun step 3. + +11. Virtio-user0 send packets:: + + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +12. Rerun step 6. 
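+
+The ``--lcore-dma=[lcore11@0000:00:04.0,...]`` argument used in the vhost
+commands above assigns each CBDMA channel to the forwarding lcore that submits
+copies to it. The automated suite builds this string from the bound channel
+list (see ``generate_lcore_dma_param`` in the accompanying test suite patch);
+a minimal stand-alone sketch of the same idea, assuming the core and channel
+lists are already known, is::
+
+    # illustration only: build an --lcore-dma value from lcores and CBDMA devices
+    def lcore_dma_param(cbdma_list, core_list):
+        group = max(len(cbdma_list) // len(core_list), 1)
+        pairs = [
+            "lcore{}@{}".format(core_list[min(i // group, len(core_list) - 1)], dev)
+            for i, dev in enumerate(cbdma_list)
+        ]
+        return "[{}]".format(",".join(pairs))
+
+    print(lcore_dma_param(["0000:00:04.0", "0000:00:04.1"], ["11"]))
+    # [lcore11@0000:00:04.0,lcore11@0000:00:04.1]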
+ +Test Case 5: Split virtqueue vm2vm inorder mergeable path test non-indirect descriptor with cbdma enable +-------------------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with split ring inorder mergeable path non-indirect descriptor and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 4 CBDMA channel to vfio-pci, as common step 1. + +2. Launch testpmd by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3] + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ + --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. Launch virtio-user0 and send packets(include 251 small packets and 32 8K packets):: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ + --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + testpmd>set burst 1 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set txpkts 2000,2000,2000,2000 + testpmd>start tx_first 1 + testpmd>stop + +6. Start vhost, then quit pdump and three testpmd, about split virtqueue inorder mergeable path, it use the non-indirect descriptors, +the 8k length pkt will occupies 5 ring:2000,2000,2000,2000 will need 4 consequent ring, +still need one ring put header. So check 504 packets and 48128 bytes received by virtio-user1 and 502 packets with 64 length and 2 packets with 8K length in pdump-virtio-rx.pcap. + +7. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3] + +8. Rerun step 3-6. + +Test Case 6: Split virtqueue vm2vm mergeable path test indirect descriptor with cbdma enable +-------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with split ring inorder mergeable path indirect descriptor and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 4 CBDMA channel to vfio-pci, as common step 1. + +2. 
Launch testpmd by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3] + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ + --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. Launch virtio-user0 and send packets(include 251 small packets and 32 8K packets):: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ + --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + testpmd>set burst 1 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set txpkts 2000,2000,2000,2000 + testpmd>start tx_first 1 + testpmd>stop + +6. Start vhost, then quit pdump and three testpmd, about split virtqueue mergeable path, it use the indirect descriptors, the 8k length pkt will just occupies one ring. +So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. + +7. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3] + +8. Rerun step 3-6. + +Test Case 7: packed virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enable +--------------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with packed ring non-mergeable path with multi-queues and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 2 CBDMA channel to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=va -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1] + +3. 
Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ + --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. Launch virtio-user0 and send packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ + --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +6. Start vhost testpmd, check virtio-user1 RX-packets is 448 and RX-bytes is 28672, 448 packets with 64 length in pdump-virtio-rx.pcap. + +7. Quit vhost testpmd:: + + testpmd>quit + +8. Clear virtio-user1 port stats, execute below command:: + + testpmd>stop + testpmd>clear port stats all + testpmd>start + +9. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=pa -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1] + +10. Rerun step 3. + +11. Virtio-user0 send packets:: + + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +12. Rerun step 6. + +Test Case 8: packed virtqueue vm2vm mergeable path multi-queues payload check with cbdma enable +----------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with packed ring mergeable path with multi-queues and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 1 CBDMA channel to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=va -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0] + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. 
Launch virtio-user0 and send packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + +6. Start vhost testpmd, then quit pdump, check 502 packets and 279232 bytes received by virtio-user1 and 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap. + +7. Quit vhost testpmd:: + + testpmd>quit + +8. Clear virtio-user1 port stats, execute below command:: + + testpmd>stop + testpmd>clear port stats all + testpmd>start + +9. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=pa -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0] + +10. Rerun step 3. + +11. Virtio-user0 send packets:: + + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + +12. Rerun step 6. + +Test Case 9: packed virtqueue vm2vm inorder mergeable path multi-queues payload check with cbdma enable +------------------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with packed ring inorder mergeable path with multi-queues and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 5 CBDMA channel to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4] + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. 
Launch virtio-user0 and send 8k length packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + +6. Start vhost testpmd, then quit pdump, check 502 packets and 279232 bytes received by virtio-user1 and 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap. + +7. Quit vhost testpmd:: + + testpmd>quit + +8. Clear virtio-user1 port stats, execute below command:: + + testpmd>stop + testpmd>clear port stats all + testpmd>start + +9. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4] + +10. Rerun step 3. + +11. Virtio-user0 send packets:: + + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + +12. Rerun step 5. + +Test Case 10: packed virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enable +------------------------------------------------------------------------------------------------------------ +This case uses test of 2 virtio-user with packed ring inorder mergeable path with multi-queues and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 8 CBDMA channel to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. 
Launch virtio-user0 and send 8k length packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +6. Start vhost testpmd, then quit pdump, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap. + +7. Quit vhost testpmd:: + + testpmd>quit + +8. Clear virtio-user1 port stats, execute below command:: + + testpmd>stop + testpmd>clear port stats all + testpmd>start + +9. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + +10. Rerun step 3. + +11. Virtio-user0 send packets:: + + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +12. Rerun step 6. + +Test Case 11: packed virtqueue vm2vm vectorized-rx path multi-queues payload check with cbdma enable +---------------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with packed ring vectorized-rx path with multi-queues and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 8 CBDMA channel to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7] + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. 
Launch virtio-user0 and send 8k length packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +6. Start vhost testpmd, then quit pdump, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap. + +7. Quit vhost testpmd:: + + testpmd>quit + +8. Clear virtio-user1 port stats, execute below command:: + + testpmd>stop + testpmd>clear port stats all + testpmd>start + +9. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore12@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7] + +10. Rerun step 3. + +11. Virtio-user0 send packets:: + + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +12. Rerun step 5. + +Test Case 12: packed virtqueue vm2vm vectorized path multi-queues payload check with ring size is not power of 2 and cbdma enable +--------------------------------------------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with packed ring vectorized path with ring size is not power of 2 with multi-queues and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 8 CBDMA channel to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + +3. 
Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. Launch virtio-user0 and send 8k length packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097 + + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +6. Start vhost testpmd, then quit pdump, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap. + +7. Quit vhost testpmd:: + + testpmd>quit + +8. Clear virtio-user1 port stats, execute below command:: + + testpmd>stop + testpmd>clear port stats all + testpmd>start + +9. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore12@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7] + +10. Rerun step 3. + +11. Virtio-user0 send packets:: + + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +12. Rerun step 5. + +Test Case 13: packed virtqueue vm2vm vectorized-tx path multi-queues test indirect descriptor with cbdma enable +--------------------------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with packed ring vectorized-tx path with multi-queues indirect descriptor and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 8 CBDMA channel to vfio-pci, as common step 1. + +2. 
Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. Launch virtio-user0 and send 8k length packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + + testpmd>set burst 1 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set txpkts 2000,2000,2000,2000 + testpmd>start tx_first 1 + testpmd>stop + +6. Start vhost, then quit pdump and three testpmd, about packed virtqueue vectorized-tx path, it use the indirect descriptors, the 8k length pkt will just occupies one ring. +So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. + +7. Relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + +8. Rerun step 3-6. + +Test Case 14: packed virtqueue vm2vm vectorized-tx path test batch processing with cbdma enable +----------------------------------------------------------------------------------------------- +This case uses test of 2 virtio-user with packed ring vectorized-tx path with multi-queues indirect descriptor and vhost with testpmd to forward packets, +and uses pdump to capture virtio-user side receive packets. + +1. Bind 8 CBDMA channel to vfio-pci, as common step 1. + +2. 
Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=1,client=1,dmas=[txq0],dma_ring_size=2048' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=1,client=1,dmas=[txq0],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --rxq=1 --txq=1 --txd=256 --rxd=256 --no-flush-rx \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \ + -- -i --nb-cores=1 --rxq=1 --txq=1 --txd=256 --rxd=256 + testpmd>set fwd rxonly + testpmd>start + +4. Start pdump to capture virtio-user1 packets, as common step 2. + +5. Launch virtio-user0 and send 1 packet:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \ + -- -i --nb-cores=1 --rxq=1 --txq=1 --txd=256 --rxd=256 + testpmd>set burst 1 + testpmd>start tx_first 1 + testpmd>stop + +6. Start vhost, then quit pdump and three testpmd, check 1 packet and 64 bytes received by virtio-user1 and 1 packet with 64 length in pdump-virtio-rx.pcap. From patchwork Fri Apr 22 09:47:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 110126 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5417BA0032; Fri, 22 Apr 2022 11:47:51 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4CACC410EF; Fri, 22 Apr 2022 11:47:51 +0200 (CEST) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id BD4DC40042 for ; Fri, 22 Apr 2022 11:47:48 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650620868; x=1682156868; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=BTdY+y+SoWrCrrphxLrX8/EqdOih+lvYpMvrohiVX1I=; b=O+DulLTwpVbMLDd/cgsuZlBjVQLspzR3JvQUPTJSpMX6bueQBRMDudWk 7ZcjHjCJaBtF3J5/Ok98O1b1UX3PS/KVlGNpohqa9IgzAkAdTLbw3mzS0 hn2lp1GHePIqr2/4P7yLXF7V0F28tJJNdXivns7jvGQVU2Udk1cp536b7 36Tv5UJmQnznonD96i80C0jkJu66vrzEf5H5RIgkwRa2NPXHi91kAkPx5 h0JVytaGXHArbeq5bwLU2nItrSf0MJDrwL0F6ovGuO0Ocq72+uBP6cqNp SaKwrIY9XJL4LDcZIZzpGM/5oP0sfm4bbUyw4uzCOndoJA6kyYvXXH/il Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10324"; a="245208063" X-IronPort-AV: E=Sophos;i="5.90,281,1643702400"; d="scan'208";a="245208063" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2022 02:47:47 -0700 X-IronPort-AV: E=Sophos;i="5.90,281,1643702400"; d="scan'208";a="728441915" Received: from unknown (HELO localhost.localdomain) 
([10.239.251.222]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Apr 2022 02:47:45 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 3/3] tests/vm2vm_virtio_user_cbdma: add vm2vm_virtio_user_cbdma testsuite Date: Fri, 22 Apr 2022 17:47:41 +0800 Message-Id: <20220422094741.1564468-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), add new testsuite vm2vm_virtio_user_cbdma for coverage the vm2vm split ring and packed ring path test with cbdma. 1) Add new testsuite tests/vm2vm_virtio_user_cbdma into tests. Signed-off-by: Wei Ling --- tests/TestSuite_vm2vm_virtio_user_cbdma.py | 1344 ++++++++++++++++++++ 1 file changed, 1344 insertions(+) create mode 100644 tests/TestSuite_vm2vm_virtio_user_cbdma.py diff --git a/tests/TestSuite_vm2vm_virtio_user_cbdma.py b/tests/TestSuite_vm2vm_virtio_user_cbdma.py new file mode 100644 index 00000000..b3f78778 --- /dev/null +++ b/tests/TestSuite_vm2vm_virtio_user_cbdma.py @@ -0,0 +1,1344 @@ +# BSD LICENSE +# +# Copyright(c) <2022> Intel Corporation. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +""" +DPDK Test suite. 
+ +Test cases for vm2vm virtio-user +This suite include split virtqueue vm2vm in-order mergeable, +in-order non-mergeable,mergeable, non-mergeable, vector_rx path +test and packed virtqueue vm2vm in-order mergeable, in-order non-mergeable, +mergeable, non-mergeable path test +""" +import re + +from framework.packet import Packet +from framework.pmd_output import PmdOutput +from framework.test_case import TestCase + + +class TestVM2VMVirtioUserCbdma(TestCase): + def set_up_all(self): + self.memory_channel = self.dut.get_memory_channels() + self.dump_virtio_pcap = "/tmp/pdump-virtio-rx.pcap" + self.dump_vhost_pcap = "/tmp/pdump-vhost-rx.pcap" + self.app_pdump = self.dut.apps_name["pdump"] + self.dut_ports = self.dut.get_ports() + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket) + self.vhost_core_list = self.cores_list[0:9] + self.virtio0_core_list = self.cores_list[10:12] + self.virtio1_core_list = self.cores_list[12:14] + self.vhost_user = self.dut.new_session(suite="vhost-user") + self.virtio_user0 = self.dut.new_session(suite="virtio-user0") + self.virtio_user1 = self.dut.new_session(suite="virtio-user1") + self.pdump_user = self.dut.new_session(suite="pdump-user") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0) + self.virtio_user1_pmd = PmdOutput(self.dut, self.virtio_user1) + self.testpmd_name = self.dut.apps_name['test-pmd'].split("/")[-1] + + def set_up(self): + """ + run before each test case. + """ + self.nopci = True + self.queue_num = 1 + self.dut.send_expect("rm -rf ./vhost-net*", "#") + self.dut.send_expect("rm -rf %s" % self.dump_virtio_pcap, "#") + self.dut.send_expect("rm -rf %s" % self.dump_vhost_pcap, "#") + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + + def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num, allow_diff_socket=False): + """ + get and bind cbdma ports into DPDK driver + """ + self.all_cbdma_list = [] + self.cbdma_list = [] + self.cbdma_str = "" + out = self.dut.send_expect( + "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30 + ) + device_info = out.split("\n") + for device in device_info: + pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device) + if pci_info is not None: + dev_info = pci_info.group(1) + # the numa id of ioat dev, only add the device which on same socket with nic dev + bus = int(dev_info[5:7], base=16) + if bus >= 128: + cur_socket = 1 + else: + cur_socket = 0 + if allow_diff_socket: + self.all_cbdma_list.append(pci_info.group(1)) + else: + if self.ports_socket == cur_socket: + self.all_cbdma_list.append(pci_info.group(1)) + self.verify( + len(self.all_cbdma_list) >= cbdma_num, "There no enough cbdma device" + ) + self.cbdma_list = self.all_cbdma_list[0:cbdma_num] + self.cbdma_str = " ".join(self.cbdma_list) + self.dut.send_expect( + "./usertools/dpdk-devbind.py --force --bind=%s %s" + % (self.drivername, self.cbdma_str), + "# ", + 60, + ) + + @staticmethod + def generate_dms_param(queues): + das_list = [] + for i in range(queues): + das_list.append("txq{}".format(i)) + das_param = "[{}]".format(";".join(das_list)) + return das_param + + @staticmethod + def generate_lcore_dma_param(cbdma_list, core_list): + group_num = int(len(cbdma_list) / len(core_list)) + lcore_dma_list = [] + if len(cbdma_list) == 1: + for core in core_list: + lcore_dma_list.append("lcore{}@{}".format(core, cbdma_list[0])) + elif len(core_list) == 1: + for cbdma in 
cbdma_list: + lcore_dma_list.append("lcore{}@{}".format(core_list[0], cbdma)) + else: + for cbdma in cbdma_list: + core_list_index = int(cbdma_list.index(cbdma) / group_num) + lcore_dma_list.append( + "lcore{}@{}".format(core_list[core_list_index], cbdma) + ) + lcore_dma_param = "[{}]".format(",".join(lcore_dma_list)) + return lcore_dma_param + + def bind_cbdma_device_to_kernel(self): + self.dut.send_expect("modprobe ioatdma", "# ") + self.dut.send_expect( + "./usertools/dpdk-devbind.py -u %s" % self.cbdma_str, "# ", 30 + ) + self.dut.send_expect( + "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" % self.cbdma_str, + "# ", + 60, + ) + + @property + def check_2M_env(self): + out = self.dut.send_expect( + "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " + ) + return True if out == "2048" else False + + def start_vhost_testpmd( + self, cores, eal_param="", param="", ports="", iova_mode="" + ): + if iova_mode: + eal_param += " --iova=" + iova_mode + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + + def start_virtio_testpmd_with_vhost_net1(self, cores, eal_param="", param=""): + """ + launch the testpmd as virtio with vhost_net1 + """ + if self.check_2M_env: + eal_param += " --single-file-segments" + self.virtio_user1_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="virtio-user1", + fixed_prefix=True, + ) + self.virtio_user1_pmd.execute_cmd("set fwd rxonly") + self.virtio_user1_pmd.execute_cmd("start") + + def start_virtio_testpmd_with_vhost_net0(self, cores, eal_param="", param=""): + """ + launch the testpmd as virtio with vhost_net0 + """ + if self.check_2M_env: + eal_param += " --single-file-segments" + self.virtio_user0_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="virtio-user0", + fixed_prefix=True, + ) + + def send_251_960byte_and_32_64byte_pkts(self): + """ + send 251 960byte and 32 64byte length packets from virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("set txpkts 64,128,256,512") + self.virtio_user0_pmd.execute_cmd("start tx_first 27") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set burst 32") + self.virtio_user0_pmd.execute_cmd("start tx_first 7") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set txpkts 64") + self.virtio_user0_pmd.execute_cmd("start tx_first 1") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + out = self.vhost_user_pmd.execute_cmd("show port stats all") + self.logger.info(out) + + def send_27_4640byte_and_224_64byte_pkts(self): + """ + send 54 4640byte and 448 64byte length packets from virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("set txpkts 64,256,2000,64,256,2000") + self.virtio_user0_pmd.execute_cmd("start tx_first 27") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set burst 32") + self.virtio_user0_pmd.execute_cmd("set txpkts 64") + self.virtio_user0_pmd.execute_cmd("start tx_first 7") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + out = self.vhost_user_pmd.execute_cmd("show port stats all") + self.logger.info(out) + + def send_224_64byte_and_27_4640byte_pkts(self): + """ + send 54 4640byte and 448 64byte length packets from 
virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 32") + self.virtio_user0_pmd.execute_cmd("set txpkts 64") + self.virtio_user0_pmd.execute_cmd("start tx_first 7") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("set txpkts 64,256,2000,64,256,2000") + self.virtio_user0_pmd.execute_cmd("start tx_first 27") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + out = self.vhost_user_pmd.execute_cmd("show port stats all") + self.logger.info(out) + + def send_251_64byte_and_32_8000byte_pkts(self): + """ + send 54 4640byte and 448 64byte length packets from virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("start tx_first 27") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set burst 32") + self.virtio_user0_pmd.execute_cmd("start tx_first 7") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set txpkts 2000,2000,2000,2000") + self.virtio_user0_pmd.execute_cmd("start tx_first 1") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + out = self.vhost_user_pmd.execute_cmd("show port stats all") + self.logger.info(out) + + def send_1_64byte_pkts(self): + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("start tx_first 1") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + out = self.vhost_user_pmd.execute_cmd("show port stats all") + self.logger.info(out) + + def clear_virtio_user1_stats(self): + self.virtio_user1_pmd.execute_cmd("stop") + self.virtio_user1_pmd.execute_cmd("clear port stats all") + self.virtio_user1_pmd.execute_cmd("start") + out = self.virtio_user1_pmd.execute_cmd("show port stats all") + self.logger.info(out) + + def start_pdump_to_capture_pkt(self): + """ + launch pdump app with dump_port and file_prefix + the pdump app should start after testpmd started + if dump the vhost-testpmd, the vhost-testpmd should started before launch pdump + if dump the virtio-testpmd, the virtio-testpmd should started before launch pdump + """ + eal_params = self.dut.create_eal_parameters( + cores="Default", prefix="virtio-user1", fixed_prefix=True + ) + command_line = ( + self.app_pdump + + " %s -v -- " + + "--pdump 'device_id=net_virtio_user1,queue=*,rx-dev=%s,mbuf-size=8000'" + ) + self.pdump_user.send_expect( + command_line % (eal_params, self.dump_virtio_pcap), "Port" + ) + + def check_virtio_user1_stats(self, check_dict): + """ + check the virtio-user1 show port stats + """ + out = self.virtio_user1_pmd.execute_cmd("show port stats all") + self.logger.info(out) + rx_packets = re.search("RX-packets:\s*(\d*)", out) + rx_bytes = re.search("RX-bytes:\s*(\d*)", out) + rx_num = int(rx_packets.group(1)) + byte_num = int(rx_bytes.group(1)) + packet_count = 0 + byte_count = 0 + for key, value in check_dict.items(): + packet_count += value + byte_count += key * value + self.verify( + rx_num == packet_count, + "receive pakcet number: {} is not equal as send:{}".format( + rx_num, packet_count + ), + ) + self.verify( + byte_num == byte_count, + "receive pakcet byte:{} is not equal as send:{}".format( + byte_num, byte_count + ), + ) + + def check_packet_payload_valid(self, check_dict): + """ + check the payload is valid + """ + self.pdump_user.send_expect("^c", "# ", 60) + self.dut.session.copy_file_from( + src=self.dump_virtio_pcap, 
dst=self.dump_virtio_pcap + ) + pkt = Packet() + pkts = pkt.read_pcapfile(self.dump_virtio_pcap) + for key, value in check_dict.items(): + count = 0 + for i in range(len(pkts)): + if len(pkts[i]) == key: + count += 1 + self.verify( + value == count, + "pdump file: {} have not include enough packets {}".format(count, key), + ) + + def check_vhost_user_testpmd_logs(self): + out = self.vhost_user.get_session_before(timeout=30) + check_logs = [ + "DMA completion failure on channel", + "DMA copy failed for channel", + ] + for check_log in check_logs: + self.verify(check_log not in out, "Vhost-user testpmd Exception") + + def quit_all_testpmd(self): + self.vhost_user_pmd.quit() + self.virtio_user0_pmd.quit() + self.virtio_user1_pmd.quit() + self.pdump_user.send_expect("^c", "# ", 60) + + def test_split_ring_non_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 1: split virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(2) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio1_param = ( + " --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + ) + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio0_param = ( + " --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + ) + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_960byte_and_32_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + self.check_vhost_user_testpmd_logs() + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_251_960byte_and_32_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + self.check_vhost_user_testpmd_logs() + + def test_split_ring_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 2: split virtqueue vm2vm mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(1) + dmas = self.generate_dms_param(2) + 
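# dmas=[txq0;txq1] is applied to both vhost ports, and the single CBDMA device bound above is mapped to one vhost worker lcore +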
lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list[0:1], core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_ring_inorder_non_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 3: split virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(5) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = 
"--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_ring_vectorized_path_multi_queues_with_cbdma(self): + """ + Test Case 4: split virtqueue vm2vm vectorized path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0],dma_ring_size=2048'" + + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048'" + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_ring_inorder_mergeable_path_multi_queues_test_non_indirect_descriptor_with_cbdma( + self, + ): + """ + Test Case 5: Split virtqueue vm2vm inorder mergeable path test non-indirect descriptor with cbdma enable + """ + 
self.get_cbdma_ports_info_and_bind_to_dpdk(4) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=256 --rxd=256 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 2} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.virtio_user1_pmd.quit() + self.virtio_user0_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 2} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_ring_inorder_mergeable_path_multi_queues_test_indirect_descriptor_with_cbdma( + self, + ): + """ + Test Case 6: Split virtqueue vm2vm mergeable path test indirect descriptor with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(4) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=256 --rxd=256 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256" + 
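# virtio-user1 uses the split mergeable path (packed_vq=0, mrg_rxbuf=1, in_order=0) with 256-entry rings so that large packets exercise indirect descriptors +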
virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.virtio_user1_pmd.quit() + self.virtio_user0_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_non_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 7: packed virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(2) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + 
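# relaunch only the vhost side with --iova=pa; both virtio-user instances stay up and virtio-user1 stats are cleared before re-checking +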
self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 8: packed virtqueue vm2vm mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(1) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list[0:1], core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_inorder_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 9: packed virtqueue vm2vm inorder mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(5) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + 
cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_inorder_non_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 10: packed virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + dmas = self.generate_dms_param(1) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + 
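# repeat the payload check in PA mode: clear virtio-user1 counters, restart vhost with --iova=pa, then resend the same packets +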
self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_inorder_vectorized_rx_path_multi_queues_with_cbdma(self): + """ + Test Case 11: packed virtqueue vm2vm vectorized-rx path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:3] + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0],dma_ring_size=2048'" + + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048'" + ) + vhost_param = ( + " --nb-cores=2 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_inorder_vectorized_path_multi_queues_check_with_ring_size_is_not_power_of_2_queues_with_cbdma( + self, + ): + """ + Test Case 12: packed virtqueue vm2vm vectorized path multi-queues payload check with ring size is not power of 2 and cbdma enabled + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 
--no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_vectorized_tx_path_multi_queues_test_indirect_descriptor_with_cbdma( + self, + ): + """ + Test Case 13: packed virtqueue vm2vm vectorized-tx path multi-queues test indirect descriptor with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=256 --rxd=256 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + 
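# send 64-byte packets plus 8000-byte packets (4 x 2000-byte segments); with 256-entry rings the 8000-byte packets exercise indirect descriptors in this vectorized-tx case +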
self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.virtio_user1_pmd.quit() + self.virtio_user0_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_vectorized_tx_path_test_batch_processing_with_cbdma(self): + """ + Test Case 14: packed virtqueue vm2vm vectorized-tx path test batch processing with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + dmas = self.generate_dms_param(1) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=1,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=256 --rxd=256 --txq=1 --rxq=1 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256" + virtio1_param = " --nb-cores=1 --rxq=1 --txq=1 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256" + virtio0_param = " --nb-cores=1 --rxq=1 --txq=1 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_1_64byte_pkts() + check_dict = {64: 1} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def close_all_session(self): + if getattr(self, "vhost_user", None): + self.dut.close_session(self.vhost_user) + if getattr(self, "virtio-user0", None): + self.dut.close_session(self.virtio_user0) + if getattr(self, "virtio-user1", None): + self.dut.close_session(self.virtio_user1) + if getattr(self, "pdump_session", None): + self.dut.close_session(self.pdump_user) + + def tear_down(self): + """ + Run after each test case. + """ + self.quit_all_testpmd() + self.dut.kill_all() + self.bind_cbdma_device_to_kernel() + + def tear_down_all(self): + """ + Run after each test suite. + """ + self.close_all_session()