From patchwork Thu Aug 11 06:16:31 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 114822
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 1/3] test_plans/vhost_cbdma_test_plan: modify testplan to test virtio dequeue
Date: Thu, 11 Aug 2022 02:16:31 -0400
Message-Id: <20220811061631.386610-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

Since DPDK 22.07, virtio supports async dequeue for the split and packed ring paths, so modify the vhost_cbdma test plan to test the split and packed ring async dequeue feature.

Signed-off-by: Wei Ling
---
 test_plans/vhost_cbdma_test_plan.rst | 1353 ++++++++------------------
 1 file changed, 423 insertions(+), 930 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index 08820a9c..4a0ccc43 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -1,9 +1,9 @@
.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2022 Intel Corporation

-==========================================================
-DMA-accelerated Tx operations for vhost-user PMD test plan
-==========================================================
+=============================================================
+DMA-accelerated Tx/Rx operations for vhost-user PMD test plan
+=============================================================

Description
===========

@@ -13,8 +13,6 @@
data path with CBDMA driver in the PVP topology environment with testpmd.
CBDMA is a kind of DMA engine. The vhost asynchronous data path leverages DMA devices
to offload memory copies from the CPU, and it is implemented in an asynchronous way.
-Linux kernel and DPDK provide CBDMA driver, no matter which driver is used,
-DPDK DMA library is used in data-path to offload copies to CBDMA, and the only difference is which driver configures CBDMA.
It enables applications, like OVS, to save CPU cycles and hide memory copy overhead, thus achieving higher throughput.
Vhost doesn't manage DMA devices; applications, like OVS, need to manage and configure CBDMA devices.
Applications need to tell vhost what CBDMA devices to use in every data path function call.
@@ -23,10 +21,17 @@
function modules, not limited in vhost. In addition, vhost supports M:N mapping between vrings
and DMA virtual channels. Specifically, one vring can use multiple different DMA channels
and one DMA channel can be shared by multiple vrings at the same time.

+From DPDK 22.07, this feature is implemented on both the split and packed ring enqueue and dequeue data paths.
+
Note:
1. When CBDMA devices are bound to the vfio driver, VA mode is the default and recommended.
For PA mode, page by page mapping may exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
2. A DPDK local patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd.
+This patch supports binding a DMA device to each lcore in testpmd. To enable this feature, add
+"--lcore-dma=[fwd-lcore-id@dma-bdf,...]" to testpmd. After setting this parameter for all forwarding cores,
+vhost will use the DMA devices belonging to each lcore to offload copies.
+
+3. By default, the Intel XL710 NIC does not enable the ETH RSS IPv4/TCP flow type, so `port config all rss ipv4-tcp` needs to be executed in testpmd.

For more about the dpdk-testpmd sample, please refer to the DPDK documents:
https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html

@@ -38,38 +43,38 @@
Prerequisites
=============

Topology
--------
    Test flow: TG-->NIC-->Vhost-->Virtio-->Vhost-->NIC-->TG

Hardware
--------
    Supported NICs: ALL

Software
--------
    Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz

General set up
--------------
1. Compile DPDK::

    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
    # ninja -C <dpdk build dir> -j 110
    For example:
    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
    ninja -C x86_64-native-linuxapp-gcc -j 110
2. Get the PCI device ID and DMA device ID of the DUT, for example, 0000:18:00.0 is the PCI device ID and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::

    # ./usertools/dpdk-devbind.py -s

    Network devices using kernel driver
    ===================================
    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci

    DMA devices using kernel driver
    ===============================
    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci

Test case
=========

@@ -78,1115 +83,603 @@
Common steps
------------
1. Bind 1 NIC port and CBDMA devices to vfio-pci::

    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>

    For example, bind 1 NIC port and 2 CBDMA devices::
    ./usertools/dpdk-devbind.py -b vfio-pci 0000:18:00.0
    ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1

-2. Send imix packets [64,1518] to NIC by traffic generator::
+2. Send tcp imix packets [64,1518] to the NIC by traffic generator::

    The imix packets include packet sizes [64, 128, 256, 512, 1024, 1518], and the format of the packets is as follows.
    +-------------+-------------+-------------+-------------+
    | MAC         | MAC         | IPV4        | IPV4        |
    | Src address | Dst address | Src address | Dst address |
    |-------------|-------------|-------------|-------------|
    | Random MAC  | Virtio mac  | Random IP   | Random IP   |
    +-------------+-------------+-------------+-------------+
    All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10.

Test Case 1: PVP split ring all path multi-queues vhost async operation with 1 to 1 mapping between vrings and CBDMA virtual channels
--------------------------------------------------------------------------------------------------------------------------------------
This case tests that split ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations
and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as VA and PA mode have been tested.

-Test Case 1: PVP split ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
-----------------------------------------------------------------------------------------------------------------------------
-This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with 1 core and 1 queue
-when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1.
-Both iova as VA and PA mode have been tested.
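The following sketch is illustrative and not part of the original plan: with 2 queue pairs there are 4 vrings in total (txq0, txq1, rxq0 and rxq1), so the 4 CBDMA devices used below provide one dedicated virtual channel per vring. The exact vring-to-channel assignment is made by vhost at runtime, but conceptually the 1:1 mapping looks like::

    txq0 <-> 0000:00:04.0
    txq1 <-> 0000:00:04.1
    rxq0 <-> 0000:00:04.2
    rxq1 <-> 0000:00:04.3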
+1. Bind 1 NIC port and 4 CBDMA devices to vfio-pci, as common step 1.

2. Launch vhost by below command::

- # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
- --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
- --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
- --lcore-dma=[lcore11@0000:00:04.0]
- testpmd> set fwd mac
- testpmd> start
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+ -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+ --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3]
+ testpmd> port config all rss ipv4-tcp
+ testpmd> set fwd csum
+ testpmd> start

3. Launch virtio-user with inorder mergeable path::

- # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1 \
- -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
- testpmd> set fwd mac
- testpmd> start
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=1,queues=2 \
+ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+ testpmd> set fwd csum
+ testpmd> start

-4. Send imix packets [64,1518] from packet generator as common step2, and then check the throughput can get expected data::
+4. Send tcp imix packets [64,1518] from packet generator as common step 2, and then check the throughput can get expected data::

    testpmd> show port stats all

5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log (see the illustrative per-queue output sketch after step 6)::

    testpmd> stop

-6. Restart vhost port and send imix pkts again, then check the throuhput can get expected data::
+6. Restart vhost port and send imix packets again, then check the throughput can get expected data::

    testpmd> start
    testpmd> show port stats all
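The block below is an illustrative sketch of the per-queue check in step 5 (the counter values are invented for illustration): when the vhost port is stopped, testpmd prints per-queue forwarding statistics, and every queue should show non-zero packets in both the RX and TX directions::

    ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
    RX-packets: 279717        TX-packets: 279717        TX-dropped: 0
    ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1 -------
    RX-packets: 280131        TX-packets: 280131        TX-dropped: 0

7.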
Relaunch virtio-user with mergeable path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=0,queues=2 \ + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=1,queues=2 \ + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1 \ - -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=0,queues=2 \ + -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start .. note:: - Rx offload(s) are requested when using split ring non-mergeable path. So add the parameter "--enable-hw-vlan-strip". + Rx offload(s) are requested when using split ring non-mergeable path. So add the parameter "--enable-hw-vlan-strip". 10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=2 \ + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 11. Quit all testpmd and relaunch vhost with iova=pa by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ - --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0] - testpmd> set fwd mac - testpmd> start - -12. Rerun steps 3-11. 
- -Test Case 2: PVP split ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels ----------------------------------------------------------------------------------------------------------------------------------------- -This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues -when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1. -Both iova as VA and PA mode have been tested. - -1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. - -2. Launch vhost by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7] - testpmd> set fwd mac - testpmd> start - -3. Launch virtio-user with inorder mergeable path:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: - - testpmd> show port stats all - -5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: - - testpmd> stop - -6. Restart vhost port and send imix pkts again, then check the throuhput can get expected data:: - - testpmd> start - testpmd> show port stats all - -7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -9. 
Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \ + --iova=pa -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3] + testpmd> port config all rss ipv4-tcp + testpmd> set fwd csum + testpmd> start - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8 \ - -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -11. Quit all testpmd and relaunch vhost with iova=pa by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7] - testpmd> set fwd mac - testpmd> start - -12. Rerun step 7. +12. Rerun steps 3-6. -Test Case 3: PVP split ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels ----------------------------------------------------------------------------------------------------------------------------------------- -This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues -when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:1. -Both iova as VA and PA mode have been tested. +Test Case 2: PVP split ring all path multi-queues vhost async operations with M to 1 mapping between vrings and CBDMA virtual channels +-------------------------------------------------------------------------------------------------------------------------------------- +This case tests split ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations +and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as VA and PA mode have been tested. -1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. +1. Bind 1 NIC port and 4 CBDMA devices to vfio-pci, as common step 1. 2. 
Launch vhost by below command::

- # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
- --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
- --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
- --lcore-dma=[lcore11@0000:00:04.0]
- testpmd> set fwd mac
- testpmd> start
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+ --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
+ --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3]
+ testpmd> port config all rss ipv4-tcp
+ testpmd> set fwd csum
+ testpmd> start

3. Launch virtio-user with inorder mergeable path::

- # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8 \
- -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
- testpmd> set fwd mac
- testpmd> start
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8 \
+ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+ testpmd> set fwd csum
+ testpmd> start

4. Send tcp imix packets [64,1518] from packet generator as common step 2, and then check the throughput can get expected data::

    testpmd> show port stats all

5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::

    testpmd> stop

6. Restart vhost port and send imix packets again, then check the throughput can get expected data::

    testpmd> start
    testpmd> show port stats all

7. Relaunch virtio-user with mergeable path, then repeat step 4-6::

- # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8 \
- -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
- testpmd> set fwd mac
- testpmd> start
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8 \
+ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+ testpmd> set fwd csum
+ testpmd> start

8.
Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8 \ - -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8 \ + -- -i --enable-hw-vlan-strip --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 11. Quit all testpmd and relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0] - testpmd> set fwd mac - testpmd> start - -12. Rerun steps 4-6. - -13. 
Quit all testpmd and relaunch vhost by below command:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0] + testpmd> port config all rss ipv4-tcp + testpmd> set fwd csum + testpmd> start - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] - testpmd> set fwd mac - testpmd> start +12. Rerun steps 7. -14. Rerun steps 7. - -15. Quit all testpmd and relaunch vhost with iova=pa by below command:: +13. Quit all testpmd and relaunch vhost with iova=pa by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] + testpmd> port config all rss ipv4-tcp + testpmd> set fwd csum + testpmd> start -16. Rerun steps 7. +14. Rerun steps 8. -Test Case 4: PVP split ring all path vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels ---------------------------------------------------------------------------------------------------------------------------- -This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path when vhost uses -the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:N. -Both iova as VA and PA mode have been tested. +Test Case 3: PVP split ring dynamic queue number vhost async operations with M to N mapping between vrings and CBDMA virtual channels +------------------------------------------------------------------------------------------------------------------------------------- +This case tests if the vhost-user async operation with cbdma channels can work normally when the queue number of split ring dynamic change. Both iova as VA and PA mode have been tested. 1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. -2. 
Launch vhost by below command:: +2. Launch vhost by below command(1:N mapping):: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] - testpmd> set fwd mac - testpmd> start + #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ + --iova=va -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7] + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start -3. Launch virtio-user with inorder mergeable path:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start +3. Launch virtio-user by below command:: -4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: + #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start - testpmd> show port stats all +4. Send tcp imix packets[64,1518] from packet generator with random ip, check perforamnce can get target. 5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: - testpmd> stop - -6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: - - testpmd> start - testpmd> show port stats all - -7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -9. 
Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1 \ - -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -11. Quit all testpmd and relaunch vhost with iova=pa by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ - --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] - testpmd> set fwd mac - testpmd> start - -12. Rerun steps 9. - -Test Case 5: PVP split ring all path multi-queues vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels ----------------------------------------------------------------------------------------------------------------------------------------- -This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues -when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:N. -Both iova as VA and PA mode have been tested. - -1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. - -2. Launch vhost by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] - testpmd> set fwd mac - testpmd> start + testpmd>stop -3. Launch virtio-user with inorder mergeable path:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start +6. Quit and relaunch vhost without CBDMA:: -4. 
Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1' \ + --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start - testpmd> show port stats all - -5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: +7. Rerun step 4-5. - testpmd> stop +8. Quit and relaunch vhost by below command:: -6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: - - testpmd> start - testpmd> show port stats all - -7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]' \ + --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3] + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start +9. Rerun step 4-5. -8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: +10. Quit and relaunch vhost with M:N mapping between vrings and CBDMA virtual channels:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=va -- -i --nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7] + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start -9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: +11. Rerun step 4-5. - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8 \ - -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start +12. Quit and relaunch vhost with diff mapping between vrings and CBDMA virtual channels:: -10. 
Relaunch virtio-user with vectorized path, then repeat step 4-6:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ + --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore13@0000:00:04.5,lcore13@0000:00:04.6,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore14@0000:00:04.6,lcore14@0000:00:04.7] + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start +13. Start vhost port and rerun step 4-5. -11. Quit all testpmd and relaunch vhost by below command:: +14. Quit and relaunch virtio-user by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,server=1 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start -12. Rerun steps 8. +15. Rerun step 4-5. -13. Quit all testpmd and relaunch vhost with iova=pa by below command:: +16. 
Quit and relaunch vhost with iova=pa by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ + --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2] + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start -14. Rerun steps 10. +17. Rerun step 4-5. -Test Case 6: PVP split ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels +Test Case 4: PVP packed ring all path multi-queues vhost async operations with 1 to 1 mapping between vrings and CBDMA virtual channels --------------------------------------------------------------------------------------------------------------------------------------- -This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring when vhost uses the asynchronous enqueue operations -and if the vhost-user can work well when the queue number dynamic change. Both iova as VA and PA mode have been tested. -Both iova as VA and PA mode have been tested. +This case tests packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations +and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as VA and PA mode have been tested. -1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. +1. Bind 1 NIC port and 4 CBDMA devices to vfio-pci, as common step 1. 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1' \ - --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -3. Launch virtio-user by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: - - testpmd> show port stats all - -5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log. - - testpmd> stop - -6. 
Restart vhost port and send imix pkts again, then check the throught can get expected data:: - - testpmd> start - testpmd> show port stats all - -7. Quit and relaunch vhost with 1:1 mapping between vrings and CBDMA virtual channels, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \ - --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3] - testpmd> set fwd mac - testpmd> start - -8. Quit and relaunch vhost with M:N(1:N;M# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \ - --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.7,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore14@0000:00:04.2,lcore14@0000:00:04.3,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore15@0000:00:04.0,lcore15@0000:00:04.1,lcore15@0000:00:04.2,lcore15@0000:00:04.3,lcore15@0000:00:04.4,lcore15@0000:00:04.5,lcore15@0000:00:04.6,lcore15@0000:00:04.7] - testpmd> set fwd mac - testpmd> start - -9. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and CBDMA virtual channels, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ - --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2] - testpmd> set fwd mac - testpmd> start - -10. Quit and relaunch vhost with iova=pa by below command, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2] - testpmd> set fwd mac - testpmd> start - -Test Case 7: PVP packed ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels ----------------------------------------------------------------------------------------------------------------------------- -This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with 1 core and 1 queue -when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1. -Both iova as VA and PA mode have been tested. 
1. Bind 1 NIC port and 1 CBDMA device to vfio-pci, as common step 1.

2. Launch vhost by below command::

- # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
- --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
- --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
- --lcore-dma=[lcore11@0000:00:04.0]
- testpmd> set fwd mac
- testpmd> start
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+ -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+ --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \
+ --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3]
+ testpmd> port config all rss ipv4-tcp
+ testpmd> set fwd csum
+ testpmd> start

3. Launch virtio-user with inorder mergeable path::

- # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
- -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
- testpmd> set fwd mac
- testpmd> start
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=1,queues=2,packed_vq=1 \
+ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+ testpmd> set fwd csum
+ testpmd> start

4. Send tcp imix packets [64,1518] from packet generator as common step 2, and then check the throughput can get expected data::

    testpmd> show port stats all

5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::

    testpmd> stop

6. Restart vhost port and send imix packets again, then check the throughput can get expected data::

    testpmd> start
    testpmd> show port stats all

7. Relaunch virtio-user with mergeable path, then repeat step 4-6::

- # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
- --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
- -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
- testpmd> set fwd mac
- testpmd> start
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=0,queues=2,packed_vq=1 \
+ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024
+ testpmd> set fwd csum
+ testpmd> start

8.
Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=1,queues=2,packed_vq=1 \ + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=0,queues=2,packed_vq=1 \ + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=2 \ + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start .. note:: - If building and running environment support (AVX512 || NEON) && in-order feature is negotiated && Rx mergeable - is not negotiated && TCP_LRO Rx offloading is disabled && vectorized option enabled, packed virtqueue vectorized Rx path will be selected. + If building and running environment support (AVX512 || NEON) && in-order feature is negotiated && Rx mergeable + is not negotiated && TCP_LRO Rx offloading is disabled && vectorized option enabled, packed virtqueue vectorized Rx path will be selected. 11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=2,queue_size=1025 \ + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1025 --rxd=1025 + testpmd> set fwd csum + testpmd> start 12. 
Quit all testpmd and relaunch vhost with iova=pa by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ - --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0] - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \ + --iova=pa -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3] + testpmd> port config all rss ipv4-tcp + testpmd> set fwd csum + testpmd> start -12. Rerun steps 3-6. - -Test Case 8: PVP packed ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels ------------------------------------------------------------------------------------------------------------------------------------------ -This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues -when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1. -Both iova as VA and PA mode have been tested. - -1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. - -2. Launch vhost by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7] - testpmd> set fwd mac - testpmd> start - -3. Launch virtio-user with inorder mergeable path:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: - - testpmd> show port stats all - -5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: - - testpmd> stop - -6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: - - testpmd> start - testpmd> show port stats all - -7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -8. 
Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025 - testpmd> set fwd mac - testpmd> start - -12. Quit all testpmd and relaunch vhost with iova=pa by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7] - testpmd> set fwd mac - testpmd> start - -13. Rerun step 7. +13. Rerun steps 3-6. -Test Case 9: PVP packed ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels ------------------------------------------------------------------------------------------------------------------------------------------ -This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues -when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:1. -Both iova as VA and PA mode have been tested. +Test Case 5: PVP packed ring all path multi-queues vhost async operations with M to 1 mapping between vrings and CBDMA virtual channels +--------------------------------------------------------------------------------------------------------------------------------------- +This case tests packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations +and the mapping between vrings and CBDMA virtual channels is M:1. 
Both iova as VA and PA mode have been tested. -1. Bind 1 NIC port and 1 CBDMA device to vfio-pci, as common step 1. +1. Bind 1 NIC port and 4 CBDMA devices to vfio-pci, as common step 1. 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0] - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3] + testpmd> port config all rss ipv4-tcp + testpmd> set fwd csum + testpmd> start 3. Launch virtio-user with inorder mergeable path:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start -4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: +4. Send TCP imix packets [64,1518] from packet generator as common step 2, and then check the throughput can get expected data:: - testpmd> show port stats all + testpmd> show port stats all 5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: - testpmd> stop + testpmd> stop -6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: +6. Restart vhost port and send imix packets again, then check the throughput can get expected data:: - testpmd> start - testpmd> show port stats all + testpmd> start + testpmd> show port stats all 7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 8. 
Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start 11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025 - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1025 --rxd=1025 + testpmd> set fwd csum + testpmd> start 12. Quit all testpmd and relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0] - testpmd> set fwd mac - testpmd> start - -13. Rerun steps 3-6. - -14. 
Quit all testpmd and relaunch vhost by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] - testpmd> set fwd mac - testpmd> start - -15. Rerun steps 7. - -16. Quit all testpmd and relaunch vhost with iova=pa by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] - testpmd> set fwd mac - testpmd> start - -17. Rerun steps 8. - -Test Case 10: PVP packed ring all path vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels ------------------------------------------------------------------------------------------------------------------------------ -This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path when vhost uses -the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:N. -Both iova as VA and PA mode have been tested. - -1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. - -2. Launch vhost by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] - testpmd> set fwd mac - testpmd> start - -3. Launch virtio-user with inorder mergeable path:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: - - testpmd> show port stats all - -5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: - - testpmd> stop - -6. 
Restart vhost port and send imix pkts again, then check the throught can get expected data:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0] + testpmd> port config all rss ipv4-tcp + testpmd> set fwd csum + testpmd> start - testpmd> start - testpmd> show port stats all - -7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 3:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025 - testpmd> set fwd mac - testpmd> start +13. Rerun steps 7. -12. Quit all testpmd and relaunch vhost with iova=pa by below command:: +14. 
Quit all testpmd and relaunch vhost with iova=pa by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ - --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] + testpmd> port config all rss ipv4-tcp + testpmd> set fwd csum + testpmd> start -13. Rerun steps 9. +15. Rerun step 8. -Test Case 11: PVP packed ring all path multi-queues vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels ------------------------------------------------------------------------------------------------------------------------------------------- -This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues -when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:N. -Both iova as VA and PA mode have been tested. +Test Case 6: PVP packed ring dynamic queue number vhost async operations with M to N mapping between vrings and CBDMA virtual channels +-------------------------------------------------------------------------------------------------------------------------------------- +This case tests if the vhost-user async operation with CBDMA channels can work normally when the queue number of the packed ring changes dynamically. Both iova as VA and PA mode have been tested. 1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. -2. Launch vhost by below command:: +2. 
Launch vhost by below command (1:N mapping):: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ + --iova=va -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7] + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start -3. Launch virtio-user with inorder mergeable path:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start +3. Launch virtio-user by below command:: -4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,server=1,packed_vq=1 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start - testpmd> show port stats all +4. Send TCP imix packets [64,1518] from packet generator with random IP, then check that the performance can reach the target. 5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: - testpmd> stop - -6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: - - testpmd> start - testpmd> show port stats all - -7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -9. 
Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: + testpmd>stop - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8, \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start +6. Quit and relaunch vhost without CBDMA:: -11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \ - -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025 - testpmd> set fwd mac - testpmd> start - -12. Quit all testpmd and relaunch vhost by below command:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1' \ + --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] - testpmd> set fwd mac - testpmd> start +7. Rerun step 4-5. -13. Rerun steps 7. - -14. Quit all testpmd and relaunch vhost with iova=pa by below command:: +8. 
Quit and relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ - --iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]' \ + --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3] + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start -15. Rerun steps 9. - -Test Case 12: PVP packed ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels ------------------------------------------------------------------------------------------------------------------------------------------ -This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring when vhost uses the asynchronous enqueue operations -and if the vhost-user can work well when the queue number dynamic change. Both iova as VA and PA mode have been tested. -Both iova as VA and PA mode have been tested. - -1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. - -2. Launch vhost by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1' \ - --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -3. Launch virtio-user by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,server=1,packed_vq=1 \ - -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> set fwd mac - testpmd> start - -4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: - - testpmd> show port stats all - -5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: +9. Rerun step 4-5. - testpmd> stop +10. Quit and relaunch vhost with M:N mapping between vrings and CBDMA virtual channels:: -6. 
Restart vhost port and send imix pkts again, then check the throught can get expected data:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=va -- -i --nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7] + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start - testpmd> start - testpmd> show port stats all +11. Rerun step 4-5. -7. Quit and relaunch vhost with 1:1 mapping between vrings and CBDMA virtual channels, then repeat step 4-6:: +12. Quit and relaunch vhost with a different mapping between vrings and CBDMA virtual channels:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \ - --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3] - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ + --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore13@0000:00:04.5,lcore13@0000:00:04.6,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore14@0000:00:04.6,lcore14@0000:00:04.7] + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start -9. Quit and relaunch vhost with M:N(1:N;M<N) mapping between vrings and CBDMA virtual channels, then repeat step 4-6:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \ - --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.7,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore14@0000:00:04.2,lcore14@0000:00:04.3,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore15@0000:00:04.0,lcore15@0000:00:04.1,lcore15@0000:00:04.2,lcore15@0000:00:04.3,lcore15@0000:00:04.4,lcore15@0000:00:04.5,lcore15@0000:00:04.6,lcore15@0000:00:04.7] - testpmd> set fwd mac - testpmd> start +14. Quit and relaunch virtio-user by below command:: -11. 
Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and CBDMA virtual channels, then repeat step 4-6:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/root/dpdk/vhost_net0,mrg_rxbuf=1,in_order=0,packed=on,queues=8,server=1 \ + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd>set fwd csum + testpmd>start - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ - --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2] - testpmd> set fwd mac - testpmd> start +15. Rerun step 4-5. -13. Quit and relaunch vhost with iova=pa by below command:: +16. Quit and relaunch vhost with iova=pa by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ - --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ - --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2] - testpmd> set fwd mac - testpmd> start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \ + -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ + --vdev 'net_vhost0,iface=/root/dpdk/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ + --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2] + testpmd> port config all rss ipv4-tcp + testpmd>set fwd csum + testpmd>start -14. Rerun step 4-6. +17. Rerun step 4-5. 
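A closing note on the test plan: every ``--lcore-dma=[...]`` value above follows the ``lcore<id>@<DMA-BDF>`` syntax, and the 1:1, M:1, 1:N and M:N cases differ only in how forwarding lcores and CBDMA devices are grouped. A minimal Python sketch of that grouping, mirroring the ``generate_lcore_dma_param()`` helper which the companion testsuite patch removes in favour of explicit per-case strings (the sketch assumes the device count divides evenly among the lcores), is::

    # Illustrative sketch only: render the --lcore-dma testpmd argument
    # from a CBDMA device list and a forwarding-lcore list.
    def generate_lcore_dma_param(cbdma_list, core_list):
        if len(cbdma_list) == 1:
            # one DMA device shared by every lcore
            pairs = [(core, cbdma_list[0]) for core in core_list]
        elif len(core_list) == 1:
            # one lcore driving every DMA device (1:N)
            pairs = [(core_list[0], dev) for dev in cbdma_list]
        else:
            # split the devices into equal groups, one group per lcore
            group = len(cbdma_list) // len(core_list)
            pairs = [(core_list[i // group], dev) for i, dev in enumerate(cbdma_list)]
        return "[%s]" % ",".join("lcore%s@%s" % (c, d) for c, d in pairs)

    # Reproduces the 1:N mapping of Test Case 6 step 2 (lcores 11-12, 8 channels):
    print(generate_lcore_dma_param(["0000:00:04.%d" % i for i in range(8)], [11, 12]))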
From patchwork Thu Aug 11 06:16:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 114824 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 78A61A0548; Thu, 11 Aug 2022 08:20:12 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 72E8D42B98; Thu, 11 Aug 2022 08:20:12 +0200 (CEST) Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by mails.dpdk.org (Postfix) with ESMTP id 4177140DDA for ; Thu, 11 Aug 2022 08:20:10 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1660198810; x=1691734810; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=3NrEcaXgnoiXKwoKrwhjIUHQiDd7lUR7bkjP6wA63hw=; b=mFXkv0Dn+b/lkZta5HIqRvMC0iY8iWHEc5vZSm5uCv/FkaxPhrA/CcSl vnyswDN+Hl//+1iM0Uj4v/91LeHNfl8ze6186LX/VWMRK7ku//rjTVJ1M oxXSJ5qhHBhqoa8jAvvp/SFm1cYYXAXP/AhCo+9NNNHqxiON5CqJxL7nC /JCuUtrvUztR1X4Ti6imYwRwDzyxGWv1NLrS1x/xl5jVEQF5014R2Irgl x98nSrp6s6oGxiZxPTQqGIWMxsn3WcfuInSg2+vH0u2203SAI5l5fmQVS 2MogPA8VcBFhgf5WZ7KN5skdEs6YSPTgx98IwamzkST2iN5OAqiJOHuMt w==; X-IronPort-AV: E=McAfee;i="6400,9594,10435"; a="288837482" X-IronPort-AV: E=Sophos;i="5.93,228,1654585200"; d="scan'208,223";a="288837482" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Aug 2022 23:20:08 -0700 X-IronPort-AV: E=Sophos;i="5.93,228,1654585200"; d="scan'208,223";a="665234486" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Aug 2022 23:20:06 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V2 2/3] tests/vhost_cbdma: modify testsuite to test virtio dequeue Date: Thu, 11 Aug 2022 02:16:41 -0400 Message-Id: <20220811061641.386671-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org From DPDK-22.07, virtio support async dequeue for split and packed ring path, so modify vhost_cbdma testsuite to test the split and packed ring async dequeue feature. 
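A small illustrative note on the traffic side: the reworked cases send TCP imix flows with randomized source IPs so that RSS can spread packets across all queues. A rough scapy-style sketch of such frames (scapy field names; the actual generator integration is framework- and TRex-specific, so treat this only as an assumption-laden outline):

    # Hedged sketch: TCP imix frames with random source IPs, as implied by
    # the test plan's "send TCP imix packets ... with random IP" steps.
    from scapy.all import Ether, IP, TCP, RandIP

    def imix_frames(dst_mac, sizes=(64, 1518)):
        frames = []
        for size in sizes:
            pkt = Ether(dst=dst_mac) / IP(src=RandIP()) / TCP()
            pad = max(0, size - len(pkt) - 4)  # 4 bytes reserved for the FCS
            frames.append(pkt / ("x" * pad))
        return frames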
Signed-off-by: Wei Ling --- tests/TestSuite_vhost_cbdma.py | 2158 +++++++++++--------------------- 1 file changed, 713 insertions(+), 1445 deletions(-) diff --git a/tests/TestSuite_vhost_cbdma.py b/tests/TestSuite_vhost_cbdma.py index 0584809d..ff79f581 100644 --- a/tests/TestSuite_vhost_cbdma.py +++ b/tests/TestSuite_vhost_cbdma.py @@ -15,20 +15,20 @@ from framework.settings import HEADER_SIZE, UPDATE_EXPECTED, load_global_setting from framework.test_case import TestCase SPLIT_RING_PATH = { - "inorder_mergeable_path": "mrg_rxbuf=1,in_order=1", - "mergeable_path": "mrg_rxbuf=1,in_order=0", - "inorder_non_mergeable_path": "mrg_rxbuf=0,in_order=1", - "non_mergeable_path": "mrg_rxbuf=0,in_order=0", - "vectorized_path": "mrg_rxbuf=0,in_order=0,vectorized=1", + "inorder_mergeable": "mrg_rxbuf=1,in_order=1", + "mergeable": "mrg_rxbuf=1,in_order=0", + "inorder_non_mergeable": "mrg_rxbuf=0,in_order=1", + "non_mergeable": "mrg_rxbuf=0,in_order=0", + "vectorized": "mrg_rxbuf=0,in_order=0,vectorized=1", } PACKED_RING_PATH = { - "inorder_mergeable_path": "mrg_rxbuf=1,in_order=1,packed_vq=1", - "mergeable_path": "mrg_rxbuf=1,in_order=0,packed_vq=1", - "inorder_non_mergeable_path": "mrg_rxbuf=0,in_order=1,packed_vq=1", - "non_mergeable_path": "mrg_rxbuf=0,in_order=0,packed_vq=1", - "vectorized_path": "mrg_rxbuf=0,in_order=0,vectorized=1,packed_vq=1", - "vectorized_path_not_power_of_2": "mrg_rxbuf=0,in_order=0,vectorized=1,packed_vq=1", + "inorder_mergeable": "mrg_rxbuf=1,in_order=1,packed_vq=1", + "mergeable": "mrg_rxbuf=1,in_order=0,packed_vq=1", + "inorder_non_mergeable": "mrg_rxbuf=0,in_order=1,packed_vq=1", + "non_mergeable": "mrg_rxbuf=0,in_order=0,packed_vq=1", + "vectorized": "mrg_rxbuf=0,in_order=0,vectorized=1,packed_vq=1", + "vectorized_path_not_power_of_2": "mrg_rxbuf=0,in_order=0,vectorized=1,packed_vq=1,queue_size=1025", } @@ -41,12 +41,12 @@ class TestVhostCbdma(TestCase): self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) self.virtio_user_pmd = PmdOutput(self.dut, self.virtio_user) self.virtio_mac = "00:01:02:03:04:05" - self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["tcp"] self.pci_info = self.dut.ports_info[0]["pci"] self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket) self.vhost_core_list = self.cores_list[0:9] - self.virtio_core_list = self.cores_list[9:11] + self.virtio_core_list = self.cores_list[10:15] self.out_path = "/tmp/%s" % self.suite_name out = self.tester.send_expect("ls -d %s" % self.out_path, "# ") if "No such file or directory" in out: @@ -71,7 +71,6 @@ class TestVhostCbdma(TestCase): self.nb_desc = self.test_parameters.get(list(self.test_parameters.keys())[0])[0] self.dut.send_expect("killall -I %s" % self.testpmd_name, "#", 20) self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") - self.dut.send_expect("rm -rf /tmp/s0", "#") self.mode_list = [] def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num, allow_diff_socket=False): @@ -112,33 +111,6 @@ class TestVhostCbdma(TestCase): 60, ) - @staticmethod - def generate_dmas_param(queues): - das_list = [] - for i in range(queues): - das_list.append("txq{}".format(i)) - das_param = "[{}]".format(";".join(das_list)) - return das_param - - @staticmethod - def generate_lcore_dma_param(cbdma_list, core_list): - group_num = int(len(cbdma_list) / len(core_list)) - lcore_dma_list = [] - if len(cbdma_list) == 1: - for core in core_list: - 
lcore_dma_list.append("lcore{}@{}".format(core, cbdma_list[0])) - elif len(core_list) == 1: - for cbdma in cbdma_list: - lcore_dma_list.append("lcore{}@{}".format(core_list[0], cbdma)) - else: - for cbdma in cbdma_list: - core_list_index = int(cbdma_list.index(cbdma) / group_num) - lcore_dma_list.append( - "lcore{}@{}".format(core_list[core_list_index], cbdma) - ) - lcore_dma_param = "[{}]".format(",".join(lcore_dma_list)) - return lcore_dma_param - def bind_cbdma_device_to_kernel(self): self.dut.send_expect("modprobe ioatdma", "# ") self.dut.send_expect( @@ -155,18 +127,20 @@ class TestVhostCbdma(TestCase): port_num = re.search("Number of available ports:\s*(\d*)", out) return int(port_num.group(1)) - def check_each_queue_of_port_packets(self, queues=0): + def check_each_queue_of_port_packets(self, queues): """ check each queue of each port has receive packets """ + self.logger.info(self.vhost_user_pmd.execute_cmd("show port stats all")) out = self.vhost_user_pmd.execute_cmd("stop") + self.logger.info(out) port_num = self.get_vhost_port_num() for port in range(port_num): for queue in range(queues): - if queues > 0: - reg = "Port= %d/Queue= %d" % (port, queue) + if queues == 1: + reg = "Forward statistics for port %d" % port else: - reg = "Forward statistics for port {}".format(port) + reg = "Port= %d/Queue= %d" % (port, queue) index = out.find(reg) rx = re.search("RX-packets:\s*(\d*)", out[index:]) tx = re.search("TX-packets:\s*(\d*)", out[index:]) @@ -174,10 +148,9 @@ class TestVhostCbdma(TestCase): tx_packets = int(tx.group(1)) self.verify( rx_packets > 0 and tx_packets > 0, - "The port {}/queue {} rx-packets or tx-packets is 0 about ".format( - port, queue - ) - + "rx-packets: {}, tx-packets: {}".format(rx_packets, tx_packets), + "The port %d/queue %d rx-packets or tx-packets is 0 about " + % (port, queue) + + "rx-packets: %d, tx-packets: %d" % (rx_packets, tx_packets), ) self.vhost_user_pmd.execute_cmd("start") @@ -195,7 +168,9 @@ class TestVhostCbdma(TestCase): self.vhost_user_pmd.start_testpmd( cores=cores, param=param, eal_param=eal_param, ports=ports, prefix="vhost" ) - self.vhost_user_pmd.execute_cmd("set fwd mac") + if self.nic == "I40E_40G-QSFP_A": + self.vhost_user_pmd.execute_cmd("port config all rss ipv4-tcp") + self.vhost_user_pmd.execute_cmd("set fwd csum") self.vhost_user_pmd.execute_cmd("start") def start_virtio_testpmd(self, cores="Default", param="", eal_param=""): @@ -204,32 +179,42 @@ class TestVhostCbdma(TestCase): self.virtio_user_pmd.start_testpmd( cores=cores, param=param, eal_param=eal_param, no_pci=True, prefix="virtio" ) - self.virtio_user_pmd.execute_cmd("set fwd mac") + self.virtio_user_pmd.execute_cmd("set fwd csum") self.virtio_user_pmd.execute_cmd("start") - self.virtio_user_pmd.execute_cmd("show port info all") - def test_perf_pvp_split_all_path_vhost_txq_1_to_1_cbdma(self): + def test_perf_pvp_split_ring_all_path_multi_queues_vhost_async_operation_with_1_to_1( + self, + ): """ - Test Case 1: PVP split ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels + Test Case 1: PVP split ring all path multi-queues vhost async operation with 1 to 1 mapping between vrings and CBDMA virtual channels """ - cbdma_num = 1 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - dmas = self.generate_dmas_param(1) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=4) + lcore_dma = ( + "lcore%s@%s," + 
"lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[1], + self.cbdma_list[2], + self.vhost_core_list[1], + self.cbdma_list[3], + ) ) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas={},dma_ring_size=2048'".format( - dmas + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost_net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]'" ) vhost_param = ( - "--nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) + "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma ) - virtio_param = "--nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) + for i in self.cbdma_list: + allow_pci.append(i) self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -237,18 +222,17 @@ class TestVhostCbdma(TestCase): ports=allow_pci, iova_mode="va", ) + virtio_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" for key, path in SPLIT_RING_PATH.items(): virtio_eal_param = ( - "--vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( - self.virtio_mac, path - ) + "--vdev 'net_virtio_user0,mac=%s,path=vhost_net0,%s,queues=2'" + % (self.virtio_mac, path) ) - if key == "non_mergeable_path": + if key == "non_mergeable": new_virtio_param = "--enable-hw-vlan-strip " + virtio_param else: new_virtio_param = virtio_param - self.logger.info("Start virtio-user with {} path with {}".format(key, path)) mode = key + "_VA" self.mode_list.append(mode) self.start_virtio_testpmd( @@ -257,12 +241,12 @@ class TestVhostCbdma(TestCase): eal_param=virtio_eal_param, ) self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() + self.check_each_queue_of_port_packets(queues=2) - self.logger.info("Restart vhost with {} path with {}".format(key, path)) mode += "_RestartVhost" + self.mode_list.append(mode) self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() + self.check_each_queue_of_port_packets(queues=2) self.virtio_user_pmd.quit() if not self.check_2M_env: @@ -275,37 +259,29 @@ class TestVhostCbdma(TestCase): iova_mode="pa", ) for key, path in SPLIT_RING_PATH.items(): - virtio_eal_param = ( - "--vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( - self.virtio_mac, path + if key == "inorder_mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=./vhost_net0,%s,queues=2'" + % (self.virtio_mac, path) ) - ) - if key == "non_mergeable_path": - new_virtio_param = "--enable-hw-vlan-strip " + virtio_param - else: - new_virtio_param = virtio_param - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - mode = key + "_PA" - self.mode_list.append(mode) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=new_virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() + mode = key + "_PA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + + mode += "_RestartVhost" + self.vhost_user_pmd.execute_cmd("start") + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) + self.virtio_user_pmd.quit() - 
self.logger.info("Restart vhost with {} path with {}".format(key, path)) - mode += "_RestartVhost" - self.vhost_user_pmd.execute_cmd("start") - self.mode_list.append(mode) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() - self.virtio_user_pmd.quit() - self.result_table_print() self.test_target = self.running_case self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ self.test_target @@ -314,28 +290,37 @@ class TestVhostCbdma(TestCase): self.handle_results(mode_list=self.mode_list) self.vhost_user_pmd.quit() - def test_perf_pvp_split_all_path_multi_queues_vhost_txq_1_to_1_cbdma(self): + def test_perf_pvp_split_ring_all_path_multi_queues_vhost_async_operation_with_M_to_1( + self, + ): """ - Test Case 2: PVP split ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels + Test Case 2: PVP split ring all path multi-queues vhost async operations with M to 1 mapping between vrings and CBDMA virtual channels """ - cbdma_num = 8 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - dmas = self.generate_dmas_param(8) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:] - ) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( - dmas + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=4) + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[1], + self.vhost_core_list[3], + self.cbdma_list[2], + self.vhost_core_list[4], + self.cbdma_list[3], + ) ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" vhost_param = ( - " --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) + "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma ) - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) + for i in self.cbdma_list: + allow_pci.append(i) self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -343,18 +328,17 @@ class TestVhostCbdma(TestCase): ports=allow_pci, iova_mode="va", ) + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" for key, path in SPLIT_RING_PATH.items(): virtio_eal_param = ( - "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( - self.virtio_mac, path - ) + "--vdev 'net_virtio_user0,mac=%s,path=vhost_net0,%s,queues=8'" + % (self.virtio_mac, path) ) - if key == "non_mergeable_path": + if key == "non_mergeable": new_virtio_param = "--enable-hw-vlan-strip " + virtio_param else: new_virtio_param = virtio_param - self.logger.info("Start virtio-user with {} path with {}".format(key, path)) mode = key + "_VA" self.mode_list.append(mode) self.start_virtio_testpmd( @@ -365,16 +349,46 @@ class TestVhostCbdma(TestCase): self.send_imix_packets(mode=mode) self.check_each_queue_of_port_packets(queues=8) - self.logger.info("Restart host with {} path with {}".format(key, path)) mode += "_RestartVhost" self.mode_list.append(mode) - self.vhost_user_pmd.execute_cmd("start") self.send_imix_packets(mode=mode) self.check_each_queue_of_port_packets(queues=8) self.virtio_user_pmd.quit() if not self.check_2M_env: self.vhost_user_pmd.quit() 
+ lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[0], + self.vhost_core_list[3], + self.cbdma_list[0], + self.vhost_core_list[4], + self.cbdma_list[0], + self.vhost_core_list[5], + self.cbdma_list[0], + self.vhost_core_list[6], + self.cbdma_list[0], + self.vhost_core_list[7], + self.cbdma_list[0], + self.vhost_core_list[8], + self.cbdma_list[0], + ) + ) + vhost_param = ( + "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma + ) self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -383,15 +397,13 @@ class TestVhostCbdma(TestCase): iova_mode="pa", ) for key, path in SPLIT_RING_PATH.items(): - if key == "mergeable_path": - virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( - self.virtio_mac, path + if key == "inorder_mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=./vhost_net0,%s,queues=8'" + % (self.virtio_mac, path) ) mode = key + "_PA" - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) self.mode_list.append(mode) self.start_virtio_testpmd( cores=self.virtio_core_list, @@ -402,16 +414,12 @@ class TestVhostCbdma(TestCase): self.check_each_queue_of_port_packets(queues=8) mode += "_RestartVhost" - self.logger.info( - "Restart host with {} path with {}".format(key, path) - ) - self.mode_list.append(mode) self.vhost_user_pmd.execute_cmd("start") + self.mode_list.append(mode) self.send_imix_packets(mode=mode) self.check_each_queue_of_port_packets(queues=8) self.virtio_user_pmd.quit() - self.result_table_print() self.test_target = self.running_case self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ self.test_target @@ -420,28 +428,49 @@ class TestVhostCbdma(TestCase): self.handle_results(mode_list=self.mode_list) self.vhost_user_pmd.quit() - def test_perf_pvp_split_all_path_multi_queues_vhost_txq_M_2_1_cbdma(self): + def test_perf_pvp_split_ring_dynamic_queue_number_vhost_async_operation_with_M_to_N( + self, + ): """ - Test Case 3: PVP split ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels + Test Case 3: PVP split ring dynamic queue number vhost async operations with M to N mapping between vrings and CBDMA virtual channels """ - cbdma_num = 1 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - dmas = self.generate_dmas_param(8) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] - ) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( - dmas + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=8) + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[1], + self.cbdma_list[2], + self.vhost_core_list[1], + self.cbdma_list[3], + self.vhost_core_list[2], + self.cbdma_list[4], + self.vhost_core_list[2], + self.cbdma_list[5], + self.vhost_core_list[2], + self.cbdma_list[6], + self.vhost_core_list[2], + self.cbdma_list[7], + ) ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,client=1,dmas=[txq0;txq1;rxq0;rxq1]'" vhost_param = ( - " 
--nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) + "--nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma ) - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) + for i in self.cbdma_list: + allow_pci.append(i) self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -449,44 +478,59 @@ class TestVhostCbdma(TestCase): ports=allow_pci, iova_mode="va", ) + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" for key, path in SPLIT_RING_PATH.items(): virtio_eal_param = ( - "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( - self.virtio_mac, path - ) - ) - if key == "non_mergeable_path": - new_virtio_param = "--enable-hw-vlan-strip " + virtio_param - else: - new_virtio_param = virtio_param - - mode = key + "_VA" + "_1_lcore" - self.mode_list.append(mode) - self.logger.info("Start virtio-user with {} path with {}".format(key, path)) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=new_virtio_param, - eal_param=virtio_eal_param, + "--vdev 'net_virtio_user0,mac=%s,path=vhost_net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) + if key == "inorder_mergeable": + mode = key + "_VA_1:N" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=2) - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() + self.vhost_user_pmd.quit() + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,client=1,'" + vhost_param = "--nb-cores=2 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=[self.dut.ports_info[0]["pci"]], + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_without_CBDMA" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=1) self.vhost_user_pmd.quit() - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:4] + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[1], + self.vhost_core_list[3], + self.cbdma_list[2], + self.vhost_core_list[4], + self.cbdma_list[3], + ) ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]'" vhost_param = ( - " --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) + "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma ) self.start_vhost_testpmd( cores=self.vhost_core_list, @@ -495,44 +539,106 @@ class TestVhostCbdma(TestCase): ports=allow_pci, iova_mode="va", ) - for key, path in SPLIT_RING_PATH.items(): - if key == "inorder_mergeable_path": - virtio_eal_param = ( - "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( - self.virtio_mac, 
path - ) - ) - - mode = key + "_VA" + "_3_lcore" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() + mode = "inorder_mergeable" + "_VA_1:1" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=4) self.vhost_user_pmd.quit() - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:9] + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[2], + self.cbdma_list[1], + self.vhost_core_list[2], + self.cbdma_list[2], + self.vhost_core_list[2], + self.cbdma_list[3], + self.vhost_core_list[2], + self.cbdma_list[4], + self.vhost_core_list[2], + self.cbdma_list[5], + self.vhost_core_list[2], + self.cbdma_list[6], + self.vhost_core_list[2], + self.cbdma_list[7], + ) ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" vhost_param = ( - " --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma + "--nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + mode = "inorder_mergeable" + "_VA_M:N" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.vhost_user_pmd.quit() + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[1], + self.vhost_core_list[2], + self.cbdma_list[2], + self.vhost_core_list[3], + self.cbdma_list[3], + self.vhost_core_list[3], + self.cbdma_list[4], + self.vhost_core_list[3], + self.cbdma_list[5], + self.vhost_core_list[3], + self.cbdma_list[6], + self.vhost_core_list[4], + self.cbdma_list[4], + self.vhost_core_list[4], + self.cbdma_list[5], + self.vhost_core_list[4], + self.cbdma_list[6], + self.vhost_core_list[4], + self.cbdma_list[7], ) ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + vhost_param = ( + "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma + ) self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -540,18 +646,21 @@ class TestVhostCbdma(TestCase): ports=allow_pci, iova_mode="va", ) + mode = "inorder_mergeable" + "_VA_M:N_diff" + self.mode_list.append(mode) + 
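# This "M:N_diff" layout is deliberately asymmetric: vhost lcore[1] shares
# channel 0 with lcore[2], lcore[2] additionally drives channels 1-2,
# lcore[3] drives channels 3-6, and lcore[4] drives channels 4-7. Some lcores
# therefore own several DMA channels while channels 0 and 4-6 are each shared
# by two lcores; combined with dmas=[txq0-txq5;rxq2-rxq7] this exercises M:N
# multiplexing between vrings and CBDMA virtual channels in both directions.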
self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.virtio_user_pmd.quit() + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" for key, path in SPLIT_RING_PATH.items(): - if key == "mergeable_path": + if key == "mergeable": virtio_eal_param = ( - "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( - self.virtio_mac, path - ) + "--vdev 'net_virtio_user0,mac=%s,path=vhost_net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) ) - mode = key + "_VA" + "_8_lcore" + mode = key + "_VA_M:N_diff" self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) self.start_virtio_testpmd( cores=self.virtio_core_list, param=virtio_param, @@ -560,52 +669,52 @@ class TestVhostCbdma(TestCase): self.send_imix_packets(mode=mode) self.check_each_queue_of_port_packets(queues=8) - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - - if not self.check_2M_env: - self.vhost_user_pmd.quit() - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="pa", + self.vhost_user_pmd.quit() + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[0], + self.vhost_core_list[3], + self.cbdma_list[1], + self.vhost_core_list[3], + self.cbdma_list[2], + self.vhost_core_list[4], + self.cbdma_list[1], + self.vhost_core_list[4], + self.cbdma_list[2], + self.vhost_core_list[5], + self.cbdma_list[1], + self.vhost_core_list[5], + self.cbdma_list[2], ) - for key, path in SPLIT_RING_PATH.items(): - if key == "inorder_non_mergeable_path": - virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( - self.virtio_mac, path - ) - - mode = key + "_PA" + "_8_lcore" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info( - "Restart host with {} path with {}".format(key, path) - ) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" + vhost_param = ( + "--nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="pa", + ) + mode = "mergeable" + "_PA_M:N_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) - self.result_table_print() self.test_target = self.running_case self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ self.test_target @@ -614,28 +723,39 @@ class TestVhostCbdma(TestCase): 
self.handle_results(mode_list=self.mode_list) self.vhost_user_pmd.quit() - def test_perf_pvp_split_all_path_vhost_txq_1_to_N_cbdma(self): + def test_perf_pvp_packed_ring_all_path_multi_queues_vhost_async_operation_with_1_to_1( + self, + ): """ - Test Case 4: PVP split ring all path vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels + Test Case 4: PVP packed ring all path multi-queues vhost async operations with 1 to 1 mapping between vrings and CBDMA virtual channels """ - cbdma_num = 8 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - dmas = self.generate_dmas_param(1) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=4) + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[1], + self.cbdma_list[2], + self.vhost_core_list[1], + self.cbdma_list[3], + ) ) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas={},dma_ring_size=2048'".format( - dmas + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost_net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]'" ) vhost_param = ( - " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) + "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma ) - virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) + for i in self.cbdma_list: + allow_pci.append(i) self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -643,34 +763,32 @@ class TestVhostCbdma(TestCase): ports=allow_pci, iova_mode="va", ) - for key, path in SPLIT_RING_PATH.items(): + for key, path in PACKED_RING_PATH.items(): virtio_eal_param = ( - "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=1".format( - self.virtio_mac, path - ) + "--vdev 'net_virtio_user0,mac=%s,path=vhost_net0,%s,queues=2'" + % (self.virtio_mac, path) ) - if key == "non_mergeable_path": - new_virtio_param = "--enable-hw-vlan-strip " + virtio_param + if "vectorized" in key: + virtio_eal_param = "--force-max-simd-bitwidth=512 " + virtio_eal_param + if key == "vectorized_path_not_power_of_2": + virtio_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1025 --rxd=1025" else: - new_virtio_param = virtio_param + virtio_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" mode = key + "_VA" self.mode_list.append(mode) - self.logger.info("Start virtio-user with {} path with {}".format(key, path)) self.start_virtio_testpmd( cores=self.virtio_core_list, - param=new_virtio_param, + param=virtio_param, eal_param=virtio_eal_param, ) self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() + self.check_each_queue_of_port_packets(queues=2) mode += "_RestartVhost" self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() + self.check_each_queue_of_port_packets(queues=2) self.virtio_user_pmd.quit() if not self.check_2M_env: @@ -682,36 +800,31 @@ class TestVhostCbdma(TestCase): ports=allow_pci, iova_mode="pa", ) - for key, path in SPLIT_RING_PATH.items(): - if key == "non_mergeable_path": - virtio_eal_param = 
"--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=1".format( - self.virtio_mac, path + virtio_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + for key, path in PACKED_RING_PATH.items(): + if key == "inorder_mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=./vhost_net0,%s,queues=2'" + % (self.virtio_mac, path) ) mode = key + "_PA" self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) self.start_virtio_testpmd( cores=self.virtio_core_list, param=virtio_param, eal_param=virtio_eal_param, ) self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() + self.check_each_queue_of_port_packets(queues=2) mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info( - "Restart host with {} path with {}".format(key, path) - ) self.vhost_user_pmd.execute_cmd("start") + self.mode_list.append(mode) self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() + self.check_each_queue_of_port_packets(queues=2) self.virtio_user_pmd.quit() - self.result_table_print() self.test_target = self.running_case self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ self.test_target @@ -720,28 +833,37 @@ class TestVhostCbdma(TestCase): self.handle_results(mode_list=self.mode_list) self.vhost_user_pmd.quit() - def test_perf_pvp_split_all_path_multi_queues_vhost_txq_M_to_N_cbdma(self): + def test_perf_pvp_packed_ring_all_path_multi_queues_vhost_async_operation_with_M_to_1( + self, + ): """ - Test Case 5: PVP split ring all path multi-queues vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels + Test Case 5: PVP packed ring all path multi-queues vhost async operations with M to 1 mapping between vrings and CBDMA virtual channels """ - cbdma_num = 8 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - dmas = self.generate_dmas_param(3) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] - ) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( - dmas + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=4) + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[1], + self.vhost_core_list[3], + self.cbdma_list[2], + self.vhost_core_list[4], + self.cbdma_list[3], + ) ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" vhost_param = ( - " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) + "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma ) - virtio_param = "--nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) + for i in self.cbdma_list: + allow_pci.append(i) self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -749,18 +871,20 @@ class TestVhostCbdma(TestCase): ports=allow_pci, iova_mode="va", ) - for key, path in SPLIT_RING_PATH.items(): + for key, path in PACKED_RING_PATH.items(): virtio_eal_param = ( - "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( - self.virtio_mac, path - ) + "--vdev 'net_virtio_user0,mac=%s,path=vhost_net0,%s,queues=8'" + % (self.virtio_mac, path) ) - 
if key == "non_mergeable_path": - virtio_param = "--enable-hw-vlan-strip " + virtio_param + if "vectorized" in key: + virtio_eal_param = "--force-max-simd-bitwidth=512 " + virtio_eal_param + if key == "vectorized_path_not_power_of_2": + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1025 --rxd=1025" + else: + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - mode = key + "_VA" + "_3dmas" + mode = key + "_VA" self.mode_list.append(mode) - self.logger.info("Start virtio-user with {} path with {}".format(key, path)) self.start_virtio_testpmd( cores=self.virtio_core_list, param=virtio_param, @@ -771,55 +895,44 @@ class TestVhostCbdma(TestCase): mode += "_RestartVhost" self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") self.send_imix_packets(mode=mode) self.check_each_queue_of_port_packets(queues=8) self.virtio_user_pmd.quit() - self.vhost_user_pmd.quit() - dmas = self.generate_dmas_param(8) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( - dmas - ) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="va", - ) - for key, path in SPLIT_RING_PATH.items(): - if key == "inorder_non_mergeable_path": - virtio_eal_param = ( - "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( - self.virtio_mac, path - ) - ) - - mode = key + "_VA" + "_8dmas" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - if not self.check_2M_env: self.vhost_user_pmd.quit() + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[0], + self.vhost_core_list[3], + self.cbdma_list[0], + self.vhost_core_list[4], + self.cbdma_list[0], + self.vhost_core_list[5], + self.cbdma_list[0], + self.vhost_core_list[6], + self.cbdma_list[0], + self.vhost_core_list[7], + self.cbdma_list[0], + self.vhost_core_list[8], + self.cbdma_list[0], + ) + ) + vhost_param = ( + "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma + ) self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -827,17 +940,16 @@ class TestVhostCbdma(TestCase): ports=allow_pci, iova_mode="pa", ) - for key, path in SPLIT_RING_PATH.items(): - if key == "vectorized_path": - virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( - self.virtio_mac, path + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in PACKED_RING_PATH.items(): + if key == "inorder_mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=./vhost_net0,%s,queues=8'" + % (self.virtio_mac, path) ) - mode = key + "_PA" + "_8dmas" + mode = key + "_PA" self.mode_list.append(mode) - 
self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) self.start_virtio_testpmd( cores=self.virtio_core_list, param=virtio_param, @@ -847,16 +959,12 @@ class TestVhostCbdma(TestCase): self.check_each_queue_of_port_packets(queues=8) mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info( - "Restart host with {} path with {}".format(key, path) - ) self.vhost_user_pmd.execute_cmd("start") + self.mode_list.append(mode) self.send_imix_packets(mode=mode) self.check_each_queue_of_port_packets(queues=8) self.virtio_user_pmd.quit() - self.result_table_print() self.test_target = self.running_case self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ self.test_target @@ -865,262 +973,217 @@ class TestVhostCbdma(TestCase): self.handle_results(mode_list=self.mode_list) self.vhost_user_pmd.quit() - def test_perf_pvp_split_dynamic_queues_vhost_txq_M_to_N_cbdma(self): + def test_perf_pvp_packed_ring_dynamic_queue_number_vhost_async_operation_with_M_to_N( + self, + ): """ - Test Case 6: PVP split ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels + Test Case 6: PVP packed ring dynamic queue number vhost async operations with M to N mapping between vrings and CBDMA virtual channels """ - cbdma_num = 8 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1'" - vhost_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=8) + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[1], + self.cbdma_list[1], + self.vhost_core_list[1], + self.cbdma_list[2], + self.vhost_core_list[1], + self.cbdma_list[3], + self.vhost_core_list[2], + self.cbdma_list[4], + self.vhost_core_list[2], + self.cbdma_list[5], + self.vhost_core_list[2], + self.cbdma_list[6], + self.vhost_core_list[2], + self.cbdma_list[7], + ) + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,client=1,dmas=[txq0;txq1;rxq0;rxq1]'" + vhost_param = ( + "--nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma + ) allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) + for i in self.cbdma_list: + allow_pci.append(i) self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, eal_param=vhost_eal_param, - ports=allow_pci[0:1], + ports=allow_pci, iova_mode="va", ) - for key, path in SPLIT_RING_PATH.items(): - if key == "inorder_mergeable_path": - virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8,server=1".format( - self.virtio_mac, path - ) - - mode = key + "_VA" + "_without_cbdma" + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost_net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) + ) + if key == "inorder_mergeable": + mode = key + "_VA_1:N" self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) self.start_virtio_testpmd( cores=self.virtio_core_list, param=virtio_param, eal_param=virtio_eal_param, ) self.send_imix_packets(mode=mode) - 
self.check_each_queue_of_port_packets() - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() + self.check_each_queue_of_port_packets(queues=2) self.vhost_user_pmd.quit() - dmas = self.generate_dmas_param(4) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list[0:4], core_list=self.vhost_core_list[1:5] - ) - vhost_eal_param = ( - "--vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas={}'".format(dmas) - ) - vhost_param = ( - " --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) - ) - + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,client=1,'" + vhost_param = "--nb-cores=2 --txq=1 --rxq=1 --txd=1024 --rxd=1024" self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, eal_param=vhost_eal_param, - ports=allow_pci[0:5], + ports=[self.dut.ports_info[0]["pci"]], iova_mode="va", ) - - for key, path in SPLIT_RING_PATH.items(): - if key == "inorder_mergeable_path": - - mode = key + "_VA" + "_1:1" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=4) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=4) + mode = "inorder_mergeable" + "_VA_without_CBDMA" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=1) self.vhost_user_pmd.quit() - dmas = self.generate_dmas_param(queues=8) - core1 = self.vhost_core_list[1] - core2 = self.vhost_core_list[2] - core3 = self.vhost_core_list[3] - core4 = self.vhost_core_list[4] - core5 = self.vhost_core_list[5] - cbdma0 = self.cbdma_list[0] - cbdma1 = self.cbdma_list[1] - cbdma2 = self.cbdma_list[2] - cbdma3 = self.cbdma_list[3] - cbdma4 = self.cbdma_list[4] - cbdma5 = self.cbdma_list[5] - cbdma6 = self.cbdma_list[6] - cbdma7 = self.cbdma_list[7] lcore_dma = ( - f"[lcore{core1}@{cbdma0},lcore{core1}@{cbdma7}," - + f"lcore{core2}@{cbdma1},lcore{core2}@{cbdma2},lcore{core2}@{cbdma3}," - + f"lcore{core3}@{cbdma2},lcore{core3}@{cbdma3},lcore{core3}@{cbdma4}," - f"lcore{core4}@{cbdma2},lcore{core4}@{cbdma3},lcore{core4}@{cbdma4},lcore{core4}@{cbdma5}," - f"lcore{core5}@{cbdma0},lcore{core5}@{cbdma1},lcore{core5}@{cbdma2},lcore{core5}@{cbdma3}," - f"lcore{core5}@{cbdma4},lcore{core5}@{cbdma5},lcore{core5}@{cbdma6},lcore{core5}@{cbdma7}]" - ) - vhost_eal_param = ( - "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas={}'".format(dmas) + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[1], + self.vhost_core_list[3], + self.cbdma_list[2], + self.vhost_core_list[4], + self.cbdma_list[3], + ) ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]'" vhost_param = ( - " --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) + "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma ) self.start_vhost_testpmd( cores=self.vhost_core_list, 
param=vhost_param, eal_param=vhost_eal_param, - ports=allow_pci[0:9], + ports=allow_pci, iova_mode="va", ) - - for key, path in SPLIT_RING_PATH.items(): - if key == "inorder_mergeable_path": - - mode = key + "_VA" + "_MN" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - if not self.check_2M_env: - self.vhost_user_pmd.quit() - dmas = self.generate_dmas_param(queues=8) - - vhost_eal_param = ( - "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas={}'".format( - dmas - ) - ) - vhost_param = " --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci[0:5], - iova_mode="pa", - ) - - for key, path in SPLIT_RING_PATH.items(): - if key == "inorder_mergeable_path": - - mode = key + "_PA" + "_M>N" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info( - "ReLaunch host with {} path with {}".format(key, path) - ) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - self.result_table_print() - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - self.handle_expected(mode_list=self.mode_list) - self.handle_results(mode_list=self.mode_list) - self.virtio_user_pmd.quit() self.vhost_user_pmd.quit() - - def test_perf_pvp_packed_all_path_vhost_txq_1_to_1_cbdma(self): - """ - Test Case 7: PVP packed ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels - """ - cbdma_num = 1 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - dmas = self.generate_dmas_param(1) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] - ) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas={},dma_ring_size=2048'".format( - dmas + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[1], + self.vhost_core_list[2], + self.cbdma_list[2], + self.vhost_core_list[3], + self.cbdma_list[3], + self.vhost_core_list[3], + self.cbdma_list[4], + self.vhost_core_list[3], + self.cbdma_list[5], + self.vhost_core_list[3], + self.cbdma_list[6], + self.vhost_core_list[4], + self.cbdma_list[4], + self.vhost_core_list[4], + self.cbdma_list[5], + self.vhost_core_list[4], + self.cbdma_list[6], + self.vhost_core_list[4], + self.cbdma_list[7], + ) ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" 
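# Only txq0-txq5 and rxq2-rxq7 get DMA offload in this dmas spec; txq6/txq7
# and rxq0/rxq1 stay on the CPU copy path. A small sketch of how such a spec
# could be generated (hypothetical helper, shown for illustration only):
def _build_dmas_spec(tx_queues, rx_queues):
    # _build_dmas_spec(range(6), range(2, 8)) ->
    # "[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]"
    parts = ["txq%d" % q for q in tx_queues]
    parts += ["rxq%d" % q for q in rx_queues]
    return "[%s]" % ";".join(parts)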
vhost_param = ( - " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) + "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma ) - allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -1128,188 +1191,75 @@ class TestVhostCbdma(TestCase): ports=allow_pci, iova_mode="va", ) + mode = "inorder_mergeable" + "_VA_M:N_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.virtio_user_pmd.quit() + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" for key, path in PACKED_RING_PATH.items(): - virtio_eal_param = ( - " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( - self.virtio_mac, path + if key == "mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=vhost_net0,%s,queues=8,server=1'" + % (self.virtio_mac, path) ) - ) - virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" - if key == "vectorized_path_not_power_of_2": - virtio_eal_param += ",queue_size=1025" - virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025" - if "vectorized" in key: - virtio_eal_param += " --force-max-simd-bitwidth=512" - - mode = key + "_VA" - self.mode_list.append(mode) - self.logger.info("Start virtio-user with {} path with {}".format(key, path)) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() - self.virtio_user_pmd.quit() - - if not self.check_2M_env: - self.vhost_user_pmd.quit() - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="pa", - ) - for key, path in PACKED_RING_PATH.items(): - if key == "inorder_mergeable_path": - virtio_eal_param = " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( - self.virtio_mac, path - ) - virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" - - mode = key + "_PA" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info( - "Restart host with {} path with {}".format(key, path) - ) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() - self.virtio_user_pmd.quit() + mode = key + "_VA_M:N_diff" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) - self.result_table_print() - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - self.handle_expected(mode_list=self.mode_list) 
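# The enqueue-only packed-ring cases being removed here (the former Test
# Cases 7-12) are superseded because DPDK 22.07 adds async dequeue support on
# split and packed rings; each rewritten case above therefore covers both txq
# (enqueue) and rxq (dequeue) offload in a single PVP run instead of separate
# per-direction tests.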
- self.handle_results(mode_list=self.mode_list) self.vhost_user_pmd.quit() - - def test_perf_pvp_packed_all_path_multi_queues_vhost_txq_1_to_1_cbdma(self): - """ - Test Case 8: PVP packed ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels - """ - cbdma_num = 8 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - dmas = self.generate_dmas_param(8) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:] - ) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( - dmas + lcore_dma = ( + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s," + "lcore%s@%s" + % ( + self.vhost_core_list[1], + self.cbdma_list[0], + self.vhost_core_list[2], + self.cbdma_list[0], + self.vhost_core_list[3], + self.cbdma_list[1], + self.vhost_core_list[3], + self.cbdma_list[2], + self.vhost_core_list[4], + self.cbdma_list[1], + self.vhost_core_list[4], + self.cbdma_list[2], + self.vhost_core_list[5], + self.cbdma_list[1], + self.vhost_core_list[5], + self.cbdma_list[2], + ) ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" vhost_param = ( - " --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) + "--nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" + % lcore_dma ) - allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, eal_param=vhost_eal_param, ports=allow_pci, - iova_mode="va", + iova_mode="pa", ) - for key, path in PACKED_RING_PATH.items(): - virtio_eal_param = ( - " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( - self.virtio_mac, path - ) - ) - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - if key == "vectorized_path_not_power_of_2": - virtio_eal_param += ",queue_size=1025" - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025" - if "vectorized" in key: - virtio_eal_param += " --force-max-simd-bitwidth=512" - - mode = key + "_VA" - self.mode_list.append(mode) - self.logger.info("Start virtio-user with {} path with {}".format(key, path)) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - - if not self.check_2M_env: - self.vhost_user_pmd.quit() - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="pa", - ) - for key, path in PACKED_RING_PATH.items(): - if key == "mergeable_path": - virtio_param = ( - " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - ) - virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( - self.virtio_mac, path - ) - - mode = key + "_PA" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.start_virtio_testpmd( - 
cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info( - "Restart host with {} path with {}".format(key, path) - ) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() + mode = "mergeable" + "_PA_M:N_diff" + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) - self.result_table_print() self.test_target = self.running_case self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ self.test_target @@ -1318,688 +1268,6 @@ class TestVhostCbdma(TestCase): self.handle_results(mode_list=self.mode_list) self.vhost_user_pmd.quit() - def test_perf_pvp_packed_all_path_multi_queues_vhost_txq_M_to_1_cbdma(self): - """ - Test Case 9: PVP packed ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels - """ - cbdma_num = 1 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - dmas = self.generate_dmas_param(8) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] - ) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( - dmas - ) - vhost_param = ( - " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) - ) - allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="va", - ) - for key, path in PACKED_RING_PATH.items(): - virtio_eal_param = ( - " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( - self.virtio_mac, path - ) - ) - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - if key == "vectorized_path_not_power_of_2": - virtio_eal_param += ",queue_size=1025" - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025" - - mode = key + "_VA" + "_1_lcore" - self.mode_list.append(mode) - self.logger.info("Start virtio-user with {} path with {}".format(key, path)) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - - self.vhost_user_pmd.quit() - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:4] - ) - vhost_param = ( - " --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) - ) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="va", - ) - for key, path in PACKED_RING_PATH.items(): - if key == "inorder_mergeable_path": - virtio_eal_param = ( - " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( - self.virtio_mac, path - ) - ) - virtio_param = " --nb-cores=1 --txq=8 
--rxq=8 --txd=1024 --rxd=1024" - - mode = key + "_VA" + "_3_lcore" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - - self.vhost_user_pmd.quit() - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:9] - ) - vhost_param = ( - " --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) - ) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="va", - ) - for key, path in PACKED_RING_PATH.items(): - if key == "mergeable_path": - virtio_eal_param = ( - " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( - self.virtio_mac, path - ) - ) - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - - mode = key + "_VA" + "_8_lcore" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - - if not self.check_2M_env: - self.vhost_user_pmd.quit() - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="pa", - ) - for key, path in PACKED_RING_PATH.items(): - if key == "inorder_non_mergeable_path": - virtio_eal_param = " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( - self.virtio_mac, path - ) - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - - mode = key + "_PA" + "_8_lcore" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info( - "Restart host with {} path with {}".format(key, path) - ) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - - self.result_table_print() - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - self.handle_expected(mode_list=self.mode_list) - self.handle_results(mode_list=self.mode_list) - self.vhost_user_pmd.quit() - - def test_perf_pvp_packed_all_path_vhost_txq_1_to_N_cbdma(self): - """ - Test Case 10: PVP packed ring all path vhost enqueue operations with 1 to N 
mapping between vrings and CBDMA virtual channels - """ - cbdma_num = 8 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - dmas = self.generate_dmas_param(1) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] - ) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas={},dma_ring_size=2048'".format( - dmas - ) - vhost_param = ( - " --nb-cores=1 --txq=1 --rxq=1--txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) - ) - allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="va", - ) - for key, path in PACKED_RING_PATH.items(): - virtio_eal_param = ( - " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( - self.virtio_mac, path - ) - ) - virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" - if key == "vectorized_path_not_power_of_2": - virtio_eal_param += ",queue_size=1025" - virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025" - - mode = key + "_VA" - self.mode_list.append(mode) - self.logger.info("Start virtio-user with {} path with {}".format(key, path)) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() - self.virtio_user_pmd.quit() - - if not self.check_2M_env: - self.vhost_user_pmd.quit() - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="pa", - ) - for key, path in PACKED_RING_PATH.items(): - if key == "non_mergeable_path": - virtio_eal_param = " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( - self.virtio_mac, path - ) - virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" - - mode = key + "_PA" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info( - "Restart host with {} path with {}".format(key, path) - ) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() - self.virtio_user_pmd.quit() - - self.result_table_print() - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - self.handle_expected(mode_list=self.mode_list) - self.handle_results(mode_list=self.mode_list) - self.vhost_user_pmd.quit() - - def test_perf_pvp_packed_all_path_multi_queues_vhost_txq_M_to_N_cbdma(self): - """ - Test Case 11: PVP packed ring all path vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels - """ - cbdma_num = 8 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - dmas = self.generate_dmas_param(3) - lcore_dma = self.generate_lcore_dma_param( - 
cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] - ) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( - dmas - ) - vhost_param = ( - " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) - ) - allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="va", - ) - for key, path in PACKED_RING_PATH.items(): - virtio_eal_param = ( - " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( - self.virtio_mac, path - ) - ) - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - if key == "vectorized_path_not_power_of_2": - virtio_eal_param += ",queue_size=1025" - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025" - - mode = key + "_VA" + "_3dmas" - self.mode_list.append(mode) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.logger.info("Start virtio-user with {} path with {}".format(key, path)) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - - self.vhost_user_pmd.quit() - dmas = self.generate_dmas_param(queues=8) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( - dmas - ) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="va", - ) - for key, path in PACKED_RING_PATH.items(): - if key == "inorder_non_mergeable_path": - virtio_eal_param = ( - " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( - self.virtio_mac, path - ) - ) - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - - mode = key + "_VA" + "_8dmas" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - - if not self.check_2M_env: - self.vhost_user_pmd.quit() - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="pa", - ) - for key, path in PACKED_RING_PATH.items(): - if key == "vectorized_path": - virtio_eal_param = " --force-max-simd-bitwidth=512 --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( - self.virtio_mac, path - ) - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - - mode = key + "_PA" + "_8dmas" - self.mode_list.append(mode) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - 
self.send_imix_packets(mode=mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.logger.info( - "Restart host with {} path with {}".format(key, path) - ) - self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - - self.result_table_print() - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - self.handle_expected(mode_list=self.mode_list) - self.handle_results(mode_list=self.mode_list) - self.vhost_user_pmd.quit() - - def test_perf_pvp_packed_dynamic_queues_vhost_txq_M_to_N_cbdma(self): - """ - Test Case 12: PVP packed ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels - """ - cbdma_num = 8 - self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) - vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1'" - vhost_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" - virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - allow_pci = [self.dut.ports_info[0]["pci"]] - for i in range(cbdma_num): - allow_pci.append(self.cbdma_list[i]) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci[0:1], - iova_mode="va", - ) - for key, path in PACKED_RING_PATH.items(): - if key == "inorder_mergeable_path": - virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8,server=1".format( - self.virtio_mac, path - ) - - mode = key + "_VA" + "_without_cbdma" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets() - - self.vhost_user_pmd.quit() - dmas = self.generate_dmas_param(4) - lcore_dma = self.generate_lcore_dma_param( - cbdma_list=self.cbdma_list[0:4], core_list=self.vhost_core_list[1:5] - ) - vhost_eal_param = ( - "--vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas={}'".format(dmas) - ) - vhost_param = ( - " --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) - ) - - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci[0:5], - iova_mode="va", - ) - for key, path in PACKED_RING_PATH.items(): - if key == "inorder_mergeable_path": - - mode = key + "_VA" + "_1:1" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=4) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=4) - - self.vhost_user_pmd.quit() - dmas = 
self.generate_dmas_param(8) - core1 = self.vhost_core_list[1] - core2 = self.vhost_core_list[2] - core3 = self.vhost_core_list[3] - core4 = self.vhost_core_list[4] - core5 = self.vhost_core_list[5] - cbdma0 = self.cbdma_list[0] - cbdma1 = self.cbdma_list[1] - cbdma2 = self.cbdma_list[2] - cbdma3 = self.cbdma_list[3] - cbdma4 = self.cbdma_list[4] - cbdma5 = self.cbdma_list[5] - cbdma6 = self.cbdma_list[6] - cbdma7 = self.cbdma_list[7] - lcore_dma = ( - f"[lcore{core1}@{cbdma0},lcore{core1}@{cbdma7}," - + f"lcore{core2}@{cbdma1},lcore{core2}@{cbdma2},lcore{core2}@{cbdma3}," - + f"lcore{core3}@{cbdma2},lcore{core3}@{cbdma3},lcore{core3}@{cbdma4}," - f"lcore{core4}@{cbdma2},lcore{core4}@{cbdma3},lcore{core4}@{cbdma4},lcore{core4}@{cbdma5}," - f"lcore{core5}@{cbdma0},lcore{core5}@{cbdma1},lcore{core5}@{cbdma2},lcore{core5}@{cbdma3}," - f"lcore{core5}@{cbdma4},lcore{core5}@{cbdma5},lcore{core5}@{cbdma6},lcore{core5}@{cbdma7}]" - ) - vhost_eal_param = ( - "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas={}'".format(dmas) - ) - vhost_param = ( - " --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) - ) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci[0:9], - iova_mode="va", - ) - - for key, path in PACKED_RING_PATH.items(): - if key == "inorder_mergeable_path": - - mode = key + "_VA" + "_MN" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info("Restart host with {} path with {}".format(key, path)) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - if not self.check_2M_env: - self.vhost_user_pmd.quit() - dmas = self.generate_dmas_param(8) - vhost_eal_param = ( - "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas={}'".format( - dmas - ) - ) - vhost_param = " --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( - lcore_dma - ) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci[0:5], - iova_mode="pa", - ) - - for key, path in PACKED_RING_PATH.items(): - if key == "inorder_mergeable_path": - - mode = key + "_PA" + "_M>N" - self.mode_list.append(mode) - self.logger.info( - "Start virtio-user with {} path with {}".format(key, path) - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - mode += "_RestartVhost" - self.mode_list.append(mode) - self.logger.info( - "Restart host with {} path with {}".format(key, path) - ) - self.vhost_user_pmd.execute_cmd("start") - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - self.result_table_print() - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - self.handle_expected(mode_list=self.mode_list) - self.handle_results(mode_list=self.mode_list) - self.vhost_user_pmd.quit() - self.virtio_user_pmd.quit() - def send_imix_packets(self, mode): """ Send imix packet with packet generator and verify @@ -2015,7 +1283,7 @@ class TestVhostCbdma(TestCase): }, } pkt = Packet() - pkt.assign_layers(["ether", "ipv4", "raw"]) + pkt.assign_layers(["ether", "ipv4", "tcp", "raw"]) pkt.config_layers( [ 
("ether", {"dst": "%s" % self.virtio_mac}), @@ -2025,13 +1293,13 @@ class TestVhostCbdma(TestCase): ) pkt.save_pcapfile( self.tester, - "%s/multiqueuerandomip_%s.pcap" % (self.out_path, frame_size), + "%s/%s_%s.pcap" % (self.out_path, self.suite_name, frame_size), ) tgenInput.append( ( port, port, - "%s/multiqueuerandomip_%s.pcap" % (self.out_path, frame_size), + "%s/%s_%s.pcap" % (self.out_path, self.suite_name, frame_size), ) ) From patchwork Thu Aug 11 06:16:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 114825 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A2892A0548; Thu, 11 Aug 2022 08:20:20 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9B1D1427F2; Thu, 11 Aug 2022 08:20:20 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id E4BF940DDA for ; Thu, 11 Aug 2022 08:20:18 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1660198819; x=1691734819; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=pyF38jz886Q/h4xCx5YPx1W9ZMUiN6Z8NxO4uWtp5H0=; b=YGFLGcr07m05Z6AuqekloxCJJ/+6Sdbc08SkFJGd9hjtxga8ipmUsCbN hp9m95iEBixnTkxfqJfNP0/GyL49o97yYzC+e7oRZ3cHVSedwUKGkE4V1 AIS1mIfIHhtiFx8YXd5lgpz8fleRh1rt1l7Ppf5bfuFXII0wKpeudaMuB 7EO+seqAqMqbeGep4SnGDf8s+zC+Xk3Sye1tYmNNkwCCKEqyNffvbKuSy BTg6q15G/Vjs//rCMjw7AxbECbnIcMHnHWrVczOcP8CoXpjuddtPNj+uA P48icA0vzIhj6NnFboXvc2FLQnvL2vLwrVFUwWK3Gl2EXrjMhweSp2e0k A==; X-IronPort-AV: E=McAfee;i="6400,9594,10435"; a="290024389" X-IronPort-AV: E=Sophos;i="5.93,228,1654585200"; d="scan'208";a="290024389" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Aug 2022 23:20:18 -0700 X-IronPort-AV: E=Sophos;i="5.93,228,1654585200"; d="scan'208";a="665234557" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Aug 2022 23:20:16 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V2 3/3] conf/vhost_cbdma: modify testuite config by testcase changed Date: Thu, 11 Aug 2022 02:16:50 -0400 Message-Id: <20220811061650.386731-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Modify conf/vhost_cbdma.cfg by testcase changed. 
Signed-off-by: Wei Ling
---
 conf/vhost_cbdma.cfg | 248 ++++++++++++-------------------------
 1 file changed, 68 insertions(+), 180 deletions(-)

diff --git a/conf/vhost_cbdma.cfg b/conf/vhost_cbdma.cfg
index 08ff1a69..9dbdb77f 100644
--- a/conf/vhost_cbdma.cfg
+++ b/conf/vhost_cbdma.cfg
@@ -4,187 +4,75 @@
 test_parameters = {'imix': [1024]}
 test_duration = 20
 accepted_tolerance = 2
 expected_throughput = {
-    'test_perf_pvp_split_all_path_vhost_txq_1_to_1_cbdma': {
-        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_VA': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_PA': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_PA': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_PA': {'imix': {1024: 0.000}},
-        'mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_PA': {'imix': {1024: 0.000}},
-        'non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_PA': {'imix': {1024: 0.000}},
-        'vectorized_path_PA_RestartVhost': {'imix': {1024: 0.000}}},
-    'test_perf_pvp_split_all_path_multi_queues_vhost_txq_1_to_1_cbdma': {
-        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_VA': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_PA': {'imix': {1024: 0.000}},
-        'mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}}},
-    'test_perf_pvp_split_all_path_multi_queues_vhost_txq_M_2_1_cbdma': {
-        'inorder_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_1_lcore': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_3_lcore': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_3_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_8_lcore': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_8_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_PA_8_lcore': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_PA_8_lcore_RestartVhost': {'imix': {1024: 0.000}}},
-    'test_perf_pvp_split_all_path_vhost_txq_1_to_N_cbdma': {
-        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_VA': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_PA': {'imix': {1024: 0.000}},
-        'non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_PA': {'imix': {1024: 0.000}}},
-    'test_perf_pvp_split_all_path_multi_queues_vhost_txq_M_to_N_cbdma': {
-        'inorder_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_3dmas': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_8dmas': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_8dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_PA_8dmas': {'imix': {1024: 0.000}},
-        'vectorized_path_PA_8dmas_RestartVhost': {'imix': {1024: 0.000}}},
-    'test_perf_pvp_split_dynamic_queues_vhost_txq_M_to_N_cbdma': {
-        'inorder_mergeable_path_VA_without_cbdma': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_without_cbdma_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_1:1': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_1:1_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_M>N': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_M>N_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_PA_M>N': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_PA_M>N_RestartVhost': {'imix': {1024: 0.000}}},
-    'test_perf_pvp_packed_all_path_vhost_txq_1_to_1_cbdma': {
-        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_VA': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+    'test_perf_pvp_split_ring_all_path_multi_queues_vhost_async_operation_with_1_to_1': {
+        'inorder_mergeable_VA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_VA': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_VA': {'imix': {1024: 0.000}},
+        'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_VA': {'imix': {1024: 0.000}},
+        'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_VA': {'imix': {1024: 0.000}},
+        'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_PA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_split_ring_all_path_multi_queues_vhost_async_operation_with_M_to_1': {
+        'inorder_mergeable_VA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_VA': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_VA': {'imix': {1024: 0.000}},
+        'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_VA': {'imix': {1024: 0.000}},
+        'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_VA': {'imix': {1024: 0.000}},
+        'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_PA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_split_ring_dynamic_queue_number_vhost_async_operation_with_M_to_N': {
+        'inorder_mergeable_VA_1:N': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_without_CBDMA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_1:1': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_M:N': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_M:N_diff': {'imix': {1024: 0.000}},
+        'mergeable_VA_M:N_diff': {'imix': {1024: 0.000}},
+        'mergeable_PA_M:N_diff': {'imix': {1024: 0.000}},},
+    'test_perf_pvp_packed_ring_all_path_multi_queues_vhost_async_operation_with_1_to_1': {
+        'inorder_mergeable_VA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_VA': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_VA': {'imix': {1024: 0.000}},
+        'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_VA': {'imix': {1024: 0.000}},
+        'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_VA': {'imix': {1024: 0.000}},
+        'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}},
         'vectorized_path_not_power_of_2_VA': {'imix': {1024: 0.000}},
         'vectorized_path_not_power_of_2_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_PA': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}}},
-    'test_perf_pvp_packed_all_path_multi_queues_vhost_txq_1_to_1_cbdma': {
-        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_VA': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_PA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_packed_ring_all_path_multi_queues_vhost_async_operation_with_M_to_1': {
+        'inorder_mergeable_VA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_VA': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_VA': {'imix': {1024: 0.000}},
+        'mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_VA': {'imix': {1024: 0.000}},
+        'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_VA': {'imix': {1024: 0.000}},
+        'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}},
         'vectorized_path_not_power_of_2_VA': {'imix': {1024: 0.000}},
         'vectorized_path_not_power_of_2_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_PA': {'imix': {1024: 0.000}},
-        'mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}}},
-    'test_perf_pvp_packed_all_path_multi_queues_vhost_txq_M_to_1_cbdma': {
-        'inorder_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_1_lcore': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_not_power_of_2_VA_1_lcore': {'imix': {1024: 0.000}},
-        'vectorized_path_not_power_of_2_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_3_lcore': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_3_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_8_lcore': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_8_lcore_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_PA_8_lcore': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_PA_8_lcore_RestartVhost': {'imix': {1024: 0.000}}},
-    'test_perf_pvp_packed_all_path_vhost_txq_1_to_N_cbdma': {
-        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_VA': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_not_power_of_2_VA': {'imix': {1024: 0.000}},
-        'vectorized_path_not_power_of_2_VA_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_PA': {'imix': {1024: 0.000}},
-        'non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}}},
-    'test_perf_pvp_packed_all_path_multi_queues_vhost_txq_M_to_N_cbdma': {
-        'inorder_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
-        'mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
-        'non_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_3dmas': {'imix': {1024: 0.000}},
-        'vectorized_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_not_power_of_2_VA_3dmas': {'imix': {1024: 0.000}},
-        'vectorized_path_not_power_of_2_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_8dmas': {'imix': {1024: 0.000}},
-        'inorder_non_mergeable_path_VA_8dmas_RestartVhost': {'imix': {1024: 0.000}},
-        'vectorized_path_PA_8dmas': {'imix': {1024: 0.000}},
-        'vectorized_path_PA_8dmas_RestartVhost': {'imix': {1024: 0.000}}},
-    'test_perf_pvp_packed_dynamic_queues_vhost_txq_M_to_N_cbdma': {
-        'inorder_mergeable_path_VA_without_cbdma': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_without_cbdma_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_1:1': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_1:1_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_M>N': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_VA_M>N_RestartVhost': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_PA_M>N': {'imix': {1024: 0.000}},
-        'inorder_mergeable_path_PA_M>N_RestartVhost': {'imix': {1024: 0.000}}}}
-
+        'inorder_mergeable_PA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_packed_ring_dynamic_queue_number_vhost_async_operation_with_M_to_N': {
+        'inorder_mergeable_VA_1:N': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_without_CBDMA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_1:1': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_M:N': {'imix': {1024: 0.000}},
+        'inorder_mergeable_VA_M:N_diff': {'imix': {1024: 0.000}},
+        'mergeable_VA_M:N_diff': {'imix': {1024: 0.000}},
+        'mergeable_PA_M:N_diff': {'imix': {1024: 0.000}},},}
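For reference, the --lcore-dma mapping strings exercised by the dynamic-queue testcases above follow the pattern [lcoreX@dma-bdf,...], where one forwarding lcore may drive several CBDMA channels and one channel may be shared by several lcores (the M:N cases). A minimal sketch of building such a mapping; the lcore IDs and CBDMA BDFs below are placeholder values, not taken from a real setup:

    # Sketch only: build an M:N --lcore-dma string for dpdk-testpmd.
    lcores = [3, 4]                            # placeholder forwarding lcores
    cbdmas = ["0000:00:04.0", "0000:00:04.1"]  # placeholder CBDMA device BDFs
    pairs = [f"lcore{c}@{d}" for c in lcores for d in cbdmas]
    lcore_dma = "[" + ",".join(pairs) + "]"
    # e.g. --lcore-dma=[lcore3@0000:00:04.0,lcore3@0000:00:04.1,lcore4@0000:00:04.0,lcore4@0000:00:04.1]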