From patchwork Thu May 5 06:58:40 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110654
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V3 1/3] test_plans/vhost_cbdma_test_plan: modify testplan by DPDK command change
Date: Thu, 5 May 2022 06:58:40 +0000
Message-Id: <20220505065840.68545-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.34.1

v1:
As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path),
modify vhost_cbdma testplan by DPDK22.03 Lib change.

v2:
Modify the `Description` content in test_plan.

v3:
Fix WARNING info in test_plan.

Signed-off-by: Wei Ling
---
 test_plans/vhost_cbdma_test_plan.rst | 1409 ++++++++++++++++++++------
 1 file changed, 1113 insertions(+), 296 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index c8f8b8c5..47dae2fb 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -1,404 +1,1221 @@
-.. Copyright (c) <2021>, Intel Corporation
-   All rights reserved.
-
-   Redistribution and use in source and binary forms, with or without
-   modification, are permitted provided that the following conditions
-   are met:
-
-   - Redistributions of source code must retain the above copyright
-     notice, this list of conditions and the following disclaimer. 
- - - Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in - the documentation and/or other materials provided with the - distribution. - - - Neither the name of Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived - from this software without specific prior written permission. - - THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS - FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE - COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, - INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR - SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) - HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, - STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED - OF THE POSSIBILITY OF SUCH DAMAGE. +.. Copyright (c) <2022>, Intel Corporation + All rights reserved. + + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions + are met: + + - Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + + - Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided with the + distribution. + + - Neither the name of Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived + from this software without specific prior written permission. + + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS + FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE + COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, + INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES + (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR + SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED + OF THE POSSIBILITY OF SUCH DAMAGE. ========================================================== DMA-accelerated Tx operations for vhost-user PMD test plan ========================================================== -Overview +Description +=========== + +This document provides the test plan for testing Vhost asynchronous +data path with CBDMA driver in the PVP topology environment with testpmd. + +CBDMA is a kind of DMA engine, Vhost asynchronous data path leverages DMA devices +to offload memory copies from the CPU and it is implemented in an asynchronous way. +Linux kernel and DPDK provide CBDMA driver, no matter which driver is used, +DPDK DMA library is used in data-path to offload copies to CBDMA, and the only difference is which driver configures CBDMA. 
+It enables applications, like OVS, to save CPU cycles and hide memory copy overhead, thus achieving higher throughput.
+Vhost doesn't manage DMA devices; applications, like OVS, need to manage and configure the CBDMA devices.
+Applications need to tell vhost what CBDMA devices to use in every data path function call.
+This design gives applications the flexibility to dynamically use DMA channels in different
+function modules, not limited to vhost. In addition, vhost supports M:N mapping between vrings
+and DMA virtual channels. Specifically, one vring can use multiple different DMA channels
+and one DMA channel can be shared by multiple vrings at the same time.
+
+Note:
+1. When CBDMA devices are bound to the vfio driver, VA mode is the default and recommended.
+For PA mode, page by page mapping may exceed the IOMMU's max capability; it is better to use 1G guest hugepages.
+2. A DPDK local patch for the vhost PMD is needed when testing the Vhost asynchronous data path with testpmd.
+
+For more information about the dpdk-testpmd application, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+For the virtio-user vdev parameters, you can refer to the DPDK documents:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+
+Prerequisites
+=============
+
+Topology
 --------
+
 Test flow: TG-->NIC-->Vhost-->Virtio-->Vhost-->NIC-->TG
 
-This feature supports to offload large data movement in vhost enqueue operations
-from the CPU to the I/OAT(a DMA engine in Intel's processor) device for every queue.
-In addition, a queue can only use one I/OAT device, and I/OAT devices cannot be shared
-among vhost ports and queues. That is, an I/OAT device can only be used by one queue at
-a time. DMA devices(e.g.,CBDMA) used by queues are assigned by users; for a queue without
-assigning a DMA device, the PMD will leverages librte_vhost to perform vhost enqueue
-operations. Moreover, users cannot enable I/OAT acceleration for live-migration. Large
-copies are offloaded from the CPU to the DMA engine in an asynchronous manner. The CPU
-just submits copy jobs to the DMA engine and without waiting for DMA copy completion;
-there is no CPU intervention during DMA data transfer. By overlapping CPU
-computation and DMA copy, we can save precious CPU cycles and improve the overall
-throughput for vhost-user PMD based applications, like OVS. Due to startup overheads
-associated with DMA engines, small copies are performed by the CPU.
-DPDK 21.11 adds vfio support for DMA device in vhost. When DMA devices are bound to
-vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping
-may exceed IOMMU's max capability, better to use 1G guest hugepage.
-
-We introduce a new vdev parameter to enable DMA acceleration for Tx operations of queues:
-- dmas: This parameter is used to specify the assigned DMA device of a queue.
+Hardware
+--------
+ Supported NICs: ALL
+
+Software
+--------
+ Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+ # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=<dpdk build dir>
+ # ninja -C <dpdk build dir> -j 110
+ For example:
+ CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
+ ninja -C x86_64-native-linuxapp-gcc -j 110
+
+2. 
Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is a PCI device ID and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::

+ # ./usertools/dpdk-devbind.py -s
+
+ Network devices using kernel driver
+ ===================================
+ 0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+ DMA devices using kernel driver
+ ===============================
+ 0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+ 0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and CBDMA devices to vfio-pci::
+
+ # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+ # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+
+ For example, bind 1 NIC port and 2 CBDMA devices::
+ ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0
+ ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1
+
+2. Send imix packets [64,1518] to NIC by traffic generator::
+
+ The imix packets include the packet sizes [64, 128, 256, 512, 1024, 1518], and the format of the packets is as follows.
+ (An illustrative Scapy sketch of this packet layout is given after Test Case 2.)
+ +-------------+-------------+-------------+-------------+
+ | MAC         | MAC         | IPV4        | IPV4        |
+ | Src address | Dst address | Src address | Dst address |
+ |-------------|-------------|-------------|-------------|
+ | Random MAC  | Virtio mac  | Random IP   | Random IP   |
+ +-------------+-------------+-------------+-------------+
+ All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10.
+
+Test Case 1: PVP split ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
+---------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (For example, Trex) to test performance of split ring in each virtio path with 1 core and 1 queue
+when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1.
+Both iova as VA and PA mode have been tested.
+
+1. Bind 1 NIC port and 1 CBDMA device to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+ --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+ --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0]
+ testpmd> set fwd mac
+ testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1 \
+ -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+ testpmd> set fwd mac
+ testpmd> start
+
+4. Send imix packets [64,1518] from packet generator as common step2, and then check the throughput can get expected data::
+
+ testpmd> show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+ testpmd> stop
+
+6. Restart vhost port and send imix pkts again, then check the throughput can get expected data::
+
+ testpmd> start
+ testpmd> show port stats all
+
+7. 
Relaunch virtio-user with mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1 \ + -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +.. note:: + + Rx offload(s) are requested when using split ring non-mergeable path. So add the parameter "--enable-hw-vlan-strip". + +10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +11. Quit all testpmd and relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0] + testpmd> set fwd mac + testpmd> start + +12. Rerun steps 3-11. + +Test Case 2: PVP split ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels +---------------------------------------------------------------------------------------------------------------------------------------- +This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues +when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1. +Both iova as VA and PA mode have been tested. + +1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7] + testpmd> set fwd mac + testpmd> start + +3. 
Launch virtio-user with inorder mergeable path:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: + + testpmd> show port stats all + +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: + + testpmd> stop + +6. Restart vhost port and send imix pkts again, then check the throuhput can get expected data:: + + testpmd> start + testpmd> show port stats all + +7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8 \ + -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +11. Quit all testpmd and relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7] + testpmd> set fwd mac + testpmd> start + +12. Rerun step 7. 
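+
+The imix traffic described in common step 2 can be approximated, for illustration only, with a short Scapy
+sketch such as the one below when preparing streams for the traffic generator. The packet sizes, random
+source MAC/IP fields and the fixed virtio destination MAC are taken from common step 2; the helper functions
+and the output file name are hypothetical and are not part of the DTS framework or of this test plan's
+required steps::
+
+ from random import randint
+ from scapy.all import Ether, IP, Raw, wrpcap
+
+ IMIX_SIZES = [64, 128, 256, 512, 1024, 1518]   # frame sizes listed in common step 2
+ VIRTIO_MAC = "00:11:22:33:44:10"               # fixed destination MAC of the virtio-user port
+
+ def rand_mac():
+     # locally administered unicast source MAC
+     return "02:00:00:%02x:%02x:%02x" % (randint(0, 255), randint(0, 255), randint(0, 255))
+
+ def rand_ip():
+     return "10.%d.%d.%d" % (randint(0, 255), randint(0, 255), randint(1, 254))
+
+ pkts = []
+ for size in IMIX_SIZES:
+     pkt = Ether(src=rand_mac(), dst=VIRTIO_MAC) / IP(src=rand_ip(), dst=rand_ip())
+     pad = size - len(pkt)                      # pad the frame up to the target size
+     pkts.append(pkt / Raw(b"\x00" * pad))
+ wrpcap("imix.pcap", pkts)                      # one packet per imix size
+
+The resulting pcap (or an equivalent stream definition) can then be loaded into the traffic generator,
+for example Trex, to produce the imix flow used throughout the test cases below.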
+
+Test Case 3: PVP split ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels
+----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (For example, Trex) to test performance of split ring in each virtio path with multi-queues
+when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:1.
+Both iova as VA and PA mode have been tested.
+
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+ --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+ --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0]
+ testpmd> set fwd mac
+ testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8 \
+ -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+ testpmd> set fwd mac
+ testpmd> start
+
+4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+
+ testpmd> show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+ testpmd> stop
+
+6. Restart vhost port and send imix pkts again, then check the throughput can get expected data::
+
+ testpmd> start
+ testpmd> show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8 \
+ -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+ testpmd> set fwd mac
+ testpmd> start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8 \
+ -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+ testpmd> set fwd mac
+ testpmd> start
+
+9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8 \
+ -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+ testpmd> set fwd mac
+ testpmd> start
+
+10. Relaunch virtio-user with vectorized path, then repeat step 4-6::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \
+ -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+ testpmd> set fwd mac
+ testpmd> start
+
+11. 
Quit all testpmd and relaunch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0] + testpmd> set fwd mac + testpmd> start + +12. Rerun steps 4-6. + +13. Quit all testpmd and relaunch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] + testpmd> set fwd mac + testpmd> start + +14. Rerun steps 7. + +15. Quit all testpmd and relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] + testpmd> set fwd mac + testpmd> start + +16. Rerun steps 7. + +Test Case 4: PVP split ring all path vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels +--------------------------------------------------------------------------------------------------------------------------- +This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path when vhost uses +the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:N. +Both iova as VA and PA mode have been tested. + +1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + testpmd> set fwd mac + testpmd> start + +3. Launch virtio-user with inorder mergeable path:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: + + testpmd> show port stats all + +5. 
Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: + + testpmd> stop + +6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: + + testpmd> start + testpmd> show port stats all + +7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1 \ + -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +11. Quit all testpmd and relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + testpmd> set fwd mac + testpmd> start + +12. Rerun steps 9. + +Test Case 5: PVP split ring all path multi-queues vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels +---------------------------------------------------------------------------------------------------------------------------------------- +This case uses testpmd and Traffic Generator(For example, Trex) to test performance of split ring in each virtio path with multi-queues +when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:N. +Both iova as VA and PA mode have been tested. + +1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. + +2. 
Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + testpmd> set fwd mac + testpmd> start + +3. Launch virtio-user with inorder mergeable path:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: + + testpmd> show port stats all + +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: + + testpmd> stop + +6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: + + testpmd> start + testpmd> show port stats all + +7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8 \ + -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +11. 
Quit all testpmd and relaunch vhost by below command::

+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+ -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+ --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+ --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+ testpmd> set fwd mac
+ testpmd> start
+
+12. Rerun step 8.
+
+13. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+ -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+ --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+ --iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+ testpmd> set fwd mac
+ testpmd> start
+
+14. Rerun step 10.
+
+Test Case 6: PVP split ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels
+---------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (For example, Trex) to test performance of split ring when vhost uses the asynchronous enqueue operations
+and whether vhost-user can work well when the queue number changes dynamically.
+Both iova as VA and PA mode have been tested.
+
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \
+ --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1' \
+ --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024
+ testpmd> set fwd mac
+ testpmd> start
+
+3. Launch virtio-user by below command::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
+ -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+ testpmd> set fwd mac
+ testpmd> start
+
+4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+
+ testpmd> show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+ testpmd> stop
+
+6. Restart vhost port and send imix pkts again, then check the throughput can get expected data::
+
+ testpmd> start
+ testpmd> show port stats all
+
+7. 
Quit and relaunch vhost with 1:1 mapping between vrings and CBDMA virtual channels, then repeat step 4-6::

+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+ -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+ --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \
+ --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3]
+ testpmd> set fwd mac
+ testpmd> start
+
+8. Quit and relaunch vhost with M:N(1:N;M<N) mapping between vrings and CBDMA virtual channels, then repeat step 4-6::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+ -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+ --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \
+ --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.7,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore14@0000:00:04.2,lcore14@0000:00:04.3,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore15@0000:00:04.0,lcore15@0000:00:04.1,lcore15@0000:00:04.2,lcore15@0000:00:04.3,lcore15@0000:00:04.4,lcore15@0000:00:04.5,lcore15@0000:00:04.6,lcore15@0000:00:04.7]
+ testpmd> set fwd mac
+ testpmd> start
+
+9. Quit and relaunch vhost with a different M:N(M:1;M>N) mapping between vrings and CBDMA virtual channels, then repeat step 4-6::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+ -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+ --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+ --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+ testpmd> set fwd mac
+ testpmd> start
+
+10. Quit and relaunch vhost with iova=pa by below command, then repeat step 4-6::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+ -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
+ --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+ --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+ testpmd> set fwd mac
+ testpmd> start
+
+Test Case 7: PVP packed ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
+----------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (For example, Trex) to test performance of packed ring in each virtio path with 1 core and 1 queue
+when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1.
+Both iova as VA and PA mode have been tested.
+
+1. Bind 1 NIC port and 1 CBDMA device to vfio-pci, as common step 1.
+
+2. 
Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0] + testpmd> set fwd mac + testpmd> start + +3. Launch virtio-user with inorder mergeable path:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: + + testpmd> show port stats all + +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: + + testpmd> stop + +6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: + + testpmd> start + testpmd> show port stats all + +7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +.. note:: + + If building and running environment support (AVX512 || NEON) && in-order feature is negotiated && Rx mergeable + is not negotiated && TCP_LRO Rx offloading is disabled && vectorized option enabled, packed virtqueue vectorized Rx path will be selected. + +11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025 + testpmd> set fwd mac + testpmd> start + +12. 
Quit all testpmd and relaunch vhost with iova=pa by below command::

+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+ --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+ --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0]
+ testpmd> set fwd mac
+ testpmd> start
+
+13. Rerun steps 3-6.
+
+Test Case 8: PVP packed ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels
+-----------------------------------------------------------------------------------------------------------------------------------------
+This case uses testpmd and Traffic Generator (For example, Trex) to test performance of packed ring in each virtio path with multi-queues
+when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:1.
+Both iova as VA and PA mode have been tested.
+
+1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \
+ -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+ --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \
+ --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \
+ --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7]
+ testpmd> set fwd mac
+ testpmd> start
+
+3. Launch virtio-user with inorder mergeable path::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \
+ -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+ testpmd> set fwd mac
+ testpmd> start
+
+4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data::
+
+ testpmd> show port stats all
+
+5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log::
+
+ testpmd> stop
+
+6. Restart vhost port and send imix pkts again, then check the throughput can get expected data::
+
+ testpmd> start
+ testpmd> show port stats all
+
+7. Relaunch virtio-user with mergeable path, then repeat step 4-6::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \
+ -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+ testpmd> set fwd mac
+ testpmd> start
+
+8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6::
+
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \
+ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \
+ -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+ testpmd> set fwd mac
+ testpmd> start
+
+9. 
Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start + +11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025 + testpmd> set fwd mac + testpmd> start + +12. Quit all testpmd and relaunch vhost with iova=pa by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3,lcore15@0000:00:04.4,lcore16@0000:00:04.5,lcore17@0000:00:04.6,lcore18@0000:00:04.7] + testpmd> set fwd mac + testpmd> start + +13. Rerun step 7. + +Test Case 9: PVP packed ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels +----------------------------------------------------------------------------------------------------------------------------------------- +This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues +when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:1. +Both iova as VA and PA mode have been tested. + +1. Bind 1 NIC port and 1 CBDMA device to vfio-pci, as common step 1. + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0] + testpmd> set fwd mac + testpmd> start -Here is an example: -./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 \ ---vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0] \ ---iova=va -- -i' +3. 
Launch virtio-user with inorder mergeable path:: -Test Case 1: PVP split ring all path vhost enqueue operations with cbdma -======================================================================== + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start -Packet pipeline: -================ -TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG +4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: -1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, then launch vhost by below command:: + testpmd> show port stats all - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: -2. Launch virtio-user with inorder mergeable path:: + testpmd> stop - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start +6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: -3. Send imix packets [64,1518] from packet generator, check the throughput can get expected data, restart vhost port and send imix pkts again, check get same throuhput:: + testpmd> start + testpmd> show port stats all - testpmd>show port stats all - testpmd>stop - testpmd>start - testpmd>show port stats all +7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: -4. Relaunch virtio-user with mergeable path, then repeat step 3:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: -5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: -6. 
Relaunch virtio-user with non-mergeable path, then repeat step 3:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \ - -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start +10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: -7. Relaunch virtio-user with vector_rx path, then repeat step 3:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start +11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6:: -8. Quit all testpmd and relaunch vhost with iova=pa by below command:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025 + testpmd> set fwd mac + testpmd> start - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' \ - --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start +12. Quit all testpmd and relaunch vhost by below command:: -9. Rerun steps 2-7. + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0] + testpmd> set fwd mac + testpmd> start -Test Case 2: PVP split ring dynamic queue number vhost enqueue operations with cbdma -===================================================================================== +13. Rerun steps 3-6. -1. Bind 8 CBDMA ports and 1 NIC port to vfio-pci, then launch vhost by below command:: +14. 
Quit all testpmd and relaunch vhost by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 - >set fwd mac - >start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] + testpmd> set fwd mac + testpmd> start -2. Launch virtio-user by below command:: +15. Rerun steps 7. - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 - >set fwd mac - >start +16. Quit all testpmd and relaunch vhost with iova=pa by below command:: -3. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target. + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] + testpmd> set fwd mac + testpmd> start -4. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log. +17. Rerun steps 8. -5. Quit and relaunch vhost with 4 queues w/ cbdma and 4 queues w/o cbdma:: +Test Case 10: PVP packed ring all path vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels +----------------------------------------------------------------------------------------------------------------------------- +This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path when vhost uses +the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is 1:N. +Both iova as VA and PA mode have been tested. - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3]' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 - >set fwd mac - >start +1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. -6. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target. +2. Launch vhost by below command:: -7. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log. 
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + testpmd> set fwd mac + testpmd> start -8. Quit and relaunch vhost with 8 queues w/ cbdma:: +3. Launch virtio-user with inorder mergeable path:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 - >set fwd mac - >start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start -9. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target. +4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: -10. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log. + testpmd> show port stats all -11. Quit and relaunch vhost with iova=pa, 6 queues w/ cbdma and 2 queues w/o cbdma:: +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5]' \ - --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 - >set fwd mac - >start + testpmd> stop -12. Send imix packets[64,1518] from packet generator with random ip, check perforamnce can get target. +6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: -13. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log. + testpmd> start + testpmd> show port stats all -Test Case 3: PVP packed ring all path vhost enqueue operations with cbdma -========================================================================= +7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: -Packet pipeline: -================ -TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start -1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, then launch vhost by below command:: +8. 
Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:80:04.0]' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start -2. Launch virtio-user with inorder mergeable path:: +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start -3. Send imix packets [64,1518] from packet generator, check the throughput can get expected data, restart vhost port and send imix pkts again, check get same throuhput:: +10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: - testpmd>show port stats all - testpmd>stop - testpmd>start - testpmd>show port stats all + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start -4. Relaunch virtio-user with mergeable path, then repeat step 3:: +11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 3:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025 + testpmd> set fwd mac + testpmd> start -5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3:: +12. 
Quit all testpmd and relaunch vhost with iova=pa by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=1,dmas=[txq0],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + testpmd> set fwd mac + testpmd> start -6. Relaunch virtio-user with non-mergeable path, then repeat step 3:: +13. Rerun steps 9. - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start +Test Case 11: PVP packed ring all path multi-queues vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels +------------------------------------------------------------------------------------------------------------------------------------------ +This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring in each virtio path with multi-queues +when vhost uses the asynchronous enqueue operations and the mapping between vrings and CBDMA virtual channels is M:N. +Both iova as VA and PA mode have been tested. -7. Relaunch virtio-user with vectorized path, then repeat step 3:: +1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1 \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start +2. Launch vhost by below command:: -8. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 3:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + testpmd> set fwd mac + testpmd> start - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \ - -- -i --nb-cores=1 --txd=1025 --rxd=1025 - >set fwd mac - >start +3. Launch virtio-user with inorder mergeable path:: -9. 
Quit all testpmd and relaunch vhost with iova=pa by below command:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:80:04.0]' \ - --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 - >set fwd mac - >start +4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: -10. Rerun steps 2-8. + testpmd> show port stats all -Test Case 4: PVP packed ring dynamic queue number vhost enqueue operations with cbdma -===================================================================================== +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: -1. Bind 8 CBDMA ports and 1 NIC port to vfio-pci, then launch vhost by below command:: + testpmd> stop - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 - >set fwd mac - >start +6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: -2. Launch virtio-user by below command:: + testpmd> start + testpmd> show port stats all - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,server=1,packed_vq=1 \ - -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 - >set fwd mac - >start +7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: -3. Send imix packets from packet generator with random ip, check perforamnce can get target. + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start -4. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log. +8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: -5. Quit and relaunch vhost with 4 queues w/ cbdma and 4 queues w/o cbdma:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3]' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 - >set fwd mac - >start +9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: -6. Send imix packets from packet generator with random ip, check perforamnce can get target. 
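The per-queue check in step 5 relies on the statistics testpmd prints when the vhost port is stopped. A rough sketch of how those counters can be extracted (the output format is an assumption here; the exact per-queue header differs between testpmd versions, e.g. "Queue= N" vs "Port= P/Queue= N")::

    import re

    def queue_counters(stop_output, queues):
        # Parse testpmd 'stop' output and return (rx, tx) packet counts
        # for each queue; both values must be non-zero for the check to pass.
        counts = []
        for q in range(queues):
            idx = stop_output.find("Queue= {}".format(q))
            rx = re.search(r"RX-packets:\s*(\d+)", stop_output[idx:])
            tx = re.search(r"TX-packets:\s*(\d+)", stop_output[idx:])
            counts.append((int(rx.group(1)), int(tx.group(1))))
        return counts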
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start -7. Stop vhost port, check vhost RX and TX direction both exist packtes in 4 queues from vhost log. +10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: -8. Quit and relaunch vhost with 8 queues w/ cbdma:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8, \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 - >set fwd mac - >start +11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6:: -9. Send imix packets from packet generator with random ip, check perforamnce can get target. + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \ + -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025 + testpmd> set fwd mac + testpmd> start -10. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log. +12. Quit all testpmd and relaunch vhost by below command:: -11. Quit and relaunch vhost with iova=pa, 6 queues w/ cbdma and 2 queues w/o cbdma:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=va -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + testpmd> set fwd mac + testpmd> start - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 --file-prefix=vhost \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5]' \ - --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8 - >set fwd mac - >start +13. Rerun steps 7. -12. Send imix packets from packet generator with random ip, check perforamnce can get target. +14. Quit all testpmd and relaunch vhost with iova=pa by below command:: -13. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log. 
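Note that the iova=pa relaunch below, like the other PA-mode steps in this plan, is only exercised when the host's default hugepage size is not 2 MB; the test suite skips the PA variants otherwise. A minimal sketch of that environment check (reading /proc/meminfo is an assumption, not necessarily how the suite detects it)::

    def default_hugepage_is_2m():
        # PA-mode (iova=pa) steps are skipped when the default hugepage
        # size is 2048 kB.
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("Hugepagesize"):
                    return line.split()[1] == "2048"
        return False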
+ # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7],dma_ring_size=2048' \ + --iova=pa -- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7] + testpmd> set fwd mac + testpmd> start -Test Case 5: loopback split ring large chain packets stress test with cbdma enqueue -==================================================================================== +15. Rerun steps 9. -Packet pipeline: -================ -Vhost <--> Virtio +Test Case 12: PVP packed ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels +----------------------------------------------------------------------------------------------------------------------------------------- +This case uses testpmd and Traffic Generator(For example, Trex) to test performance of packed ring when vhost uses the asynchronous enqueue operations +and if the vhost-user can work well when the queue number dynamic change. Both iova as VA and PA mode have been tested. +Both iova as VA and PA mode have been tested. -1. Bind 1 CBDMA channel to vfio-pci and launch vhost:: +1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \ - --iova=va -- -i --nb-cores=1 --mbuf-size=65535 +2. Launch vhost by below command:: -2. Launch virtio and start testpmd:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1' \ + --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1, \ - mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048 \ - -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1 - >start +3. Launch virtio-user by below command:: -3. Send large packets from vhost, check virtio can receive packets:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost_net0,mrg_rxbuf=1,in_order=1,queues=1,server=1,packed_vq=1 \ + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 + testpmd> set fwd mac + testpmd> start - testpmd> vhost enable tx all - testpmd> set txpkts 65535,65535,65535,65535,65535 - testpmd> start tx_first 32 - testpmd> show port stats all +4. Send imix packets [64,1518] from packet generator as common step2, and check the throughput can get expected data:: -4. Quit all testpmd and relaunch vhost with iova=pa:: + testpmd> show port stats all - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \ - --iova=pa -- -i --nb-cores=1 --mbuf-size=65535 +5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: -5. Rerun steps 2-3. 
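Step 4 above sends an imix of 64- and 1518-byte frames from the traffic generator before the port is stopped in step 5 below. A tiny Scapy sketch of such a stream (Scapy usage, MAC and IP addresses are assumptions; the plan itself only fixes the two frame sizes)::

    import random
    from scapy.all import Ether, IP, UDP, Raw

    def build_imix(dst_mac, count=100):
        # 64B and 1518B frames with randomized source IPs.
        pkts = []
        for _ in range(count):
            size = random.choice([64, 1518])
            src = "10.{}.{}.{}".format(*(random.randint(1, 254) for _ in range(3)))
            base = Ether(dst=dst_mac) / IP(src=src, dst="192.168.0.1") / UDP()
            pad = max(0, size - len(base) - 4)  # leave 4 bytes for the FCS
            pkts.append(base / Raw(load=b"\x00" * pad))
        return pkts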
+ testpmd> stop -Test Case 6: loopback packed ring large chain packets stress test with cbdma enqueue -==================================================================================== +6. Restart vhost port and send imix pkts again, then check the throught can get expected data:: -Packet pipeline: -================ -Vhost <--> Virtio + testpmd> start + testpmd> show port stats all -1. Bind 1 CBDMA channel to vfio-pci and launch vhost:: +7. Quit and relaunch vhost with 1:1 mapping between vrings and CBDMA virtual channels, then repeat step 4-6:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \ - --iova=va -- -i --nb-cores=1 --mbuf-size=65535 + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3]' \ + --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3] + testpmd> set fwd mac + testpmd> start -2. Launch virtio and start testpmd:: +9. Quit and relaunch vhost with M:N(1:N;Mstart + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7]' \ + --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.7,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore14@0000:00:04.2,lcore14@0000:00:04.3,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore15@0000:00:04.0,lcore15@0000:00:04.1,lcore15@0000:00:04.2,lcore15@0000:00:04.3,lcore15@0000:00:04.4,lcore15@0000:00:04.5,lcore15@0000:00:04.6,lcore15@0000:00:04.7] + testpmd> set fwd mac + testpmd> start -3. Send large packets from vhost, check virtio can receive packets:: +11. Quit and relaunch vhost with diff M:N(M:1;M>N) mapping between vrings and CBDMA virtual channels, then repeat step 4-6:: - testpmd> vhost enable tx all - testpmd> set txpkts 65535,65535,65535,65535,65535 - testpmd> start tx_first 32 - testpmd> show port stats all + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \ + --iova=va -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2] + testpmd> set fwd mac + testpmd> start -4. Quit all testpmd and relaunch vhost with iova=pa:: +13. 
Quit and relaunch vhost with iova=pa by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' --iova=pa -- -i --nb-cores=1 --mbuf-size=65535 + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ + -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ + --vdev 'net_vhost0,iface=/tmp/vhost_net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \ + --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ + --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2] + testpmd> set fwd mac + testpmd> start -5. Rerun steps 2-3. +14. Rerun step 4-6. From patchwork Thu May 5 06:58:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 110655 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 78BBCA00C4; Thu, 5 May 2022 08:59:19 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7169D40E2D; Thu, 5 May 2022 08:59:19 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 8758040042 for ; Thu, 5 May 2022 08:59:17 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1651733957; x=1683269957; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=Wpfa+tlQRZb4+dY5PJRtPXiAGdWzPlJG+XOhBf19+xI=; b=YpB9Sszlyo2/SoJS0whMpkZYqbzhqsUfD+ZiDqbO4fjKDa5c4NP+IYGb ItrYgbClLO5idb2Cy8QEOQPVog5L1ZplDxopxTHCnaOkkXjd29rxhP6dc FFlBkNpNmeSUU+BnoYupqur0GQfWCyWQHXslISlBiPEx4GMBUIn3CpV1S SF/yA4GSfRLqZyIImvK6L0wG3QENNcrSBpnZ4oVLJtFRdZjd04hpOS0Ws jHGdxQouPLLgKgT/LfZNSudKweGnRPePHOasCF4SeI9MNnpxG5nuPyA1o wfxk3rWKdQJxMp+vwNthMtVpIhlbt6UNGOcn5pIw2rYRb5RVNYsvB5gRW w==; X-IronPort-AV: E=McAfee;i="6400,9594,10337"; a="266865347" X-IronPort-AV: E=Sophos;i="5.91,200,1647327600"; d="scan'208";a="266865347" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 May 2022 23:59:16 -0700 X-IronPort-AV: E=Sophos;i="5.91,200,1647327600"; d="scan'208";a="891211470" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 May 2022 23:59:12 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V3 2/3] tests/vhost_cbdma: modify testsuite by DPDK command change Date: Thu, 5 May 2022 06:58:53 +0000 Message-Id: <20220505065853.68575-1-weix.ling@intel.com> X-Mailer: git-send-email 2.34.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), modify vhost_cbdma testsuite by DPDK22.03 Lib change. 
Signed-off-by: Wei Ling --- tests/TestSuite_vhost_cbdma.py | 2567 +++++++++++++++++++++++--------- 1 file changed, 1887 insertions(+), 680 deletions(-) diff --git a/tests/TestSuite_vhost_cbdma.py b/tests/TestSuite_vhost_cbdma.py index f08d7cc3..fd546584 100644 --- a/tests/TestSuite_vhost_cbdma.py +++ b/tests/TestSuite_vhost_cbdma.py @@ -1,6 +1,6 @@ # BSD LICENSE # -# Copyright(c) <2019> Intel Corporation. +# Copyright(c) <2022> Intel Corporation. # All rights reserved. # # Redistribution and use in source and binary forms, with or without @@ -29,21 +29,10 @@ # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -""" -DPDK Test suite. -We introduce a new vdev parameter to enable DMA acceleration for Tx -operations of queues: - - dmas: This parameter is used to specify the assigned DMA device of - a queue. - -Here is an example: - $ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 \ - --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]' -""" + import json import os import re -import time from copy import deepcopy import framework.rst as rst @@ -53,23 +42,40 @@ from framework.pmd_output import PmdOutput from framework.settings import HEADER_SIZE, UPDATE_EXPECTED, load_global_setting from framework.test_case import TestCase +SPLIT_RING_PATH = { + "inorder_mergeable_path": "mrg_rxbuf=1,in_order=1", + "mergeable_path": "mrg_rxbuf=1,in_order=0", + "inorder_non_mergeable_path": "mrg_rxbuf=0,in_order=1", + "non_mergeable_path": "mrg_rxbuf=0,in_order=0", + "vectorized_path": "mrg_rxbuf=0,in_order=0,vectorized=1", +} + +PACKED_RING_PATH = { + "inorder_mergeable_path": "mrg_rxbuf=1,in_order=1,packed_vq=1", + "mergeable_path": "mrg_rxbuf=1,in_order=0,packed_vq=1", + "inorder_non_mergeable_path": "mrg_rxbuf=0,in_order=1,packed_vq=1", + "non_mergeable_path": "mrg_rxbuf=0,in_order=0,packed_vq=1", + "vectorized_path": "mrg_rxbuf=0,in_order=0,vectorized=1,packed_vq=1", + "vectorized_path_not_power_of_2": "mrg_rxbuf=0,in_order=0,vectorized=1,packed_vq=1", +} + + +class TestVhostCbdma(TestCase): -class TestVirTioVhostCbdma(TestCase): def set_up_all(self): self.dut_ports = self.dut.get_ports() self.number_of_ports = 1 self.vhost_user = self.dut.new_session(suite="vhost-user") self.virtio_user = self.dut.new_session(suite="virtio-user") - self.virtio_user1 = self.dut.new_session(suite="virtio-user1") - self.pmdout_vhost_user = PmdOutput(self.dut, self.vhost_user) - self.pmdout_virtio_user = PmdOutput(self.dut, self.virtio_user) + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.virtio_user_pmd = PmdOutput(self.dut, self.virtio_user) self.virtio_mac = "00:01:02:03:04:05" self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] self.pci_info = self.dut.ports_info[0]["pci"] - self.socket = self.dut.get_numa_id(self.dut_ports[0]) - self.cores = self.dut.get_core_list("all", socket=self.socket) - self.cbdma_dev_infos = [] self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket) + self.vhost_core_list = self.cores_list[0:9] + self.virtio_core_list = self.cores_list[9:11] self.out_path = "/tmp/%s" % self.suite_name out = self.tester.send_expect("ls -d %s" % self.out_path, "# ") if "No such file or directory" in out: @@ -84,11 +90,7 @@ class TestVirTioVhostCbdma(TestCase): """ Run before each test case. 
""" - self.table_header = ["Frame"] - self.table_header.append("Mode/RXD-TXD") - self.used_cbdma = [] - self.table_header.append("Mpps") - self.table_header.append("% linerate") + self.table_header = ["Frame", "Mode/RXD-TXD", "Mpps", "% linerate"] self.result_table_create(self.table_header) self.test_parameters = self.get_suite_cfg()["test_parameters"] self.test_duration = self.get_suite_cfg()["test_duration"] @@ -101,10 +103,13 @@ class TestVirTioVhostCbdma(TestCase): self.dut.send_expect("rm -rf /tmp/s0", "#") self.mode_list = [] - def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num): + def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num, allow_diff_socket=False): """ - get all cbdma ports + get and bind cbdma ports into DPDK driver """ + self.all_cbdma_list = [] + self.cbdma_list = [] + self.cbdma_str = "" out = self.dut.send_expect( "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30 ) @@ -119,62 +124,91 @@ class TestVirTioVhostCbdma(TestCase): cur_socket = 1 else: cur_socket = 0 - if self.ports_socket == cur_socket: - self.cbdma_dev_infos.append(pci_info.group(1)) + if allow_diff_socket: + self.all_cbdma_list.append(pci_info.group(1)) + else: + if self.ports_socket == cur_socket: + self.all_cbdma_list.append(pci_info.group(1)) self.verify( - len(self.cbdma_dev_infos) >= cbdma_num, - "There no enough cbdma device to run this suite", + len(self.all_cbdma_list) >= cbdma_num, "There no enough cbdma device" ) - self.used_cbdma = self.cbdma_dev_infos[0:cbdma_num] - self.device_str = " ".join(self.used_cbdma) + self.cbdma_list = self.all_cbdma_list[0:cbdma_num] + self.cbdma_str = " ".join(self.cbdma_list) self.dut.send_expect( "./usertools/dpdk-devbind.py --force --bind=%s %s" - % (self.drivername, self.device_str), + % (self.drivername, self.cbdma_str), "# ", 60, ) + @staticmethod + def generate_dmas_param(queues): + das_list = [] + for i in range(queues): + das_list.append("txq{}".format(i)) + das_param = "[{}]".format(";".join(das_list)) + return das_param + + @staticmethod + def generate_lcore_dma_param(cbdma_list, core_list): + group_num = int(len(cbdma_list) / len(core_list)) + lcore_dma_list = [] + if len(cbdma_list) == 1: + for core in core_list: + lcore_dma_list.append("lcore{}@{}".format(core, cbdma_list[0])) + elif len(core_list) == 1: + for cbdma in cbdma_list: + lcore_dma_list.append("lcore{}@{}".format(core_list[0], cbdma)) + else: + for cbdma in cbdma_list: + core_list_index = int(cbdma_list.index(cbdma) / group_num) + lcore_dma_list.append( + "lcore{}@{}".format(core_list[core_list_index], cbdma) + ) + lcore_dma_param = "[{}]".format(",".join(lcore_dma_list)) + return lcore_dma_param + def bind_cbdma_device_to_kernel(self): - if self.device_str is not None: - self.dut.send_expect("modprobe ioatdma", "# ") - self.dut.send_expect( - "./usertools/dpdk-devbind.py -u %s" % self.device_str, "# ", 30 - ) - self.dut.send_expect( - "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" - % self.device_str, - "# ", - 60, - ) + self.dut.send_expect("modprobe ioatdma", "# ") + self.dut.send_expect( + "./usertools/dpdk-devbind.py -u %s" % self.cbdma_str, "# ", 30 + ) + self.dut.send_expect( + "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" % self.cbdma_str, + "# ", + 60, + ) + + def get_vhost_port_num(self): + out = self.vhost_user.send_expect("show port summary all", "testpmd> ", 60) + port_num = re.search("Number of available ports:\s*(\d*)", out) + return int(port_num.group(1)) - def check_packets_of_each_queue(self, queue_list): + def 
check_each_queue_of_port_packets(self, queues=0): """ - check each queue has receive packets + check each queue of each port has receive packets """ - out = self.vhost_user.send_expect("stop", "testpmd> ", 60) - for queue_index in queue_list: - queue = "Queue= %d" % queue_index - index = out.find(queue) - rx = re.search("RX-packets:\s*(\d*)", out[index:]) - tx = re.search("TX-packets:\s*(\d*)", out[index:]) - rx_packets = int(rx.group(1)) - tx_packets = int(tx.group(1)) - self.verify( - rx_packets > 0 and tx_packets > 0, - "The queue %d rx-packets or tx-packets is 0 about " % queue_index - + "rx-packets:%d, tx-packets:%d" % (rx_packets, tx_packets), - ) - self.vhost_user.send_expect("clear port stats all", "testpmd> ", 30) - self.vhost_user.send_expect("start", "testpmd> ", 30) - - def check_port_stats_result(self, session): - out = session.send_expect("show port stats all", "testpmd> ", 30) - self.result_first = re.findall(r"RX-packets: (\w+)", out) - self.result_secondary = re.findall(r"TX-packets: (\w+)", out) - self.verify( - int(self.result_first[0]) > 1 and int(self.result_secondary[0]) > 1, - "forward packets no correctly", - ) + out = self.vhost_user_pmd.execute_cmd("stop") + port_num = self.get_vhost_port_num() + for port in range(port_num): + for queue in range(queues): + if queues > 0: + reg = "Port= %d/Queue= %d" % (port, queue) + else: + reg = "Forward statistics for port {}".format(port) + index = out.find(reg) + rx = re.search("RX-packets:\s*(\d*)", out[index:]) + tx = re.search("TX-packets:\s*(\d*)", out[index:]) + rx_packets = int(rx.group(1)) + tx_packets = int(tx.group(1)) + self.verify( + rx_packets > 0 and tx_packets > 0, + "The port {}/queue {} rx-packets or tx-packets is 0 about ".format( + port, queue + ) + + "rx-packets: {}, tx-packets: {}".format(rx_packets, tx_packets), + ) + self.vhost_user_pmd.execute_cmd("start") @property def check_2M_env(self): @@ -183,650 +217,1900 @@ class TestVirTioVhostCbdma(TestCase): ) return True if out == "2048" else False - def launch_testpmd_as_vhost_user( - self, - command, - cores="Default", - dev="", - ports="", - iova_mode="pa", - set_pmd_param=True, - ): - if iova_mode: - iova_parm = "--iova=" + iova_mode - else: - iova_parm = "" - self.pmdout_vhost_user.start_testpmd( - cores=cores, - param=command, - vdevs=[dev], - ports=ports, - prefix="vhost", - eal_param=iova_parm, - ) - if set_pmd_param: - self.vhost_user.send_expect("set fwd mac", "testpmd> ", 30) - self.vhost_user.send_expect("start", "testpmd> ", 30) - - def launch_testpmd_as_virtio_user( - self, command, cores="Default", dev="", set_pmd_param=True, eal_param="" + def start_vhost_testpmd( + self, cores="Default", param="", eal_param="", ports="", iova_mode="va" ): + eal_param += " --iova=" + iova_mode + self.vhost_user_pmd.start_testpmd( + cores=cores, param=param, eal_param=eal_param, ports=ports, prefix="vhost" + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + + def start_virtio_testpmd(self, cores="Default", param="", eal_param=""): if self.check_2M_env: eal_param += " --single-file-segments" - self.pmdout_virtio_user.start_testpmd( - cores, - command, - vdevs=[dev], - no_pci=True, - prefix="virtio", - eal_param=eal_param, - ) - if set_pmd_param: - self.virtio_user.send_expect("set fwd mac", "testpmd> ", 30) - self.virtio_user.send_expect("start", "testpmd> ", 30) - self.virtio_user.send_expect("show port info all", "testpmd> ", 30) - - def diff_param_launch_send_and_verify( - self, mode, params, dev, cores, 
eal_param="", is_quit=True, launch_virtio=True - ): - if launch_virtio: - self.launch_testpmd_as_virtio_user( - params, cores, dev=dev, eal_param=eal_param - ) - self.send_and_verify(mode) - if is_quit: - self.virtio_user.send_expect("quit", "# ") - time.sleep(3) + self.virtio_user_pmd.start_testpmd( + cores=cores, param=param, eal_param=eal_param, no_pci=True, prefix="virtio" + ) + self.virtio_user_pmd.execute_cmd("set fwd mac") + self.virtio_user_pmd.execute_cmd("start") + self.virtio_user_pmd.execute_cmd("show port info all") - def test_perf_pvp_spilt_ring_all_path_vhost_enqueue_operations_with_cbdma(self): + def test_perf_pvp_split_all_path_vhost_txq_1_to_1_cbdma(self): """ - Test Case 1: PVP split ring all path vhost enqueue operations with cbdma + Test Case 1: PVP split ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - txd_rxd = 1024 - vhost_param = " --nb-cores=%d --txd=%d --rxd=%d" - nb_cores = 1 - queues = 1 - self.get_cbdma_ports_info_and_bind_to_dpdk(1) - vhost_vdevs = ( - f"'net_vhost0,iface=/tmp/s0,queues=%d,dmas=[txq0@{self.device_str}]'" - ) - virtio_path_dict = { - "inorder_mergeable_path": "mrg_rxbuf=1,in_order=1", - "mergeable_path": "mrg_rxbuf=1,in_order=0", - "inorder_non_mergeable_path": "mrg_rxbuf=0,in_order=1", - "non_mergeable_path": "mrg_rxbuf=0,in_order=0", - "vector_rx_path": "mrg_rxbuf=0,in_order=0,vectorized=1", - } + cbdma_num = 1 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + dmas = self.generate_dmas_param(1) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + "--nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + virtio_param = "--nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] - for index in range(1): - allow_pci.append(self.cbdma_dev_infos[index]) - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores, txd_rxd, txd_rxd), - self.cores[0:2], - dev=vhost_vdevs % (nb_cores), + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, ports=allow_pci, iova_mode="va", ) - for key, path_mode in virtio_path_dict.items(): - if key == "non_mergeable_path": - virtio_param = ( - " --enable-hw-vlan-strip --nb-cores=%d --txd=%d --rxd=%d" - % (nb_cores, txd_rxd, txd_rxd) + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( + self.virtio_mac, path ) + ) + if key == "non_mergeable_path": + new_virtio_param = "--enable-hw-vlan-strip " + virtio_param else: - virtio_param = " --nb-cores=%d --txd=%d --rxd=%d" % ( - nb_cores, - txd_rxd, - txd_rxd, - ) - vdevs = ( - f"'net_virtio_user0,mac={self.virtio_mac},path=/tmp/s0,{path_mode},queues=%d'" - % nb_cores + new_virtio_param = virtio_param + + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) ) mode = key + "_VA" self.mode_list.append(mode) - self.diff_param_launch_send_and_verify( - mode, - virtio_param, - vdevs, - self.cores[2:4], - is_quit=False, - launch_virtio=True, - ) - self.vhost_user.send_expect("show port 
stats all", "testpmd> ", 10) - self.vhost_user.send_expect("stop", "testpmd> ", 10) - self.vhost_user.send_expect("start", "testpmd> ", 10) - self.vhost_user.send_expect("show port info all", "testpmd> ", 30) - self.vhost_user.send_expect("show port stats all", "testpmd> ", 10) - mode += "_RestartVhost" - self.mode_list.append(mode) - self.diff_param_launch_send_and_verify( - mode, - virtio_param, - vdevs, - self.cores[2:4], - is_quit=True, - launch_virtio=False, + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=new_virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + self.logger.info( + "Restart vhost with {} path with {}".format(key, path) ) + mode += "_RestartVhost" + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + self.virtio_user_pmd.quit() if not self.check_2M_env: - self.logger.info("Quit and relaunch vhost testpmd with PA mode") - self.vhost_user.send_expect("quit", "# ") - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores, txd_rxd, txd_rxd), - self.cores[0:2], - dev=vhost_vdevs % (nb_cores), + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, ports=allow_pci, iova_mode="pa", ) - for key, path_mode in virtio_path_dict.items(): - if key == "non_mergeable_path": - virtio_param = ( - " --enable-hw-vlan-strip --nb-cores=%d --txd=%d --rxd=%d" - % (nb_cores, txd_rxd, txd_rxd) + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( + self.virtio_mac, path ) + ) + if key == "non_mergeable_path": + new_virtio_param = "--enable-hw-vlan-strip " + virtio_param else: - virtio_param = " --nb-cores=%d --txd=%d --rxd=%d" % ( - nb_cores, - txd_rxd, - txd_rxd, - ) - vdevs = ( - f"'net_virtio_user0,mac={self.virtio_mac},path=/tmp/s0,{path_mode},queues=%d'" - % queues + new_virtio_param = virtio_param + + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) ) mode = key + "_PA" self.mode_list.append(mode) - self.diff_param_launch_send_and_verify( - mode, - virtio_param, - vdevs, - self.cores[2:4], - is_quit=False, - launch_virtio=True, + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=new_virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + self.logger.info( + "Restart vhost with {} path with {}".format(key, path) ) - self.vhost_user.send_expect("show port stats all", "testpmd> ", 10) - self.vhost_user.send_expect("stop", "testpmd> ", 10) - self.vhost_user.send_expect("start", "testpmd> ", 10) - self.vhost_user.send_expect("show port info all", "testpmd> ", 30) - self.vhost_user.send_expect("show port stats all", "testpmd> ", 10) mode += "_RestartVhost" + self.vhost_user_pmd.execute_cmd("start") self.mode_list.append(mode) - self.diff_param_launch_send_and_verify( - mode, - virtio_param, - vdevs, - self.cores[2:4], - is_quit=True, - launch_virtio=False, - ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + self.virtio_user_pmd.quit() self.result_table_print() + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] self.handle_expected(mode_list=self.mode_list) self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() - def 
test_perf_pvp_spilt_ring_all_dynamic_queue_number_vhost_enqueue_operations_with_cbdma( - self, - ): + def test_perf_pvp_split_all_path_multi_queues_vhost_txq_1_to_1_cbdma(self): """ - Test Case2: PVP split ring dynamic queue number vhost enqueue operations with cbdma + Test Case 2: PVP split ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - nb_cores = 1 - txd_rxd = 1024 - queues = 8 - virtio_path = "/tmp/s0" - path_mode = "mrg_rxbuf=1,in_order=1" - self.get_cbdma_ports_info_and_bind_to_dpdk(8) - vhost_param = " --nb-cores=%d --txd=%d --rxd=%d --txq=%d --rxq=%d " - virtio_param = " --nb-cores=%d --txd=%d --rxd=%d --txq=%d --rxq=%d " - vhost_dev = f"'net_vhost0,iface={virtio_path},queues=%d,client=1,%s'" - virtio_dev = f"net_virtio_user0,mac={self.virtio_mac},path={virtio_path},{path_mode},queues={queues},server=1" - allow_pci = [self.dut.ports_info[0]["pci"]] - for index in range(8): - allow_pci.append(self.cbdma_dev_infos[index]) - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores, txd_rxd, txd_rxd, queues, queues), - self.cores[0:2], - dev=vhost_dev % (queues, ""), - ports=[allow_pci[0]], - iova_mode="va", + cbdma_num = 8 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + dmas = self.generate_dmas_param(8) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:] ) - self.mode_list.append("with_0_cbdma") - self.launch_testpmd_as_virtio_user( - virtio_param % (nb_cores, txd_rxd, txd_rxd, queues, queues), - self.cores[2:4], - dev=virtio_dev, - ) - self.send_and_verify("with_0_cbdma", queue_list=range(queues)) - - self.vhost_user.send_expect("quit", "#") - vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]}]" - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores, txd_rxd, txd_rxd, queues, queues), - self.cores[0:2], - dev=vhost_dev % (queues, vhost_dmas), - ports=allow_pci[:5], - iova_mode="va", + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( + dmas ) - self.mode_list.append("with_4_cbdma") - self.send_and_verify("with_4_cbdma", queue_list=range(int(queues / 2))) - - self.vhost_user.send_expect("quit", "#") - vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]};txq4@{self.used_cbdma[4]};txq5@{self.used_cbdma[5]};txq6@{self.used_cbdma[6]};txq7@{self.used_cbdma[7]}]" - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores, txd_rxd, txd_rxd, queues, queues), - self.cores[0:2], - dev=vhost_dev % (queues, vhost_dmas), + vhost_param = ( + " --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + allow_pci = [self.dut.ports_info[0]["pci"]] + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, ports=allow_pci, iova_mode="va", ) - self.mode_list.append("with_8_cbdma") - self.send_and_verify("with_8_cbdma", queue_list=range(queues)) + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( + self.virtio_mac, path + ) 
+ ) + if key == "non_mergeable_path": + new_virtio_param = "--enable-hw-vlan-strip " + virtio_param + else: + new_virtio_param = virtio_param + + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + mode = key + "_VA" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=new_virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + mode += "_RestartVhost" + self.mode_list.append(mode) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() if not self.check_2M_env: - self.logger.info("Quit and relaunch vhost testpmd with PA mode") - self.vhost_user.send_expect("quit", "#") - vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]};txq4@{self.used_cbdma[4]};txq5@{self.used_cbdma[5]}]" - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores, txd_rxd, txd_rxd, queues, queues), - self.cores[0:2], - dev=vhost_dev % (queues, vhost_dmas), + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, ports=allow_pci, iova_mode="pa", ) - self.mode_list.append("with_6_cbdma") - self.send_and_verify("with_6_cbdma", queue_list=range(queues)) + for key, path in SPLIT_RING_PATH.items(): + if key == "mergeable_path": + virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( + self.virtio_mac, path + ) + + mode = key + "_PA" + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.mode_list.append(mode) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() - self.virtio_user.send_expect("quit", "# ") - self.vhost_user.send_expect("quit", "# ") self.result_table_print() + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] self.handle_expected(mode_list=self.mode_list) self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() - def test_perf_pvp_packed_ring_all_path_vhost_enqueue_operations_with_cbdma(self): + def test_perf_pvp_split_all_path_multi_queues_vhost_txq_M_2_1_cbdma(self): """ - Test Case 3: PVP packed ring all path vhost enqueue operations with cbdma + Test Case 3: PVP split ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - txd_rxd = 1024 - vhost_param = " --nb-cores=%d --txd=%d --rxd=%d" - nb_cores = 1 - queues = 1 - self.get_cbdma_ports_info_and_bind_to_dpdk(1) - vhost_vdevs = ( - f"'net_vhost0,iface=/tmp/s0,queues=%d,dmas=[txq0@{self.device_str}]'" - ) - virtio_path_dict = { - "inorder_mergeable_path": 
"mrg_rxbuf=1,in_order=1,packed_vq=1", - "mergeable_path": "mrg_rxbuf=1,in_order=0,packed_vq=1", - "inorder_non_mergeable_path": "mrg_rxbuf=0,in_order=1,packed_vq=1", - "non_mergeable_path": "mrg_rxbuf=0,in_order=0,packed_vq=1", - "vector_rx_path": "mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1", - "vector_rx_path_not_power_of_2": "mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,queue_size=1025", - } + cbdma_num = 1 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + dmas = self.generate_dmas_param(8) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] - for index in range(1): - allow_pci.append(self.cbdma_dev_infos[index]) - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores, txd_rxd, txd_rxd), - self.cores[0:2], - dev=vhost_vdevs % (nb_cores), + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, ports=allow_pci, iova_mode="va", ) - for key, path_mode in virtio_path_dict.items(): - if key == "vector_rx_path_not_power_of_2": - virtio_param = " --nb-cores=%d --txd=%d --rxd=%d" % ( - nb_cores, - (txd_rxd + 1), - (txd_rxd + 1), - ) - else: - virtio_param = " --nb-cores=%d --txd=%d --rxd=%d" % ( - nb_cores, - txd_rxd, - txd_rxd, + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( + self.virtio_mac, path ) - if "vector_rx_path" in key: - eal_param = "--force-max-simd-bitwidth=512" - else: - eal_param = "" - vdevs = ( - f"'net_virtio_user0,mac={self.virtio_mac},path=/tmp/s0,{path_mode},queues=%d'" - % queues ) - mode = key + "_VA" + if key == "non_mergeable_path": + new_virtio_param = "--enable-hw-vlan-strip " + virtio_param + else: + new_virtio_param = virtio_param + + mode = key + "_VA" + "_1_lcore" self.mode_list.append(mode) - self.diff_param_launch_send_and_verify( - mode, - virtio_param, - vdevs, - self.cores[2:4], - eal_param=eal_param, - is_quit=False, - launch_virtio=True, - ) - self.vhost_user.send_expect("show port stats all", "testpmd> ", 10) - self.vhost_user.send_expect("stop", "testpmd> ", 10) - self.vhost_user.send_expect("start", "testpmd> ", 10) - self.vhost_user.send_expect("show port info all", "testpmd> ", 30) - self.vhost_user.send_expect("show port stats all", "testpmd> ", 10) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=new_virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + mode += "_RestartVhost" self.mode_list.append(mode) - self.diff_param_launch_send_and_verify( - mode, - virtio_param, - vdevs, - self.cores[2:4], - is_quit=True, - launch_virtio=False, + self.logger.info( + "Restart host with {} path with {}".format(key, path) ) - if not self.check_2M_env: - self.logger.info("Quit and relaunch vhost testpmd with PA mode") - self.vhost_user.send_expect("quit", "# ") - self.launch_testpmd_as_vhost_user( - vhost_param % (queues, txd_rxd, txd_rxd), - self.cores[0:2], 
- dev=vhost_vdevs % (queues), - ports=allow_pci, - iova_mode="pa", + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.vhost_user_pmd.quit() + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:4] + ) + vhost_param = ( + " --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma ) - for key, path_mode in virtio_path_dict.items(): - if key == "vector_rx_path_not_power_of_2": - virtio_param = " --nb-cores=%d --txd=%d --rxd=%d" % ( - nb_cores, - (txd_rxd + 1), - (txd_rxd + 1), - ) - else: - virtio_param = " --nb-cores=%d --txd=%d --rxd=%d" % ( - nb_cores, - txd_rxd, - txd_rxd, + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + for key, path in SPLIT_RING_PATH.items(): + if key == "inorder_mergeable_path": + virtio_eal_param = ( + "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( + self.virtio_mac, path ) - if "vector_rx_path" in key: - eal_param = "--force-max-simd-bitwidth=512" - else: - eal_param = "" - vdevs = ( - f"'net_virtio_user0,mac={self.virtio_mac},path=/tmp/s0,{path_mode},queues=%d'" - % queues ) - mode = key + "_PA" + + mode = key + "_VA" + "_3_lcore" self.mode_list.append(mode) - self.diff_param_launch_send_and_verify( - mode, - virtio_param, - vdevs, - self.cores[2:4], - eal_param=eal_param, - is_quit=False, - launch_virtio=True, + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) - self.vhost_user.send_expect("show port stats all", "testpmd> ", 10) - self.vhost_user.send_expect("stop", "testpmd> ", 10) - self.vhost_user.send_expect("start", "testpmd> ", 10) - self.vhost_user.send_expect("show port info all", "testpmd> ", 30) - self.vhost_user.send_expect("show port stats all", "testpmd> ", 10) mode += "_RestartVhost" self.mode_list.append(mode) - self.diff_param_launch_send_and_verify( - mode, - virtio_param, - vdevs, - self.cores[2:4], - is_quit=True, - launch_virtio=False, + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.vhost_user_pmd.quit() + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:9] + ) + vhost_param = ( + " --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + for key, path in SPLIT_RING_PATH.items(): + if key == "mergeable_path": + virtio_eal_param = ( + "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( + self.virtio_mac, path + ) + ) + mode = key + "_VA" + "_8_lcore" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + 
self.check_each_queue_of_port_packets(queues=8) + + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="pa", + ) + for key, path in SPLIT_RING_PATH.items(): + if key == "inorder_non_mergeable_path": + virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( + self.virtio_mac, path + ) + + mode = key + "_PA" + "_8_lcore" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() self.result_table_print() + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] self.handle_expected(mode_list=self.mode_list) self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() - def test_perf_pvp_packed_ring_all_dynamic_queue_number_vhost_enqueue_operations_with_cbdma( - self, - ): + def test_perf_pvp_split_all_path_vhost_txq_1_to_N_cbdma(self): """ - Test Case 4: PVP packed ring dynamic queue number vhost enqueue operations with cbdma + Test Case 4: PVP split ring all path vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - nb_cores = 1 - txd_rxd = 1024 - queues = 8 - virtio_path = "/tmp/s0" - path_mode = "mrg_rxbuf=1,in_order=1,packed_vq=1" - self.get_cbdma_ports_info_and_bind_to_dpdk(8) - vhost_param = " --nb-cores=%d --txd=%d --rxd=%d --txq=%d --rxq=%d " - virtio_param = " --nb-cores=%d --txd=%d --rxd=%d --txq=%d --rxq=%d " - vhost_dev = f"'net_vhost0,iface={virtio_path},queues=%d,client=1,%s'" - virtio_dev = f"net_virtio_user0,mac={self.virtio_mac},path={virtio_path},{path_mode},queues={queues},server=1" - allow_pci = [self.dut.ports_info[0]["pci"]] - for index in range(8): - allow_pci.append(self.cbdma_dev_infos[index]) - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores, txd_rxd, txd_rxd, queues, queues), - self.cores[0:2], - dev=vhost_dev % (queues, ""), - ports=[allow_pci[0]], - iova_mode="va", + cbdma_num = 8 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + dmas = self.generate_dmas_param(1) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] ) - self.mode_list.append("with_0_cbdma") - self.launch_testpmd_as_virtio_user( - virtio_param % (nb_cores, txd_rxd, txd_rxd, queues, queues), - self.cores[2:4], - dev=virtio_dev, - ) - self.send_and_verify("with_0_cbdma", queue_list=range(queues)) - - self.vhost_user.send_expect("quit", "#") - vhost_dmas = 
f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]}]" - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores, txd_rxd, txd_rxd, queues, queues), - self.cores[0:2], - dev=vhost_dev % (queues, vhost_dmas), - ports=allow_pci[:5], - iova_mode="va", + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas={},dma_ring_size=2048'".format( + dmas ) - self.mode_list.append("with_4_cbdma") - self.send_and_verify("with_4_cbdma", queue_list=range(int(queues / 2))) - - self.vhost_user.send_expect("quit", "#") - vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]};txq4@{self.used_cbdma[4]};txq5@{self.used_cbdma[5]};txq6@{self.used_cbdma[6]};txq7@{self.used_cbdma[7]}]" - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores, txd_rxd, txd_rxd, queues, queues), - self.cores[0:2], - dev=vhost_dev % (queues, vhost_dmas), + vhost_param = ( + " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + allow_pci = [self.dut.ports_info[0]["pci"]] + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, ports=allow_pci, iova_mode="va", ) - self.mode_list.append("with_8_cbdma") - self.send_and_verify("with_8_cbdma", queue_list=range(queues)) + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=1".format( + self.virtio_mac, path + ) + ) + if key == "non_mergeable_path": + new_virtio_param = "--enable-hw-vlan-strip " + virtio_param + else: + new_virtio_param = virtio_param + + mode = key + "_VA" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=new_virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + self.virtio_user_pmd.quit() if not self.check_2M_env: - self.logger.info("Quit and relaunch vhost testpmd with PA mode") - self.vhost_user.send_expect("quit", "#") - vhost_dmas = f"dmas=[txq0@{self.used_cbdma[0]};txq1@{self.used_cbdma[1]};txq2@{self.used_cbdma[2]};txq3@{self.used_cbdma[3]};txq4@{self.used_cbdma[4]};txq5@{self.used_cbdma[5]}]" - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores, txd_rxd, txd_rxd, queues, queues), - self.cores[0:2], - dev=vhost_dev % (queues, vhost_dmas), + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, ports=allow_pci, iova_mode="pa", ) - self.mode_list.append("with_6_cbdma") - self.send_and_verify("with_6_cbdma", queue_list=range(queues)) + for key, path in SPLIT_RING_PATH.items(): + if key == "non_mergeable_path": + virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=1".format( + self.virtio_mac, path + ) + + mode = key + "_PA" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + 
self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + self.virtio_user_pmd.quit() - self.virtio_user.send_expect("quit", "# ") - self.vhost_user.send_expect("quit", "# ") self.result_table_print() + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] self.handle_expected(mode_list=self.mode_list) self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() - def send_chain_packets_and_verify(self): - self.pmdout_virtio_user.execute_cmd("clear port stats all") - self.pmdout_virtio_user.execute_cmd("start") - self.pmdout_vhost_user.execute_cmd("vhost enable tx all") - self.pmdout_vhost_user.execute_cmd("set txpkts 65535,65535,65535,65535,65535") - self.pmdout_vhost_user.execute_cmd("start tx_first 32") - self.pmdout_vhost_user.execute_cmd("show port stats all") - out = self.pmdout_virtio_user.execute_cmd("show port stats all") - rx_pkts = int(re.search("RX-packets: (\d+)", out).group(1)) - self.verify(rx_pkts > 0, "virtio-user can not received packets") - - def test_loopback_split_ring_large_chain_packets_stress_test_with_cbdma_enqueue( - self, - ): + def test_perf_pvp_split_all_path_multi_queues_vhost_txq_M_to_N_cbdma(self): """ - Test Case5: loopback split ring large chain packets stress test with cbdma enqueue + Test Case 5: PVP split ring all path multi-queues vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels """ - nb_cores = 1 - queues = 1 - txd_rxd = 2048 - txq_rxq = 1 - virtio_path = "/tmp/s0" - path_mode = "mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048" - self.get_cbdma_ports_info_and_bind_to_dpdk(1) - vhost_param = " --nb-cores=%d --mbuf-size=65535" - virtio_param = " --nb-cores=%d --txd=%d --rxd=%d --txq=%d --rxq=%d " - virtio_dev = f"net_virtio_user0,mac={self.virtio_mac},path={virtio_path},{path_mode},queues=%d" - vhost_vdevs = ( - f"'net_vhost0,iface=/tmp/s0,queues=%d,dmas=[txq0@{self.device_str}]'" - ) - allow_pci = [] - for index in range(1): - allow_pci.append(self.cbdma_dev_infos[index]) - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores), - self.cores[0:2], - dev=vhost_vdevs % (queues), + cbdma_num = 8 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + dmas = self.generate_dmas_param(3) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + virtio_param = "--nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + allow_pci = [self.dut.ports_info[0]["pci"]] + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, ports=allow_pci, iova_mode="va", - set_pmd_param=False, - ) - self.launch_testpmd_as_virtio_user( - virtio_param % (nb_cores, txd_rxd, txd_rxd, txq_rxq, txq_rxq), - self.cores[2:4], - dev=virtio_dev % (queues), - 
set_pmd_param=False, ) - self.send_chain_packets_and_verify() + for key, path in SPLIT_RING_PATH.items(): + virtio_eal_param = ( + "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( + self.virtio_mac, path + ) + ) + if key == "non_mergeable_path": + virtio_param = "--enable-hw-vlan-strip " + virtio_param - if not self.check_2M_env: - self.logger.info("Quit and relaunch vhost testpmd with PA mode") - self.pmdout_virtio_user.execute_cmd("quit", "#") - self.pmdout_vhost_user.execute_cmd("quit", "#") - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores), - self.cores[0:2], - dev=vhost_vdevs % (queues), - ports=allow_pci, - iova_mode="pa", - set_pmd_param=False, + mode = key + "_VA" + "_3dmas" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) ) - self.launch_testpmd_as_virtio_user( - virtio_param % (nb_cores, txd_rxd, txd_rxd, txq_rxq, txq_rxq), - self.cores[2:4], - dev=virtio_dev % (queues), - set_pmd_param=False, + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, ) - self.send_chain_packets_and_verify() + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) - def test_loopback_packed_ring_large_chain_packets_stress_test_with_cbdma_enqueue( - self, - ): - """ - Test Case6: loopback packed ring large chain packets stress test with cbdma enqueue - """ - nb_cores = 1 - queues = 1 - txd_rxd = 2048 - txq_rxq = 1 - virtio_path = "/tmp/s0" - path_mode = "mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048" - self.get_cbdma_ports_info_and_bind_to_dpdk(1) - vhost_param = " --nb-cores=%d --mbuf-size=65535" - virtio_param = " --nb-cores=%d --txd=%d --rxd=%d --txq=%d --rxq=%d " - virtio_dev = f"net_virtio_user0,mac={self.virtio_mac},path={virtio_path},{path_mode},queues=%d" - vhost_vdevs = ( - f"'net_vhost0,iface=/tmp/s0,queues=%d,dmas=[txq0@{self.device_str}]'" - ) - allow_pci = [] - for index in range(1): - allow_pci.append(self.cbdma_dev_infos[index]) - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores), - self.cores[0:2], - dev=vhost_vdevs % (queues), + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.vhost_user_pmd.quit() + dmas = self.generate_dmas_param(8) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( + dmas + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, ports=allow_pci, iova_mode="va", - set_pmd_param=False, - ) - self.launch_testpmd_as_virtio_user( - virtio_param % (nb_cores, txd_rxd, txd_rxd, txq_rxq, txq_rxq), - self.cores[2:4], - dev=virtio_dev % (queues), - set_pmd_param=False, ) - self.send_chain_packets_and_verify() + for key, path in SPLIT_RING_PATH.items(): + if key == "inorder_non_mergeable_path": + virtio_eal_param = ( + "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( + self.virtio_mac, path + ) + ) + + mode = key + "_VA" + "_8dmas" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + 
self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() if not self.check_2M_env: - self.logger.info("Quit and relaunch vhost testpmd with PA mode") - self.pmdout_virtio_user.execute_cmd("quit", "#") - self.pmdout_vhost_user.execute_cmd("quit", "#") - self.launch_testpmd_as_vhost_user( - vhost_param % (nb_cores), - self.cores[0:2], - dev=vhost_vdevs % (queues), + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, ports=allow_pci, iova_mode="pa", - set_pmd_param=False, ) - self.launch_testpmd_as_virtio_user( - virtio_param % (nb_cores, txd_rxd, txd_rxd, txq_rxq, txq_rxq), - self.cores[2:4], - dev=virtio_dev % (queues), - set_pmd_param=False, - ) - self.send_chain_packets_and_verify() + for key, path in SPLIT_RING_PATH.items(): + if key == "vectorized_path": + virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( + self.virtio_mac, path + ) + + mode = key + "_PA" + "_8dmas" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.result_table_print() + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() - def send_imix_and_verify(self, mode, multiple_queue=True, queue_list=[]): + def test_perf_pvp_split_dynamic_queues_vhost_txq_M_to_N_cbdma(self): """ - Send imix packet with packet generator and verify + Test Case 6: PVP split ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels """ - frame_sizes = [ - 64, - 128, - 256, - 512, - 1024, - 1280, - 1518, - ] - tgenInput = [] - for frame_size in frame_sizes: - payload_size = frame_size - self.headers_size - port = self.tester.get_local_port(self.dut_ports[0]) - fields_config = { - "ip": { - "src": {"action": "random"}, - }, - } - if not multiple_queue: - fields_config = None + cbdma_num = 8 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1'" + vhost_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + allow_pci = [self.dut.ports_info[0]["pci"]] + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci[0:1], + iova_mode="va", + ) + for key, path in SPLIT_RING_PATH.items(): + if key == "inorder_mergeable_path": + virtio_eal_param = 
"--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8,server=1".format( + self.virtio_mac, path + ) + + mode = key + "_VA" + "_without_cbdma" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + self.vhost_user_pmd.quit() + dmas = self.generate_dmas_param(4) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list[0:4], core_list=self.vhost_core_list[1:5] + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas={}'".format(dmas) + ) + vhost_param = ( + " --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci[0:5], + iova_mode="va", + ) + + for key, path in SPLIT_RING_PATH.items(): + if key == "inorder_mergeable_path": + + mode = key + "_VA" + "_1:1" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=4) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=4) + + self.vhost_user_pmd.quit() + dmas = self.generate_dmas_param(queues=8) + core1 = self.vhost_core_list[1] + core2 = self.vhost_core_list[2] + core3 = self.vhost_core_list[3] + core4 = self.vhost_core_list[4] + core5 = self.vhost_core_list[5] + cbdma0 = self.cbdma_list[0] + cbdma1 = self.cbdma_list[1] + cbdma2 = self.cbdma_list[2] + cbdma3 = self.cbdma_list[3] + cbdma4 = self.cbdma_list[4] + cbdma5 = self.cbdma_list[5] + cbdma6 = self.cbdma_list[6] + cbdma7 = self.cbdma_list[7] + lcore_dma = ( + f"[lcore{core1}@{cbdma0},lcore{core1}@{cbdma7}," + + f"lcore{core2}@{cbdma1},lcore{core2}@{cbdma2},lcore{core2}@{cbdma3}," + + f"lcore{core3}@{cbdma2},lcore{core3}@{cbdma3},lcore{core3}@{cbdma4}," + f"lcore{core4}@{cbdma2},lcore{core4}@{cbdma3},lcore{core4}@{cbdma4},lcore{core4}@{cbdma5}," + f"lcore{core5}@{cbdma0},lcore{core5}@{cbdma1},lcore{core5}@{cbdma2},lcore{core5}@{cbdma3}," + f"lcore{core5}@{cbdma4},lcore{core5}@{cbdma5},lcore{core5}@{cbdma6},lcore{core5}@{cbdma7}]" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas={}'".format(dmas) + ) + vhost_param = ( + " --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci[0:9], + iova_mode="va", + ) + + for key, path in SPLIT_RING_PATH.items(): + if key == "inorder_mergeable_path": + + mode = key + "_VA" + "_MN" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += 
"_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + dmas = self.generate_dmas_param(queues=8) + + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas={}'".format( + dmas + ) + ) + vhost_param = " --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci[0:5], + iova_mode="pa", + ) + + for key, path in SPLIT_RING_PATH.items(): + if key == "inorder_mergeable_path": + + mode = key + "_PA" + "_M>N" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "ReLaunch host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.result_table_print() + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.virtio_user_pmd.quit() + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_all_path_vhost_txq_1_to_1_cbdma(self): + """ + Test Case 7: PVP packed ring all path vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels + """ + cbdma_num = 1 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + dmas = self.generate_dmas_param(1) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + allow_pci = [self.dut.ports_info[0]["pci"]] + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( + self.virtio_mac, path + ) + ) + virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + if key == "vectorized_path_not_power_of_2": + virtio_eal_param += ",queue_size=1025" + virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025" + if "vectorized" in key: + virtio_eal_param += " --force-max-simd-bitwidth=512" + + mode = key + "_VA" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") 
+ self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="pa", + ) + for key, path in PACKED_RING_PATH.items(): + if key == "inorder_mergeable_path": + virtio_eal_param = " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( + self.virtio_mac, path + ) + virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + + mode = key + "_PA" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + self.virtio_user_pmd.quit() + + self.result_table_print() + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_all_path_multi_queues_vhost_txq_1_to_1_cbdma(self): + """ + Test Case 8: PVP packed ring all path multi-queues vhost enqueue operations with 1 to 1 mapping between vrings and CBDMA virtual channels + """ + cbdma_num = 8 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + dmas = self.generate_dmas_param(8) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + allow_pci = [self.dut.ports_info[0]["pci"]] + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( + self.virtio_mac, path + ) + ) + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + if key == "vectorized_path_not_power_of_2": + virtio_eal_param += ",queue_size=1025" + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025" + if "vectorized" in key: + virtio_eal_param += " --force-max-simd-bitwidth=512" + + mode = key + "_VA" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + 
self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="pa", + ) + for key, path in PACKED_RING_PATH.items(): + if key == "mergeable_path": + virtio_param = ( + " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8".format( + self.virtio_mac, path + ) + + mode = key + "_PA" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.result_table_print() + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_all_path_multi_queues_vhost_txq_M_to_1_cbdma(self): + """ + Test Case 9: PVP packed ring all path multi-queues vhost enqueue operations with M to 1 mapping between vrings and CBDMA virtual channels + """ + cbdma_num = 1 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + dmas = self.generate_dmas_param(8) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + allow_pci = [self.dut.ports_info[0]["pci"]] + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( + self.virtio_mac, path + ) + ) + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + if key == "vectorized_path_not_power_of_2": + virtio_eal_param += ",queue_size=1025" + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025" + + mode = key + "_VA" + "_1_lcore" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.vhost_user_pmd.quit() + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, 
core_list=self.vhost_core_list[1:4] + ) + vhost_param = ( + " --nb-cores=3 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + if key == "inorder_mergeable_path": + virtio_eal_param = ( + " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( + self.virtio_mac, path + ) + ) + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + + mode = key + "_VA" + "_3_lcore" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.vhost_user_pmd.quit() + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:9] + ) + vhost_param = ( + " --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + if key == "mergeable_path": + virtio_eal_param = ( + " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( + self.virtio_mac, path + ) + ) + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + + mode = key + "_VA" + "_8_lcore" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="pa", + ) + for key, path in PACKED_RING_PATH.items(): + if key == "inorder_non_mergeable_path": + virtio_eal_param = " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( + self.virtio_mac, path + ) + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + + mode = key + "_PA" + "_8_lcore" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + 
self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.result_table_print() + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_all_path_vhost_txq_1_to_N_cbdma(self): + """ + Test Case 10: PVP packed ring all path vhost enqueue operations with 1 to N mapping between vrings and CBDMA virtual channels + """ + cbdma_num = 8 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + dmas = self.generate_dmas_param(1) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txq=1 --rxq=1--txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + allow_pci = [self.dut.ports_info[0]["pci"]] + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( + self.virtio_mac, path + ) + ) + virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + if key == "vectorized_path_not_power_of_2": + virtio_eal_param += ",queue_size=1025" + virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1025 --rxd=1025" + + mode = key + "_VA" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="pa", + ) + for key, path in PACKED_RING_PATH.items(): + if key == "non_mergeable_path": + virtio_eal_param = " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=1'".format( + self.virtio_mac, path + ) + virtio_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + + mode = key + "_PA" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + self.virtio_user_pmd.quit() + + self.result_table_print() + self.test_target = 
self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_all_path_multi_queues_vhost_txq_M_to_N_cbdma(self): + """ + Test Case 11: PVP packed ring all path vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels + """ + cbdma_num = 8 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + dmas = self.generate_dmas_param(3) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + allow_pci = [self.dut.ports_info[0]["pci"]] + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + virtio_eal_param = ( + " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( + self.virtio_mac, path + ) + ) + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + if key == "vectorized_path_not_power_of_2": + virtio_eal_param += ",queue_size=1025" + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1025 --rxd=1025" + + mode = key + "_VA" + "_3dmas" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.vhost_user_pmd.quit() + dmas = self.generate_dmas_param(queues=8) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,dmas={},dma_ring_size=2048'".format( + dmas + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + if key == "inorder_non_mergeable_path": + virtio_eal_param = ( + " --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( + self.virtio_mac, path + ) + ) + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + + mode = key + "_VA" + "_8dmas" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + 
self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="pa", + ) + for key, path in PACKED_RING_PATH.items(): + if key == "vectorized_path": + virtio_eal_param = " --force-max-simd-bitwidth=512 --vdev 'net_virtio_user0,mac={},path=/tmp/s0,{},queues=8'".format( + self.virtio_mac, path + ) + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + + mode = key + "_PA" + "_8dmas" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + + self.result_table_print() + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + + def test_perf_pvp_packed_dynamic_queues_vhost_txq_M_to_N_cbdma(self): + """ + Test Case 12: PVP packed ring dynamic queue number vhost enqueue operations with M to N mapping between vrings and CBDMA virtual channels + """ + cbdma_num = 8 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num) + vhost_eal_param = "--vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1'" + vhost_param = " --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024" + virtio_param = " --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + allow_pci = [self.dut.ports_info[0]["pci"]] + for i in range(cbdma_num): + allow_pci.append(self.cbdma_list[i]) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci[0:1], + iova_mode="va", + ) + for key, path in PACKED_RING_PATH.items(): + if key == "inorder_mergeable_path": + virtio_eal_param = "--vdev=net_virtio_user0,mac={},path=/tmp/s0,{},queues=8,server=1".format( + self.virtio_mac, path + ) + + mode = key + "_VA" + "_without_cbdma" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets() + + self.vhost_user_pmd.quit() + dmas = self.generate_dmas_param(4) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list[0:4], core_list=self.vhost_core_list[1:5] + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas={}'".format(dmas) + ) + vhost_param = ( + " --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci[0:5], + iova_mode="va", + ) + for key, 
path in PACKED_RING_PATH.items(): + if key == "inorder_mergeable_path": + + mode = key + "_VA" + "_1:1" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=4) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=4) + + self.vhost_user_pmd.quit() + dmas = self.generate_dmas_param(8) + core1 = self.vhost_core_list[1] + core2 = self.vhost_core_list[2] + core3 = self.vhost_core_list[3] + core4 = self.vhost_core_list[4] + core5 = self.vhost_core_list[5] + cbdma0 = self.cbdma_list[0] + cbdma1 = self.cbdma_list[1] + cbdma2 = self.cbdma_list[2] + cbdma3 = self.cbdma_list[3] + cbdma4 = self.cbdma_list[4] + cbdma5 = self.cbdma_list[5] + cbdma6 = self.cbdma_list[6] + cbdma7 = self.cbdma_list[7] + lcore_dma = ( + f"[lcore{core1}@{cbdma0},lcore{core1}@{cbdma7}," + + f"lcore{core2}@{cbdma1},lcore{core2}@{cbdma2},lcore{core2}@{cbdma3}," + + f"lcore{core3}@{cbdma2},lcore{core3}@{cbdma3},lcore{core3}@{cbdma4}," + f"lcore{core4}@{cbdma2},lcore{core4}@{cbdma3},lcore{core4}@{cbdma4},lcore{core4}@{cbdma5}," + f"lcore{core5}@{cbdma0},lcore{core5}@{cbdma1},lcore{core5}@{cbdma2},lcore{core5}@{cbdma3}," + f"lcore{core5}@{cbdma4},lcore{core5}@{cbdma5},lcore{core5}@{cbdma6},lcore{core5}@{cbdma7}]" + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas={}'".format(dmas) + ) + vhost_param = ( + " --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci[0:9], + iova_mode="va", + ) + + for key, path in PACKED_RING_PATH.items(): + if key == "inorder_mergeable_path": + + mode = key + "_VA" + "_MN" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + if not self.check_2M_env: + self.vhost_user_pmd.quit() + dmas = self.generate_dmas_param(8) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas={}'".format( + dmas + ) + ) + vhost_param = " --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma={}".format( + lcore_dma + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci[0:5], + iova_mode="pa", + ) + + for key, path in PACKED_RING_PATH.items(): + if key == "inorder_mergeable_path": + + mode = key + "_PA" + "_M>N" + self.mode_list.append(mode) + self.logger.info( + "Start virtio-user with {} path with {}".format(key, path) + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.mode_list.append(mode) + self.logger.info( + "Restart host with {} path with {}".format(key, path) + ) + self.vhost_user_pmd.execute_cmd("start") + self.send_imix_packets(mode=mode) + 
self.check_each_queue_of_port_packets(queues=8) + + self.result_table_print() + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + self.handle_expected(mode_list=self.mode_list) + self.handle_results(mode_list=self.mode_list) + self.vhost_user_pmd.quit() + self.virtio_user_pmd.quit() + + def send_imix_packets(self, mode): + """ + Send imix packet with packet generator and verify + """ + frame_sizes = [64, 128, 256, 512, 1024, 1518] + tgenInput = [] + for frame_size in frame_sizes: + payload_size = frame_size - self.headers_size + port = self.tester.get_local_port(self.dut_ports[0]) + fields_config = { + "ip": { + "src": {"action": "random"}, + }, + } pkt = Packet() pkt.assign_layers(["ether", "ipv4", "raw"]) pkt.config_layers( @@ -874,84 +2158,10 @@ class TestVirTioVhostCbdma(TestCase): results_row.append(Mpps) results_row.append(throughput) self.result_table_add(results_row) - if queue_list: - self.check_packets_of_each_queue(queue_list=queue_list) - - def send_and_verify( - self, - mode, - multiple_queue=True, - queue_list=[], - frame_sizes=None, - pkt_length_mode="imix", - ): - """ - Send packet with packet generator and verify - """ - if pkt_length_mode == "imix": - self.send_imix_and_verify(mode, multiple_queue, queue_list) - return - - self.throughput[mode] = dict() - for frame_size in frame_sizes: - self.throughput[mode][frame_size] = dict() - payload_size = frame_size - self.headers_size - tgenInput = [] - port = self.tester.get_local_port(self.dut_ports[0]) - fields_config = { - "ip": { - "src": {"action": "random"}, - }, - } - if not multiple_queue: - fields_config = None - pkt1 = Packet() - pkt1.assign_layers(["ether", "ipv4", "raw"]) - pkt1.config_layers( - [ - ("ether", {"dst": "%s" % self.virtio_mac}), - ("ipv4", {"src": "1.1.1.1"}), - ("raw", {"payload": ["01"] * int("%d" % payload_size)}), - ] - ) - pkt1.save_pcapfile( - self.tester, - "%s/multiqueuerandomip_%s.pcap" % (self.out_path, frame_size), - ) - tgenInput.append( - ( - port, - port, - "%s/multiqueuerandomip_%s.pcap" % (self.out_path, frame_size), - ) - ) - self.tester.pktgen.clear_streams() - streams = self.pktgen_helper.prepare_stream_from_tginput( - tgenInput, 100, fields_config, self.tester.pktgen - ) - trans_options = {"delay": 5, "duration": 20} - _, pps = self.tester.pktgen.measure_throughput( - stream_ids=streams, options=trans_options - ) - Mpps = pps / 1000000.0 - self.verify( - Mpps > 0, - "%s can not receive packets of frame size %d" - % (self.running_case, frame_size), - ) - throughput = Mpps * 100 / float(self.wirespeed(self.nic, frame_size, 1)) - self.throughput[mode][frame_size][self.nb_desc] = Mpps - results_row = [frame_size] - results_row.append(mode) - results_row.append(Mpps) - results_row.append(throughput) - self.result_table_add(results_row) - if queue_list: - self.check_packets_of_each_queue(queue_list=queue_list) def handle_expected(self, mode_list): """ - Update expected numbers to configurate file: $DTS_CFG_FOLDER/$suite_name.cfg + Update expected numbers to configurate file: conf/$suite_name.cfg """ if load_global_setting(UPDATE_EXPECTED) == "yes": for mode in mode_list: @@ -1026,7 +2236,6 @@ class TestVirTioVhostCbdma(TestCase): ) ret_datas[nb_desc] = deepcopy(ret_data) self.test_result[mode][frame_size] = deepcopy(ret_datas) - # Create test results table self.result_table_create(header) for mode in mode_list: for frame_size in self.test_parameters.keys(): @@ -1037,9 +2246,7 @@ class 
TestVirTioVhostCbdma(TestCase): self.test_result[mode][frame_size][nb_desc][header[i]] ) self.result_table_add(table_row) - # present test results to screen self.result_table_print() - # save test results as a file if self.save_result_flag: self.save_result(self.test_result, mode_list) From patchwork Thu May 5 06:59:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 110656 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AB6C4A00C4; Thu, 5 May 2022 08:59:34 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A246F40C35; Thu, 5 May 2022 08:59:34 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 9601240042 for ; Thu, 5 May 2022 08:59:32 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1651733972; x=1683269972; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=6yPhEGxEPJGGNvGt1Xf7YaD7f83QH88Y4/CNWyWPvJs=; b=mWXBDOJ+f57rR9EEq4gVceEQnysb+AvCEvRcHAn5sbk0Wu0taPG+ZUjr Ia4fgNinl30LpJ9lg08f3l/CvUKS5M+1yokF0ULbh0nL+hPGaKI7TUvG8 wyn4yUpxv6lZd5vg/90bWS3vD0MnHeXWoaHUCT+L5oaFw9XAoFoBMTZ8C MLbEWHD2lhAuKtzpMxUjM6A0DHaeDpzn4zPllEcVGdBGopCtpRla6pxTC sLYWbRjSJ0IdUt8PvuZfjermzgQl08WpWZXQyXWrCWjItQOmS0Ko/tg5w SrlZydGlnf3wj55VgpGFyBkDsLoUZwhC2AkGO035uyJrJN8EWXtf49aJy Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10337"; a="354459206" X-IronPort-AV: E=Sophos;i="5.91,200,1647327600"; d="scan'208";a="354459206" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 May 2022 23:59:30 -0700 X-IronPort-AV: E=Sophos;i="5.91,200,1647327600"; d="scan'208";a="891211508" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 May 2022 23:59:28 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [PATCH V3 3/3] conf/vhost_cbdma: modify testsuite config file by testcase name change Date: Thu, 5 May 2022 06:59:09 +0000 Message-Id: <20220505065909.68601-1-weix.ling@intel.com> X-Mailer: git-send-email 2.34.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path), modify vhost_cbdma config file by DPDK22.03 Lib and testcase name change. 
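For reference, the renamed result keys in this config follow the mode strings that the reworked suite builds per virtio path; a minimal sketch of that composition, assuming the SPLIT_RING_PATH/PACKED_RING_PATH tables and helpers used in TestSuite_vhost_cbdma.py:

    # Illustrative only: how one expected_throughput key is derived.
    key = "inorder_mergeable_path"      # entry name from SPLIT_RING_PATH / PACKED_RING_PATH
    mode = key + "_VA" + "_1_lcore"     # IOVA mode plus an optional lcore/dmas suffix
    mode += "_RestartVhost"             # second measurement after restarting vhost forwarding
    # -> 'inorder_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}}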
Signed-off-by: Wei Ling
---
 conf/vhost_cbdma.cfg | 242 +++++++++++++++++++++++++++++++++----------
 1 file changed, 185 insertions(+), 57 deletions(-)

diff --git a/conf/vhost_cbdma.cfg b/conf/vhost_cbdma.cfg
index a22fa32c..08ff1a69 100644
--- a/conf/vhost_cbdma.cfg
+++ b/conf/vhost_cbdma.cfg
@@ -1,62 +1,190 @@
 [suite]
 update_expected = True
-test_parameters = {'imix': [1024],}
+test_parameters = {'imix': [1024]}
 test_duration = 20
 accepted_tolerance = 2
 expected_throughput = {
-    'test_perf_pvp_spilt_ring_all_path_vhost_enqueue_operations_with_cbdma':{
-        'inorder_mergeable_path_VA': {'imix': {1024: 0.0},},
-        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.0},},
-        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.0},},
-        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.0},},
-        'mergeable_path_VA': {'imix': {1024: 0.0},},
-        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.0},},
-        'non_mergeable_path_VA': {'imix': {1024: 0.0},},
-        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.0},},
-        'vector_rx_path_VA': {'imix': {1024: 0.0},},
-        'vector_rx_path_VA_RestartVhost': {'imix': {1024: 0.0},},
-        'inorder_mergeable_path_PA': {'imix': {1024: 0.0},},
-        'inorder_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.0},},
-        'inorder_non_mergeable_path_PA': {'imix': {1024: 0.0},},
-        'inorder_non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.0},},
-        'mergeable_path_PA': {'imix': {1024: 0.0},},
-        'mergeable_path_PA_RestartVhost': {'imix': {1024: 0.0},},
-        'non_mergeable_path_PA': {'imix': {1024: 0.0},},
-        'non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.0},},
-        'vector_rx_path_PA': {'imix': {1024: 0.0},},
-        'vector_rx_path_PA_RestartVhost': {'imix': {1024: 0.0},}},
-    'test_perf_pvp_spilt_ring_all_dynamic_queue_number_vhost_enqueue_operations_with_cbdma':{
-        'with_0_cbdma': {'imix': {1024: 0.0},},
-        'with_4_cbdma': {'imix': {1024: 0.0},},
-        'with_8_cbdma': {'imix': {1024: 0.0},},
-        'with_6_cbdma': {'imix': {1024: 0.0},}},
-    'test_perf_pvp_packed_ring_all_path_vhost_enqueue_operations_with_cbdma':{
-        'inorder_mergeable_path_VA': {'imix': {1024: 0.0},},
-        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.0},},
-        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.0},},
-        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.0},},
-        'mergeable_path_VA': {'imix': {1024: 0.0},},
-        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.0},},
-        'non_mergeable_path_VA': {'imix': {1024: 0.0},},
-        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.0},},
-        'vector_rx_path_VA': {'imix': {1024: 0.0},},
-        'vector_rx_path_VA_RestartVhost': {'imix': {1024: 0.0},},
-        'vector_rx_path_not_power_of_2_VA':{'imix': {1024: 0.0},},
-        'vector_rx_path_not_power_of_2_VA_RestartVhost':{'imix': {1024: 0.0},},
-        'inorder_mergeable_path_PA': {'imix': {1024: 0.0},},
-        'inorder_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.0},},
-        'inorder_non_mergeable_path_PA': {'imix': {1024: 0.0},},
-        'inorder_non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.0},},
-        'mergeable_path_PA': {'imix': {1024: 0.0},},
-        'mergeable_path_PA_RestartVhost': {'imix': {1024: 0.0},},
-        'non_mergeable_path_PA': {'imix': {1024: 0.0},},
-        'non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.0},},
-        'vector_rx_path_PA': {'imix': {1024: 0.0},},
-        'vector_rx_path_PA_RestartVhost': {'imix': {1024: 0.0},},
-        'vector_rx_path_not_power_of_2_PA':{'imix': {1024: 0.0},},
-        'vector_rx_path_not_power_of_2_PA_RestartVhost':{'imix': {1024: 0.0},}},
-    'test_perf_pvp_packed_ring_all_dynamic_queue_number_vhost_enqueue_operations_with_cbdma':{
-        'with_0_cbdma': {'imix': {1024: 0.0},},
-        'with_4_cbdma': {'imix': {1024: 0.0},},
-        'with_8_cbdma': {'imix': {1024: 0.0},},
-        'with_6_cbdma': {'imix': {1024: 0.0},}},}
\ No newline at end of file
+    'test_perf_pvp_split_all_path_vhost_txq_1_to_1_cbdma': {
+        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_VA': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_PA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_PA': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_PA': {'imix': {1024: 0.000}},
+        'mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_PA': {'imix': {1024: 0.000}},
+        'non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_PA': {'imix': {1024: 0.000}},
+        'vectorized_path_PA_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_split_all_path_multi_queues_vhost_txq_1_to_1_cbdma': {
+        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_VA': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_PA': {'imix': {1024: 0.000}},
+        'mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_split_all_path_multi_queues_vhost_txq_M_2_1_cbdma': {
+        'inorder_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_1_lcore': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_3_lcore': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_3_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_8_lcore': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_8_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_PA_8_lcore': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_PA_8_lcore_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_split_all_path_vhost_txq_1_to_N_cbdma': {
+        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_VA': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_PA': {'imix': {1024: 0.000}},
+        'non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_PA': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_split_all_path_multi_queues_vhost_txq_M_to_N_cbdma': {
+        'inorder_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_3dmas': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_8dmas': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_8dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_PA_8dmas': {'imix': {1024: 0.000}},
+        'vectorized_path_PA_8dmas_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_split_dynamic_queues_vhost_txq_M_to_N_cbdma': {
+        'inorder_mergeable_path_VA_without_cbdma': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_without_cbdma_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_1:1': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_1:1_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_M<N': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_M<N_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_M>N': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_M>N_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_PA_M>N': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_PA_M>N_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_packed_all_path_vhost_txq_1_to_1_cbdma': {
+        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_VA': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_not_power_of_2_VA': {'imix': {1024: 0.000}},
+        'vectorized_path_not_power_of_2_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_PA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_packed_all_path_multi_queues_vhost_txq_1_to_1_cbdma': {
+        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_VA': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_not_power_of_2_VA': {'imix': {1024: 0.000}},
+        'vectorized_path_not_power_of_2_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_PA': {'imix': {1024: 0.000}},
+        'mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_packed_all_path_multi_queues_vhost_txq_M_to_1_cbdma': {
+        'inorder_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_1_lcore': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_1_lcore': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_not_power_of_2_VA_1_lcore': {'imix': {1024: 0.000}},
+        'vectorized_path_not_power_of_2_VA_1_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_3_lcore': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_3_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_8_lcore': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_8_lcore_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_PA_8_lcore': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_PA_8_lcore_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_packed_all_path_vhost_txq_1_to_N_cbdma': {
+        'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_VA': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_not_power_of_2_VA': {'imix': {1024: 0.000}},
+        'vectorized_path_not_power_of_2_VA_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_PA': {'imix': {1024: 0.000}},
+        'non_mergeable_path_PA_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_packed_all_path_multi_queues_vhost_txq_M_to_N_cbdma': {
+        'inorder_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
+        'mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_3dmas': {'imix': {1024: 0.000}},
+        'non_mergeable_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_3dmas': {'imix': {1024: 0.000}},
+        'vectorized_path_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_not_power_of_2_VA_3dmas': {'imix': {1024: 0.000}},
+        'vectorized_path_not_power_of_2_VA_3dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_8dmas': {'imix': {1024: 0.000}},
+        'inorder_non_mergeable_path_VA_8dmas_RestartVhost': {'imix': {1024: 0.000}},
+        'vectorized_path_PA_8dmas': {'imix': {1024: 0.000}},
+        'vectorized_path_PA_8dmas_RestartVhost': {'imix': {1024: 0.000}}},
+    'test_perf_pvp_packed_dynamic_queues_vhost_txq_M_to_N_cbdma': {
+        'inorder_mergeable_path_VA_without_cbdma': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_without_cbdma_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_1:1': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_1:1_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_M<N': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_M<N_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_M>N': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_VA_M>N_RestartVhost': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_PA_M>N': {'imix': {1024: 0.000}},
+        'inorder_mergeable_path_PA_M>N_RestartVhost': {'imix': {1024: 0.000}}}}
+
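Note: the expected_throughput table above is keyed by test case name, then by
sub-case (virtio path, VA/PA iova mode and the vring-to-CBDMA mapping), then by
frame size, with 0.000 as a placeholder that is filled in once real numbers are
collected because update_expected = True. The sketch below only illustrates how
such a table can be checked or refreshed; the helper name check_or_update and
the standalone layout are assumptions for illustration and are not the actual
DTS framework API::

    # Illustrative sketch only: the real DTS suite reads conf/vhost_cbdma.cfg
    # through its own config layer; this mirrors the semantics of the keys above.
    ACCEPTED_TOLERANCE = 2   # percent, same meaning as 'accepted_tolerance'
    UPDATE_EXPECTED = True   # same meaning as 'update_expected'

    expected_throughput = {
        'test_perf_pvp_split_all_path_vhost_txq_1_to_1_cbdma': {
            'inorder_mergeable_path_VA': {'imix': {1024: 0.000}},
        },
    }

    def check_or_update(case, subcase, frame_size, measured):
        """Record the first measurement as a baseline, then gate later runs on it."""
        entry = expected_throughput[case][subcase]['imix']
        baseline = entry[frame_size]
        if UPDATE_EXPECTED or baseline == 0.000:
            # First run or explicit refresh: store the measurement as the new baseline.
            entry[frame_size] = round(measured, 3)
            return True
        # Otherwise fail only if the measured value drops more than the tolerance.
        drop_percent = (baseline - measured) / baseline * 100
        return drop_percent <= ACCEPTED_TOLERANCE

    if __name__ == '__main__':
        ok = check_or_update(
            'test_perf_pvp_split_all_path_vhost_txq_1_to_1_cbdma',
            'inorder_mergeable_path_VA', 1024, 5.123)
        print('PASS' if ok else 'FAIL')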