From patchwork Fri Nov 11 07:11:17 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 119764
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 1/3] test_plans/vhost_cbdma_test_plan: modify the dmas parameter
Date: Fri, 11 Nov 2022 15:11:17 +0800
Message-Id: <20221111071117.2423344-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: test suite reviews and discussions

From DPDK-22.11, the dmas parameter has changed from `lcore-dma=[lcore1@0000:00:04.0]` to `dmas=[txq0@0000:00:04.0]` by a DPDK local patch, so modify the dmas parameter.

Signed-off-by: Wei Ling
---
 test_plans/vhost_cbdma_test_plan.rst | 437 +++++++++++----------------
 1 file changed, 179 insertions(+), 258 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index 0b7af0ea..cbeb7248 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -13,25 +13,28 @@ data path with CBDMA driver in the PVP topology environment with testpmd.
CBDMA is a kind of DMA engine, Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way.
-It enables applications, like OVS, to save CPU cycles and hide memory copy overhead, thus achieving higher throughput.
-Vhost doesn't manage DMA devices and applications, like OVS, need to manage and configure CBDMA devices.
-Applications need to tell vhost what CBDMA devices to use in every data path function call.
-This design enables the flexibility for applications to dynamically use DMA channels in different
-function modules, not limited in vhost. In addition, vhost supports M:N mapping between vrings
-and DMA virtual channels. Specifically, one vring can use multiple different DMA channels
-and one DMA channel can be shared by multiple vrings at the same time.
+As a result, large packet copy can be accelerated by the DMA engine, and vhost can
+free CPU cycles for higher level functions.
-From DPDK22.07, this feature is implemented on both split and packed ring enqueue and dequeue data path.
+Asynchronous data path is enabled per tx/rx queue, and users need
+to specify the DMA device used by the tx/rx queue. Each tx/rx queue
+only supports using one DMA device, but one DMA device can be shared
+among multiple tx/rx queues of different vhost PMD ports.
+
+Two PMD parameters are added:
+- dmas: specify the DMA device used for a tx/rx queue
+(Default: no queues enable asynchronous data path)
+- dma-ring-size: DMA ring size.
+(Default: 4096).
+
+Here is an example:
+--vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00:01.0;rxq0@0000:00:01.1],dma-ring-size=4096'

Note:
-1. When CBDMA devices are bound to vfio driver, VA mode is the default and recommended.
+1. When CBDMA devices are bound to vfio driver, VA mode is the default and recommended.
For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.
-2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
-This patch supports to bind dma to each lcore in testpmd. To enable this feature, need to add
-"--lcore-dma=[fwd-lcore-id@dma-bdf,...]" in testpmd. After set this parameter for all forwarding cores,
-vhost will use dma belonging to lcore to offload copies.
-
-3. by default, the xl710 Intel NIC does not activate the ETH RSS IPv4/TCP data stream. So we need to execute `port config all rss ipv4-tcp` in testpmd.
+2. By default, the xl710 Intel NIC does not activate the ETH RSS IPv4/TCP data stream. So we need to execute `port config all rss ipv4-tcp` in testpmd.
+3. A DPDK local patch for the vhost PMD is needed when testing the Vhost asynchronous data path with testpmd.

For more about the dpdk-testpmd sample, please refer to the DPDK documents:
https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html

@@ -43,107 +46,105 @@ Prerequisites
=============

Topology
--------
- Test flow: TG-->NIC-->Vhost-->Virtio-->Vhost-->NIC-->TG
+ Test flow: TG-->NIC-->Vhost-->Virtio-->Vhost-->NIC-->TG

Hardware
--------
- Supportted NICs: ALL
+ Supported NICs: ALL

Software
--------
- Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+ Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz

General set up
--------------
1. Compile DPDK::

- # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static
- # ninja -C -j 110
- For example:
- CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
- ninja -C x86_64-native-linuxapp-gcc -j 110
+ # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+ # ninja -C <dpdk build dir> -j 110
+ For example:
+ CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+ ninja -C x86_64-native-linuxapp-gcc -j 110

-2.
Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
+2. Get the PCI device ID and DMA device ID of the DUT, for example, 0000:ca:00.0 is the PCI device ID and 0000:00:01.0, 0000:00:01.1 are DMA device IDs::

- # ./usertools/dpdk-devbind.py -s
+ # ./usertools/dpdk-devbind.py -s

- Network devices using kernel driver
- ===================================
- 0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+ Network devices using kernel driver
+ ===================================
+ 0000:ca:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci

- DMA devices using kernel driver
- ===============================
- 0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
- 0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+ DMA devices using kernel driver
+ ===============================
+ 0000:00:01.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+ 0000:00:01.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci

Test case
=========

Common steps
------------
-1. Bind 1 NIC port and CBDMA devices to vfio-pci::
+1. Bind 1 NIC port and CBDMA devices to vfio-pci::

- # ./usertools/dpdk-devbind.py -b vfio-pci
- # ./usertools/dpdk-devbind.py -b vfio-pci
+ # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+ # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>

- For example, Bind 1 NIC port and 2 CBDMA devices::
- ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0
- ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 0000:00:04.1
+ For example, bind 1 NIC port and 2 CBDMA devices::
+ ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0
+ ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:01.0 0000:00:01.1

-2. Send tcp imix packets [64,1518] to NIC by traffic generator::
+2. Send TCP imix packets [64, 128, 256, 512, 1024, 1518] to the NIC by traffic generator::

- The imix packets include packet size [64, 128, 256, 512, 1024, 1518], and the format of packet is as follows.
- +-------------+-------------+-------------+-------------+
- | MAC         | MAC         | IPV4        | IPV4        |
- | Src address | Dst address | Src address | Dst address |
- |-------------|-------------|-------------|-------------|
- | Random MAC  | Virtio mac  | Random IP   | Random IP   |
- +-------------+-------------+-------------+-------------+
- All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10.
+ The imix packets include packet size [64, 128, 256, 512, 1024, 1518], and the format of packet is as follows.
+ +-------------+-------------+-------------+-------------+
+ | MAC         | MAC         | IPV4        | IPV4        |
+ | Src address | Dst address | Src address | Dst address |
+ |-------------|-------------|-------------|-------------|
+ | Random MAC  | Virtio mac  | Random IP   | Random IP   |
+ +-------------+-------------+-------------+-------------+
+ All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10.
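Not part of this patch: for readers reproducing the traffic described in common step 2 without a TRex setup, the following minimal Scapy sketch builds the same kind of imix TCP flows (random source MAC/IPs, fixed virtio destination MAC). Scapy availability and the interface name "ens785f0" on the traffic-generator host are assumptions; the plan itself drives TRex for the real measurements.

    # Minimal Scapy sketch of the imix TCP flows from common step 2 (illustrative only).
    from scapy.all import Ether, IP, TCP, Raw, RandIP, RandMAC, sendp

    VIRTIO_MAC = "00:11:22:33:44:10"               # fixed destination MAC from the plan
    IMIX_SIZES = [64, 128, 256, 512, 1024, 1518]   # frame sizes listed in common step 2

    def build_imix(count_per_size=8):
        """Random source MAC/IPs, fixed virtio destination MAC, padded to each imix size."""
        pkts = []
        for size in IMIX_SIZES:
            for _ in range(count_per_size):
                base = Ether(src=RandMAC(), dst=VIRTIO_MAC) / IP(src=RandIP(), dst=RandIP()) / TCP()
                pad = max(0, size - len(base) - 4)  # reserve 4 bytes for the Ethernet FCS
                pkts.append(base / Raw(load=b"\x00" * pad))
        return pkts

    if __name__ == "__main__":
        sendp(build_imix(), iface="ens785f0", loop=0, verbose=False)  # assumed TG port name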
-Test Case 1: PVP split ring all path multi-queues vhost async operation with 1 to 1 mapping between vring and CBDMA virtual channels ------------------------------------------------------------------------------------------------------------------------------------- +Test Case 1: PVP split ring all path multi-queues vhost async operation test with each tx/rx queue using one CBDMA device +-------------------------------------------------------------------------------------------------------------------------- This case tests split ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations -and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as VA and PA mode have been tested. +with each tx/rx queue using 1 CBDMA device. Both iova as VA and PA mode have been tested. -1. Bind 1 NIC port and 4 CBDMA devices to vfio-pci, as common step 1. +1. Bind 1 NIC port and 4 CBDMA device to vfio-pci, as common step 1. 2. Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3] - testpmd> port config all rss ipv4-tcp + -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 -a 0000:00:01.2 -a 0000:00:01.3 --max-pkt-len=9600 --tx-offloads=0x00008000\ + --vdev 'net_vhost0,iface=./vhost_net0,queues=2,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.1;rxq0@0000:00:01.2;rxq1@0000:00:01.3]' \ + --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd mac testpmd> start 3. Launch virtio-user with inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=1,queues=2 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=2 \ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start -4. Send tcp imix packets [64,1518] from packet generator as common step2, and then check the throughput can get expected data:: +4. Send imix packets [64, 128, 256, 512, 1024, 1518] from packet generator as common step2, and then check the throughput can get expected data:: - testpmd> show port stats all + testpmd> show port stats all 5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: - testpmd> stop + testpmd> stop 6. Restart vhost port and send imix packets again, then check the throuhput can get expected data:: - testpmd> start - testpmd> show port stats all + testpmd> start + testpmd> show port stats all 7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=0,queues=2 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=0,queues=2 \ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -151,7 +152,7 @@ and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as V 8. 
Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=1,queues=2 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=1,queues=2 \ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -159,7 +160,7 @@ and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as V 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=0,queues=2 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=0,queues=2 \ -- -i --enable-hw-vlan-strip --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -171,7 +172,7 @@ and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as V 10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=2 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=2 \ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -179,42 +180,38 @@ and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as V 11. Quit all testpmd and relaunch vhost with iova=pa by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=pa -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3] - testpmd> port config all rss ipv4-tcp + -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 -a 0000:00:01.2 -a 0000:00:01.3 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=2,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.1;rxq0@0000:00:01.2;rxq1@0000:00:01.3],dma-ring-size=4096' \ + --iova=pa -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd mac testpmd> start -12. Rerun steps 3-6. +12. Rerun step 3-6. -Test Case 2: PVP split ring all path multi-queues vhost async operations with M to 1 mapping between vrings and CBDMA virtual channels --------------------------------------------------------------------------------------------------------------------------------------- +Test Case 2: PVP split ring all path multi-queues vhost async operations test with one CBDMA device being shared among multiple tx/rx queues +--------------------------------------------------------------------------------------------------------------------------------------------- This case tests split ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations -and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as VA and PA mode have been tested. +with one CBDMA device being shared among multiple tx/rx queues. 
Both iova as VA and PA mode have been tested. -1. Bind 1 NIC port and 4 CBDMA devices to vfio-pci, as common step 1. +1. Bind 1 NIC port and 4 CBDMA device to vfio-pci, as common step 1. 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=8,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.1;txq5@0000:00:01.1;txq6@0000:00:01.1;txq7@0000:00:01.1;rxq0@0000:00:01.0;rxq1@0000:00:01.0;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.1;rxq5@0000:00:01.1;rxq6@0000:00:01.1;rxq7@0000:00:01.1]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd mac testpmd> start 3. Launch virtio-user with inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=1,queues=8 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=8 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start -3. Send tcp imix packets [64,1518] from packet generator as common step2, and then check the throughput can get expected data:: +4. Send imix packets [64, 128, 256, 512, 1024, 1518] from packet generator as common step2, and then check the throughput can get expected data:: testpmd> show port stats all @@ -230,7 +227,7 @@ and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as V 7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=0,queues=8 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=0,queues=8 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -238,7 +235,7 @@ and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as V 8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=1,queues=8 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=1,queues=8 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -246,7 +243,7 @@ and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as V 9. 
Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=0,queues=8 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=0,queues=8 \ -- -i --enable-hw-vlan-strip --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -254,60 +251,54 @@ and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as V 10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=8 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start 11. Quit all testpmd and relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 -a 0000:00:01.2 -a 0000:00:01.3 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=8,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.1;txq3@0000:00:01.1;txq4@0000:00:01.2;txq5@0000:00:01.2;txq6@0000:00:01.3;txq7@0000:00:01.3;rxq0@0000:00:01.0;rxq1@0000:00:01.0;rxq2@0000:00:01.1;rxq3@0000:00:01.1;rxq4@0000:00:01.2;rxq5@0000:00:01.2;rxq6@0000:00:01.3;rxq7@0000:00:01.3]' \ + --iova=va -- -i --nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd mac testpmd> start -12. Rerun steps 7. +12. Rerun step 7. 13. Quit all testpmd and relaunch vhost with iova=pa by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:ca:00.0 -a 0000:00:01.0 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=8,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.0;txq5@0000:00:01.0;txq6@0000:00:01.0;txq7@0000:00:01.0;rxq0@0000:00:01.0;rxq1@0000:00:01.0;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.0;rxq5@0000:00:01.0;rxq6@0000:00:01.0;rxq7@0000:00:01.0]' \ + --iova=pa -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd mac testpmd> start -14. Rerun steps 8. +14. Rerun step 8. 
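The vhost launch commands in Test Cases 1 and 2 differ only in how tx/rx queues are assigned to CBDMA devices in the dmas list. The small helper below is not from the patch; it is a hypothetical sketch that composes the same `--vdev` string, to make the `txqN@BDF;rxqN@BDF` and `dma-ring-size` syntax explicit. The DTS suite in patch 2/3 builds the identical list inline.

    # Hypothetical helper, shown only to make the dmas/dma-ring-size vdev syntax explicit.
    def build_vhost_vdev(iface, queues, dma_map, dma_ring_size=None):
        """dma_map maps virtqueue names to DMA BDFs, e.g. {"txq0": "0000:00:01.0", "rxq0": "0000:00:01.1"}."""
        dmas = ";".join("%s@%s" % (q, bdf) for q, bdf in dma_map.items())
        vdev = "net_vhost0,iface=%s,queues=%d,dmas=[%s]" % (iface, queues, dmas)
        if dma_ring_size is not None:
            vdev += ",dma-ring-size=%d" % dma_ring_size
        return "--vdev '%s'" % vdev

    # Test Case 2 style sharing: 8 tx and 8 rx queues spread over 2 CBDMA devices.
    dma_map = {"txq%d" % i: "0000:00:01.0" if i < 4 else "0000:00:01.1" for i in range(8)}
    dma_map.update({"rxq%d" % i: "0000:00:01.0" if i < 4 else "0000:00:01.1" for i in range(8)})
    print(build_vhost_vdev("./vhost_net0", 8, dma_map))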
-Test Case 3: PVP split ring dynamic queue number vhost async operations with M to N mapping between vrings and CBDMA virtual channels -------------------------------------------------------------------------------------------------------------------------------------- +Test Case 3: PVP split ring dynamic queue number vhost async operations with cbdma +---------------------------------------------------------------------------------- This case tests if the vhost-user async operation with cbdma channels can work normally when the queue number of split ring dynamic change. Both iova as VA and PA mode have been tested. -1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. +1. Bind 1 NIC port and 8 CBDMA device to vfio-pci, as common step 1. -2. Launch vhost by below command(1:N mapping):: +2. Launch vhost by below command:: - #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7] - testpmd> port config all rss ipv4-tcp + #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=./vhost_net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.1],dma-ring-size=32' \ + --iova=va -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 3. Launch virtio-user by below command:: #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>start -4. Send tcp imix packets[64,1518] from packet generator with random ip, check perforamnce can get target. +4. Send imix packets [64, 128, 256, 512, 1024, 1518] from packet generator with random ip, check perforamnce can get target. 5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: @@ -315,10 +306,9 @@ This case tests if the vhost-user async operation with cbdma channels can work n 6. Quit and relaunch vhost without CBDMA:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1' \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:ca:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=./vhost_net0,queues=8,client=1' \ --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> port config all rss ipv4-tcp testpmd>set fwd mac testpmd>start @@ -326,90 +316,60 @@ This case tests if the vhost-user async operation with cbdma channels can work n 8. 
Quit and relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]' \ - --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=./vhost_net0,queues=8,client=1,dmas=[rxq0@0000:00:01.0;rxq1@0000:00:01.0;rxq2@0000:00:01.1;rxq3@0000:00:01.1],dma-ring-size=2048' \ + --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 9. Rerun step 4-5. -10. Quit and relaunch vhost with M:N mapping between vrings and CBDMA virtual channels:: +10. Quit and relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.0;txq5@0000:00:01.0;txq6@0000:00:01.0;rxq2@0000:00:01.1;rxq3@0000:00:01.1;rxq4@0000:00:01.1;rxq5@0000:00:01.1;rxq6@0000:00:01.1;rxq7@0000:00:01.1],dma-ring-size=32' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 11. Rerun step 4-5. -12. Quit and relaunch vhost with diff mapping between vrings and CBDMA virtual channels:: +12. 
Quit and relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore13@0000:00:04.5,lcore13@0000:00:04.6,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore14@0000:00:04.6,lcore14@0000:00:04.7] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 -a 0000:00:01.2 -a 0000:00:01.3 -a 0000:00:01.4 -a 0000:00:01.5 -a 0000:00:01.6 -a 0000:00:01.7 \ + -a 0000:80:01.0 -a 0000:80:01.1 -a 0000:80:01.2 -a 0000:80:01.3 -a 0000:80:01.4 -a 0000:80:01.5 -a 0000:80:01.6 -a 0000:80:01.7 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.1;txq2@0000:00:01.2;txq3@0000:00:01.3;txq4@0000:00:01.4;txq5@0000:00:01.5;txq6@0000:00:01.6;rxq2@0000:80:01.2;rxq3@0000:80:01.3;rxq4@0000:80:01.4;rxq5@0000:80:01.5;rxq6@0000:80:01.6;rxq7@0000:80:01.7]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 13. Start vhost port and rerun step 4-5. -14. Quit and relaunch virtio-user by below command:: +Test Case 4: PVP packed ring all path multi-queues vhost async operation test with each tx/rx queue using one CBDMA device +--------------------------------------------------------------------------------------------------------------------------- +This case tests packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations. +Both iova as VA and PA mode have been tested. - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=0,queues=8,server=1 \ - -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -15. Rerun step 4-5. - -16. Quit and relaunch vhost with iova=pa by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \ - -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2] - testpmd> port config all rss ipv4-tcp - testpmd>set fwd mac - testpmd>start - -17. Rerun step 4-5. 
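For reference, the dynamic queue-number flow of Test Case 3 (quit only the vhost side, relaunch it with a different dmas/queue layout, and let virtio-user in server mode reconnect) is what patch 2/3 of this series automates. The sketch below is hedged: the helper names (`vhost_user_pmd`, `start_vhost_testpmd`, `send_imix_packets`, `check_each_queue_of_port_packets`, `vhost_core_list`) are taken from tests/TestSuite_vhost_cbdma.py, but the wrapper method itself and its argument values are illustrative, not the suite's actual code.

    # Hypothetical wrapper around helpers that appear in tests/TestSuite_vhost_cbdma.py (patch 2/3).
    # It mirrors the quit-and-relaunch steps above: only the vhost side is restarted, while
    # virtio-user keeps running in server mode (server=1) and reconnects.
    def relaunch_vhost_with_new_dmas(self, allow_pci, dmas, nb_cores, queues):
        self.vhost_user_pmd.quit()
        vhost_eal_param = (
            "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas
        )
        vhost_param = "--nb-cores=%d --txq=%d --rxq=%d --txd=1024 --rxd=1024" % (
            nb_cores, queues, queues,
        )
        self.start_vhost_testpmd(
            cores=self.vhost_core_list,
            param=vhost_param,
            eal_param=vhost_eal_param,
            ports=allow_pci,
            iova_mode="va",
        )
        self.vhost_user_pmd.execute_cmd("set fwd mac")
        self.vhost_user_pmd.execute_cmd("start")
        self.send_imix_packets(mode="dynamic_queue")          # illustrative mode label
        self.check_each_queue_of_port_packets(queues=queues)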
- -Test Case 4: PVP packed ring all path multi-queues vhost async operations with 1 to 1 mapping between vrings and CBDMA virtual channels ---------------------------------------------------------------------------------------------------------------------------------------- -This case tests packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations -and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as VA and PA mode have been tested. - -1. Bind 1 NIC port and 4 CBDMA devices to vfio-pci, as common step 1. +1. Bind 1 NIC port and 4 CBDMA device to vfio-pci, as common step 1. 2. Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3] - testpmd> port config all rss ipv4-tcp + -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 -a 0000:00:01.2 -a 0000:00:01.3 --max-pkt-len=9600 --tx-offloads=0x00008000\ + --vdev 'net_vhost0,iface=./vhost_net0,queues=2,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.1;rxq0@0000:00:01.2;rxq1@0000:00:01.3]' \ + --iova=va -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd mac testpmd> start 3. Launch virtio-user with inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=1,queues=2,packed_vq=1 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=2,packed_vq=1 \ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start -4. Send tcp imix packets [64,1518] from packet generator as common step2, and then check the throughput can get expected data:: +4. Send imix packets [64, 128, 256, 512, 1024, 1518] from packet generator as common step2, and then check the throughput can get expected data:: testpmd> show port stats all @@ -425,7 +385,7 @@ and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as V 7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=0,queues=2,packed_vq=1 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=0,queues=2,packed_vq=1 \ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -433,7 +393,7 @@ and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as V 8. Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=1,queues=2,packed_vq=1 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=1,queues=2,packed_vq=1 \ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -441,7 +401,7 @@ and the mapping between vrings and CBDMA virtual channels is 1:1. 
Both iova as V 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=0,queues=2,packed_vq=1 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=0,queues=2,packed_vq=1 \ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -449,7 +409,7 @@ and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as V 10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=2 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=2 \ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -462,7 +422,7 @@ and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as V 11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=2,queue_size=1025 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=2,queue_size=1025 \ -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1025 --rxd=1025 testpmd> set fwd csum testpmd> start @@ -470,42 +430,38 @@ and the mapping between vrings and CBDMA virtual channels is 1:1. Both iova as V 12. Quit all testpmd and relaunch vhost with iova=pa by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost \ - -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=pa -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3] - testpmd> port config all rss ipv4-tcp + -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 -a 0000:00:01.2 -a 0000:00:01.3 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=2,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.1;rxq0@0000:00:01.2;rxq1@0000:00:01.3]' \ + --iova=pa -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd> set fwd mac testpmd> start -13. Rerun steps 3-6. +13. Rerun step 11. -Test Case 5: PVP packed ring all path multi-queues vhost async operations with M to 1 mapping between vrings and CBDMA virtual channels ---------------------------------------------------------------------------------------------------------------------------------------- -This case tests packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations -and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as VA and PA mode have been tested. 
+Test Case 5: PVP packed ring all path multi-queues vhost async operations test with one CBDMA device being shared among multiple tx/rx queues +---------------------------------------------------------------------------------------------------------------------------------------------- +This case tests packed ring in each virtio path with multi-queues can work normally when vhost uses the asynchronous enqueue and dequeue operations. +Both iova as VA and PA mode have been tested. -1. Bind 1 NIC port and 4 CBDMA devices to vfio-pci, as common step 1. +1. Bind 1 NIC port and 4 CBDMA device to vfio-pci, as common step 1. 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=8,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.1;txq5@0000:00:01.1;txq6@0000:00:01.1;txq7@0000:00:01.1;rxq0@0000:00:01.1;rxq1@0000:00:01.1;rxq2@0000:00:01.1;rxq3@0000:00:01.1;rxq4@0000:00:01.0;rxq5@0000:00:01.0;rxq6@0000:00:01.0;rxq7@0000:00:01.0]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd mac testpmd> start 3. Launch virtio-user with inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,packed_vq=1 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start -4. Send tcp imix packets [64,1518] from packet generator as common step2, and then check the throughput can get expected data:: +4. Send imix packets [64, 128, 256, 512, 1024, 1518] from packet generator as common step2, and then check the throughput can get expected data:: testpmd> show port stats all @@ -521,7 +477,7 @@ and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as V 7. Relaunch virtio-user with mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=0,queues=8,packed_vq=1 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -529,7 +485,7 @@ and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as V 8. 
Relaunch virtio-user with inorder non-mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=1,queues=8,packed_vq=1 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -537,7 +493,7 @@ and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as V 9. Relaunch virtio-user with non-mergeable path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=0,queues=8,packed_vq=1 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -545,7 +501,7 @@ and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as V 10. Relaunch virtio-user with vectorized path, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -553,60 +509,54 @@ and the mapping between vrings and CBDMA virtual channels is M:1. Both iova as V 11. Relaunch virtio-user with vectorized path and ring size is not power of 2, then repeat step 4-6:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=0,in_order=0,packed_vq=1,vectorized=1,queues=8,queue_size=1025 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1025 --rxd=1025 testpmd> set fwd csum testpmd> start 12. Quit all testpmd and relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 -a 0000:00:01.2 -a 0000:00:01.3 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=8,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.1;txq3@0000:00:01.1;txq4@0000:00:01.2;txq5@0000:00:01.2;txq6@0000:00:01.3;txq7@0000:00:01.3;rxq0@0000:00:01.0;rxq1@0000:00:01.0;rxq2@0000:00:01.1;rxq3@0000:00:01.1;rxq4@0000:00:01.2;rxq5@0000:00:01.2;rxq6@0000:00:01.3;rxq7@0000:00:01.3]' \ + --iova=va -- -i --nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd mac testpmd> start -13. 
Rerun steps 7. +13. Rerun steps 11. 14. Quit all testpmd and relaunch vhost with iova=pa by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=pa -- -i --nb-cores=8 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.0,lcore14@0000:00:04.0,lcore15@0000:00:04.0,lcore16@0000:00:04.0,lcore17@0000:00:04.0,lcore18@0000:00:04.0] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:ca:00.0 -a 0000:00:01.0 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=8,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.0;txq5@0000:00:01.0;txq6@0000:00:01.0;txq7@0000:00:01.0;rxq0@0000:00:01.0;rxq1@0000:00:01.0;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.0;rxq5@0000:00:01.0;rxq6@0000:00:01.0;rxq7@0000:00:01.0]' \ + --iova=pa -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd mac testpmd> start -15. Rerun steps 8. +15. Rerun step 11. -Test Case 6: PVP packed ring dynamic queue number vhost async operations with M to N mapping between vrings and CBDMA virtual channels --------------------------------------------------------------------------------------------------------------------------------------- +Test Case 6: PVP packed ring dynamic queue number vhost async operations with cbdma +-------------------------------------------------------------------------------------- This case tests if the vhost-user async operation with cbdma channles can work normally when the queue number of split ring dynamic change. Both iova as VA and PA mode have been tested. -1. Bind 1 NIC port and 8 CBDMA devices to vfio-pci, as common step 1. +1. Bind 1 NIC port and 8 CBDMA device to vfio-pci, as common step 1. 2. Launch vhost by below command(1:N mapping):: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=./vhost_net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.1],dma-ring-size=32' \ + --iova=va -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 3. Launch virtio-user by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=1,queues=8,server=1,packed_vq=1 \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost_net0,mrg_rxbuf=1,in_order=1,queues=8,server=1,packed_vq=1 \ -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start -4. 
Send tcp imix packets[64,1518] from packet generator with random ip, check perforamnce can get target. +4. Send imix packets [64, 128, 256, 512, 1024, 1518] from packet generator with random ip, check perforamnce can get target. 5. Stop vhost port, check that there are packets in both directions of RX and TX in each queue from vhost log:: @@ -614,10 +564,9 @@ This case tests if the vhost-user async operation with cbdma channles can work n 6. Quit and relaunch vhost without CBDMA:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1' \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:ca:00.0 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=./vhost_net0,queues=8,client=1' \ --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=1024 --rxd=1024 - testpmd> port config all rss ipv4-tcp testpmd>set fwd mac testpmd>start @@ -625,59 +574,31 @@ This case tests if the vhost-user async operation with cbdma channles can work n 8. Quit and relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]' \ - --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.3] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 -a 0000:00:01.2 -a 0000:00:01.3 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=./vhost_net0,queues=8,client=1,dmas=[rxq0@0000:00:01.0;rxq1@0000:00:01.1;rxq2@0000:00:01.0;rxq3@0000:00:01.1],dma-ring-size=2048' \ + --iova=va -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 9. Rerun step 4-5. -10. Quit and relaunch vhost with M:N mapping between vrings and CBDMA virtual channels:: +10. Quit and relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.0;txq5@0000:00:01.0;txq6@0000:00:01.0;rxq2@0000:00:01.1;rxq3@0000:00:01.1;rxq4@0000:00:01.1;rxq5@0000:00:01.1;rxq6@0000:00:01.1;rxq7@0000:00:01.1],dma-ring-size=32' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 11. Rerun step 4-5. -12. Quit and relaunch vhost with diff mapping between vrings and CBDMA virtual channels:: +12. 
Quit and relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore12@0000:00:04.1,lcore12@0000:00:04.2,lcore13@0000:00:04.3,lcore13@0000:00:04.4,lcore13@0000:00:04.5,lcore13@0000:00:04.6,lcore14@0000:00:04.4,lcore14@0000:00:04.5,lcore14@0000:00:04.6,lcore14@0000:00:04.7] - testpmd> port config all rss ipv4-tcp + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:ca:00.0 -a 0000:00:01.0 -a 0000:00:01.1 -a 0000:00:01.2 -a 0000:00:01.3 -a 0000:00:01.4 -a 0000:00:01.5 -a 0000:00:01.6 -a 0000:00:01.7 \ + -a 0000:80:01.0 -a 0000:80:01.1 -a 0000:80:01.2 -a 0000:80:01.3 -a 0000:80:01.4 -a 0000:80:01.5 -a 0000:80:01.6 -a 0000:80:01.7 \ + --vdev 'net_vhost0,iface=./vhost_net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.1;txq2@0000:00:01.2;txq3@0000:00:01.3;txq4@0000:00:01.4;txq5@0000:00:01.5;txq6@0000:00:01.6;rxq2@0000:80:01.2;rxq3@0000:80:01.3;rxq4@0000:80:01.4;rxq5@0000:80:01.5;rxq6@0000:80:01.6;rxq7@0000:80:01.7]' \ + --iova=va -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 13. Start vhost port and rerun step 4-5. - -14. Quit and relaunch virtio-user by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=vhost-net0,mrg_rxbuf=1,in_order=0,packed=on,queues=8,server=1 \ - -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -15. Rerun step 4-5. - -16. Quit and relaunch vhost with iova=pa by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --file-prefix=vhost -a 0000:18:00.0 \ - -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=pa -- -i --nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2] - testpmd> port config all rss ipv4-tcp - testpmd>set fwd mac - testpmd>start - -17. Rerun step 4-5. 
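Every test case above repeats the same verification: stop forwarding on the vhost side and confirm that each queue shows packets in both RX and TX directions. The sketch below is not part of the patch; the regex assumes testpmd's per-stream "Forward Stats for RX Port=/Queue=" output printed by `testpmd> stop`, and the function name is illustrative (the DTS suite in patch 2/3 implements this check as check_each_queue_of_port_packets).

    import re

    # Assumed per-stream block printed by `testpmd> stop`, e.g.:
    #   ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 0/Queue= 3 -------
    #   RX-packets: 251732        TX-packets: 251732        TX-dropped: 0
    FORWARD_STATS_RE = re.compile(
        r"Forward Stats for RX Port=\s*\d+/Queue=\s*(\d+).*?"
        r"RX-packets:\s*(\d+).*?TX-packets:\s*(\d+)",
        re.S,
    )

    def all_queues_forwarded(stop_output, expected_queues):
        """Return True when every queue 0..expected_queues-1 reports non-zero RX and TX counts."""
        counts = {}
        for queue, rx, tx in FORWARD_STATS_RE.findall(stop_output):
            counts[int(queue)] = (int(rx), int(tx))
        return all(
            q in counts and counts[q][0] > 0 and counts[q][1] > 0
            for q in range(expected_queues)
        )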
From patchwork Fri Nov 11 07:11:27 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 119765
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 2/3] tests/vhost_cbdma: modify the dmas parameter
Date: Fri, 11 Nov 2022 15:11:27 +0800
Message-Id: <20221111071127.2423405-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: test suite reviews and discussions

From DPDK-22.11, the dmas parameter has changed from `lcore-dma=[lcore1@0000:00:04.0]` to `dmas=[txq0@0000:00:04.0]` by a DPDK local patch, so modify the dmas parameter.
Signed-off-by: Wei Ling --- tests/TestSuite_vhost_cbdma.py | 859 ++++++++++++++++----------------- 1 file changed, 417 insertions(+), 442 deletions(-) diff --git a/tests/TestSuite_vhost_cbdma.py b/tests/TestSuite_vhost_cbdma.py index 88ee83df..ab2341f2 100644 --- a/tests/TestSuite_vhost_cbdma.py +++ b/tests/TestSuite_vhost_cbdma.py @@ -70,7 +70,7 @@ class TestVhostCbdma(TestCase): self.test_result = {} self.nb_desc = self.test_parameters.get(list(self.test_parameters.keys())[0])[0] self.dut.send_expect("killall -I %s" % self.testpmd_name, "#", 20) - self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") + # self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") self.mode_list = [] def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num, allow_diff_socket=False): @@ -189,29 +189,22 @@ class TestVhostCbdma(TestCase): Test Case 1: PVP split ring all path multi-queues vhost async operation with 1 to 1 mapping between vrings and CBDMA virtual channels """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=4) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "rxq0@%s;" + "rxq1@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[1], self.cbdma_list[1], - self.vhost_core_list[1], self.cbdma_list[2], - self.vhost_core_list[1], self.cbdma_list[3], ) ) vhost_eal_param = ( - "--vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]'" - ) - vhost_param = ( - "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[%s]'" % dmas ) + vhost_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] for i in self.cbdma_list: allow_pci.append(i) @@ -251,6 +244,7 @@ class TestVhostCbdma(TestCase): if not self.check_2M_env: self.vhost_user_pmd.quit() + vhost_eal_param += ",dma-ring-size=4096" self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -294,30 +288,49 @@ class TestVhostCbdma(TestCase): self, ): """ - Test Case 2: PVP split ring all path multi-queues vhost async operations with M to 1 mapping between vrings and CBDMA virtual channels + Test Case 2: PVP split ring all path multi-queues vhost async operations test with one CBDMA device being shared among multiple tx/rx queues """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=4) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s;" + "rxq0@%s;" + "rxq1@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[2], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[3], - self.cbdma_list[2], - self.vhost_core_list[4], - self.cbdma_list[3], ) ) - vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - vhost_param = ( - "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 
'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] for i in self.cbdma_list: allow_pci.append(i) @@ -355,40 +368,120 @@ class TestVhostCbdma(TestCase): self.check_each_queue_of_port_packets(queues=8) self.virtio_user_pmd.quit() + self.vhost_user_pmd.quit() + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s;" + "rxq0@%s;" + "rxq1@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" + % ( + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[3], + self.cbdma_list[3], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + for key, path in SPLIT_RING_PATH.items(): + if key == "mergeable": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=./vhost-net0,%s,queues=8'" + % (self.virtio_mac, path) + ) + + mode = key + "_VA_diff" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.vhost_user_pmd.execute_cmd("start") + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + if not self.check_2M_env: self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s;" + "rxq0@%s;" + "rxq1@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[2], self.cbdma_list[0], - self.vhost_core_list[3], self.cbdma_list[0], - self.vhost_core_list[4], self.cbdma_list[0], - self.vhost_core_list[5], self.cbdma_list[0], - self.vhost_core_list[6], self.cbdma_list[0], - self.vhost_core_list[7], self.cbdma_list[0], - self.vhost_core_list[8], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], self.cbdma_list[0], ) ) - vhost_param = ( - "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas ) + vhost_param = "--nb-cores=6 --txq=8 --rxq=8 --txd=1024 --rxd=1024" self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -397,7 +490,7 @@ class TestVhostCbdma(TestCase): iova_mode="pa", ) for key, path in SPLIT_RING_PATH.items(): - if key == "inorder_mergeable": + if key == "inorder_non_mergeable": virtio_eal_param = ( "--vdev 'net_virtio_user0,mac=%s,path=./vhost-net0,%s,queues=8'" % (self.virtio_mac, path) @@ 
-432,42 +525,18 @@ class TestVhostCbdma(TestCase): self, ): """ - Test Case 3: PVP split ring dynamic queue number vhost async operations with M to N mapping between vrings and CBDMA virtual channels + Test Case 3: PVP split ring dynamic queue number vhost async operations with cbdma """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=8) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" - % ( - self.vhost_core_list[1], - self.cbdma_list[0], - self.vhost_core_list[1], - self.cbdma_list[1], - self.vhost_core_list[1], - self.cbdma_list[2], - self.vhost_core_list[1], - self.cbdma_list[3], - self.vhost_core_list[2], - self.cbdma_list[4], - self.vhost_core_list[2], - self.cbdma_list[5], - self.vhost_core_list[2], - self.cbdma_list[6], - self.vhost_core_list[2], - self.cbdma_list[7], - ) + dmas = "txq0@%s;" "txq1@%s" % ( + self.cbdma_list[0], + self.cbdma_list[1], ) - vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;rxq0;rxq1]'" - vhost_param = ( - "--nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s],dma-ring-size=64'" + % dmas ) + vhost_param = "--nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] for i in self.cbdma_list: allow_pci.append(i) @@ -511,27 +580,23 @@ class TestVhostCbdma(TestCase): self.check_each_queue_of_port_packets(queues=1) self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "rxq0@%s;" + "rxq1@%s;" + "rxq2@%s;" + "rxq3@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[2], + self.cbdma_list[0], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[3], - self.cbdma_list[2], - self.vhost_core_list[4], - self.cbdma_list[3], ) ) - vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]'" - vhost_param = ( - "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s],dma-ring-size=2048'" + % dmas ) + vhost_param = "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024" self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -545,42 +610,41 @@ class TestVhostCbdma(TestCase): self.check_each_queue_of_port_packets(queues=4) self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[1], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[2], self.cbdma_list[1], - self.vhost_core_list[2], - self.cbdma_list[2], - self.vhost_core_list[2], - self.cbdma_list[3], - self.vhost_core_list[2], - self.cbdma_list[4], - self.vhost_core_list[2], - self.cbdma_list[5], - self.vhost_core_list[2], - self.cbdma_list[6], - self.vhost_core_list[2], - 
self.cbdma_list[7], ) ) - vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - vhost_param = ( - "--nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s],dma-ring-size=64'" + % dmas ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -594,51 +658,40 @@ class TestVhostCbdma(TestCase): self.check_each_queue_of_port_packets(queues=8) self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[2], - self.cbdma_list[0], - self.vhost_core_list[2], self.cbdma_list[1], - self.vhost_core_list[2], self.cbdma_list[2], - self.vhost_core_list[3], self.cbdma_list[3], - self.vhost_core_list[3], self.cbdma_list[4], - self.vhost_core_list[3], self.cbdma_list[5], - self.vhost_core_list[3], self.cbdma_list[6], - self.vhost_core_list[4], + self.cbdma_list[2], + self.cbdma_list[3], self.cbdma_list[4], - self.vhost_core_list[4], self.cbdma_list[5], - self.vhost_core_list[4], self.cbdma_list[6], - self.vhost_core_list[4], self.cbdma_list[7], ) ) - vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - vhost_param = ( - "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -651,70 +704,6 @@ class TestVhostCbdma(TestCase): self.send_imix_packets(mode=mode) self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - for key, path in SPLIT_RING_PATH.items(): - if key == "mergeable": - virtio_eal_param = ( - "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8,server=1'" - % (self.virtio_mac, path) - ) - mode = key + "_VA_M:N_diff" - self.mode_list.append(mode) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" - % ( - self.vhost_core_list[1], - self.cbdma_list[0], - self.vhost_core_list[2], - self.cbdma_list[0], - self.vhost_core_list[3], - self.cbdma_list[1], - self.vhost_core_list[3], - self.cbdma_list[2], - self.vhost_core_list[4], - self.cbdma_list[1], - self.vhost_core_list[4], - self.cbdma_list[2], - self.vhost_core_list[5], - self.cbdma_list[1], - self.vhost_core_list[5], - self.cbdma_list[2], - ) - ) - vhost_eal_param = "--vdev 
'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - vhost_param = ( - "--nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma - ) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="pa", - ) - mode = "mergeable" + "_PA_M:N_diff" - self.mode_list.append(mode) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.test_target = self.running_case self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ self.test_target @@ -727,32 +716,25 @@ class TestVhostCbdma(TestCase): self, ): """ - Test Case 4: PVP packed ring all path multi-queues vhost async operations with 1 to 1 mapping between vrings and CBDMA virtual channels + Test Case 4: PVP packed ring all path multi-queues vhost async operation test with each tx/rx queue using one CBDMA device """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=4) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "rxq0@%s;" + "rxq1@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[1], self.cbdma_list[1], - self.vhost_core_list[1], self.cbdma_list[2], - self.vhost_core_list[1], self.cbdma_list[3], ) ) vhost_eal_param = ( - "--vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[txq0;txq1;rxq0;rxq1]'" - ) - vhost_param = ( - "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,dmas=[%s]'" % dmas ) + vhost_param = "--nb-cores=1 --txq=2 --rxq=2 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] for i in self.cbdma_list: allow_pci.append(i) @@ -840,27 +822,46 @@ class TestVhostCbdma(TestCase): Test Case 5: PVP packed ring all path multi-queues vhost async operations with M to 1 mapping between vrings and CBDMA virtual channels """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=4) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s;" + "rxq0@%s;" + "rxq1@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[2], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], self.cbdma_list[1], - self.vhost_core_list[3], - self.cbdma_list[2], - self.vhost_core_list[4], - self.cbdma_list[3], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], ) ) - vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - vhost_param = ( - "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas ) + vhost_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" allow_pci = [self.dut.ports_info[0]["pci"]] for i in self.cbdma_list: allow_pci.append(i) @@ -899,39 +900,118 @@ class TestVhostCbdma(TestCase): self.check_each_queue_of_port_packets(queues=8) self.virtio_user_pmd.quit() + self.vhost_user_pmd.quit() + dmas = ( + 
"txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s;" + "rxq0@%s;" + "rxq1@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" + % ( + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[3], + self.cbdma_list[3], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[2], + self.cbdma_list[2], + self.cbdma_list[3], + self.cbdma_list[3], + ) + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + param=vhost_param, + eal_param=vhost_eal_param, + ports=allow_pci, + iova_mode="va", + ) + virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" + for key, path in PACKED_RING_PATH.items(): + if key == "vectorized_path_not_power_of_2": + virtio_eal_param = ( + "--vdev 'net_virtio_user0,mac=%s,path=./vhost-net0,%s,queues=8'" + % (self.virtio_mac, path) + ) + + mode = key + "_VA_diff" + self.mode_list.append(mode) + self.start_virtio_testpmd( + cores=self.virtio_core_list, + param=virtio_param, + eal_param=virtio_eal_param, + ) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + + mode += "_RestartVhost" + self.vhost_user_pmd.execute_cmd("start") + self.mode_list.append(mode) + self.send_imix_packets(mode=mode) + self.check_each_queue_of_port_packets(queues=8) + self.virtio_user_pmd.quit() + if not self.check_2M_env: self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "txq7@%s;" + "rxq0@%s;" + "rxq1@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[2], self.cbdma_list[0], - self.vhost_core_list[3], self.cbdma_list[0], - self.vhost_core_list[4], self.cbdma_list[0], - self.vhost_core_list[5], self.cbdma_list[0], - self.vhost_core_list[6], self.cbdma_list[0], - self.vhost_core_list[7], self.cbdma_list[0], - self.vhost_core_list[8], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], self.cbdma_list[0], ) ) - vhost_param = ( - "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[%s]'" % dmas ) self.start_vhost_testpmd( cores=self.vhost_core_list, @@ -942,7 +1022,7 @@ class TestVhostCbdma(TestCase): ) virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" for key, path in PACKED_RING_PATH.items(): - if key == "inorder_mergeable": + if key == "vectorized_path_not_power_of_2": virtio_eal_param = ( "--vdev 'net_virtio_user0,mac=%s,path=./vhost-net0,%s,queues=8'" % (self.virtio_mac, path) @@ -980,39 +1060,16 @@ class TestVhostCbdma(TestCase): Test Case 6: PVP packed ring dynamic queue number vhost async operations with M to N mapping between vrings and CBDMA virtual channels """ self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=8) - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" - % ( 
- self.vhost_core_list[1], - self.cbdma_list[0], - self.vhost_core_list[1], - self.cbdma_list[1], - self.vhost_core_list[1], - self.cbdma_list[2], - self.vhost_core_list[1], - self.cbdma_list[3], - self.vhost_core_list[2], - self.cbdma_list[4], - self.vhost_core_list[2], - self.cbdma_list[5], - self.vhost_core_list[2], - self.cbdma_list[6], - self.vhost_core_list[2], - self.cbdma_list[7], - ) + dmas = "txq0@%s;" "txq1@%s" % ( + self.cbdma_list[0], + self.cbdma_list[1], ) - vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;rxq0;rxq1]'" - vhost_param = ( - "--nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s],dma-ring-size=64'" + % dmas ) + vhost_param = "--nb-cores=2 --txq=2 --rxq=2 --txd=1024 --rxd=1024" + allow_pci = [self.dut.ports_info[0]["pci"]] for i in self.cbdma_list: allow_pci.append(i) @@ -1056,27 +1113,23 @@ class TestVhostCbdma(TestCase): self.check_each_queue_of_port_packets(queues=1) self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "rxq0@%s;" + "rxq1@%s;" + "rxq2@%s;" + "rxq3@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[2], self.cbdma_list[1], - self.vhost_core_list[3], - self.cbdma_list[2], - self.vhost_core_list[4], - self.cbdma_list[3], + self.cbdma_list[0], + self.cbdma_list[1], ) ) - vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3]'" - vhost_param = ( - "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s],dma-ring-size=2048'" + % dmas ) + vhost_param = "--nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024" self.start_vhost_testpmd( cores=self.vhost_core_list, param=vhost_param, @@ -1090,42 +1143,40 @@ class TestVhostCbdma(TestCase): self.check_each_queue_of_port_packets(queues=4) self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], self.cbdma_list[0], - self.vhost_core_list[1], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[0], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], + self.cbdma_list[1], self.cbdma_list[1], - self.vhost_core_list[2], self.cbdma_list[1], - self.vhost_core_list[2], - self.cbdma_list[2], - self.vhost_core_list[2], - self.cbdma_list[3], - self.vhost_core_list[2], - self.cbdma_list[4], - self.vhost_core_list[2], - self.cbdma_list[5], - self.vhost_core_list[2], - self.cbdma_list[6], - self.vhost_core_list[2], - self.cbdma_list[7], ) ) - vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - vhost_param = ( - "--nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas ) + vhost_param = "--nb-cores=2 --txq=8 --rxq=8 --txd=1024 --rxd=1024" self.start_vhost_testpmd( cores=self.vhost_core_list, 
param=vhost_param, @@ -1139,50 +1190,38 @@ class TestVhostCbdma(TestCase): self.check_each_queue_of_port_packets(queues=8) self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" + dmas = ( + "txq0@%s;" + "txq1@%s;" + "txq2@%s;" + "txq3@%s;" + "txq4@%s;" + "txq5@%s;" + "txq6@%s;" + "rxq2@%s;" + "rxq3@%s;" + "rxq4@%s;" + "rxq5@%s;" + "rxq6@%s;" + "rxq7@%s" % ( - self.vhost_core_list[1], - self.cbdma_list[0], - self.vhost_core_list[2], self.cbdma_list[0], - self.vhost_core_list[2], self.cbdma_list[1], - self.vhost_core_list[2], self.cbdma_list[2], - self.vhost_core_list[3], self.cbdma_list[3], - self.vhost_core_list[3], self.cbdma_list[4], - self.vhost_core_list[3], self.cbdma_list[5], - self.vhost_core_list[3], self.cbdma_list[6], - self.vhost_core_list[4], + self.cbdma_list[2], + self.cbdma_list[3], self.cbdma_list[4], - self.vhost_core_list[4], self.cbdma_list[5], - self.vhost_core_list[4], self.cbdma_list[6], - self.vhost_core_list[4], self.cbdma_list[7], ) ) - vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - vhost_param = ( - "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas ) self.start_vhost_testpmd( cores=self.vhost_core_list, @@ -1196,70 +1235,6 @@ class TestVhostCbdma(TestCase): self.send_imix_packets(mode=mode) self.check_each_queue_of_port_packets(queues=8) - self.virtio_user_pmd.quit() - virtio_param = "--nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024" - for key, path in PACKED_RING_PATH.items(): - if key == "mergeable": - virtio_eal_param = ( - "--vdev 'net_virtio_user0,mac=%s,path=vhost-net0,%s,queues=8,server=1'" - % (self.virtio_mac, path) - ) - mode = key + "_VA_M:N_diff" - self.mode_list.append(mode) - self.start_virtio_testpmd( - cores=self.virtio_core_list, - param=virtio_param, - eal_param=virtio_eal_param, - ) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - - self.vhost_user_pmd.quit() - lcore_dma = ( - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s," - "lcore%s@%s" - % ( - self.vhost_core_list[1], - self.cbdma_list[0], - self.vhost_core_list[2], - self.cbdma_list[0], - self.vhost_core_list[3], - self.cbdma_list[1], - self.vhost_core_list[3], - self.cbdma_list[2], - self.vhost_core_list[4], - self.cbdma_list[1], - self.vhost_core_list[4], - self.cbdma_list[2], - self.vhost_core_list[5], - self.cbdma_list[1], - self.vhost_core_list[5], - self.cbdma_list[2], - ) - ) - vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]'" - vhost_param = ( - "--nb-cores=5 --txq=8 --rxq=8 --txd=1024 --rxd=1024 --lcore-dma=[%s]" - % lcore_dma - ) - self.start_vhost_testpmd( - cores=self.vhost_core_list, - param=vhost_param, - eal_param=vhost_eal_param, - ports=allow_pci, - iova_mode="pa", - ) - mode = "mergeable" + "_PA_M:N_diff" - self.mode_list.append(mode) - self.send_imix_packets(mode=mode) - self.check_each_queue_of_port_packets(queues=8) - self.test_target = self.running_case self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ self.test_target From patchwork Fri Nov 11 07:11:37 
2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 119766
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 3/3] conf/vhost_cbdma: modify the suite config by case changed
Date: Fri, 11 Nov 2022 15:11:37 +0800
Message-Id: <20221111071137.2423465-1-weix.ling@intel.com>

Modify the suite config to match the changed test cases.
Signed-off-by: Wei Ling --- conf/vhost_cbdma.cfg | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/conf/vhost_cbdma.cfg b/conf/vhost_cbdma.cfg index 9dbdb77f..d192facc 100644 --- a/conf/vhost_cbdma.cfg +++ b/conf/vhost_cbdma.cfg @@ -28,16 +28,16 @@ expected_throughput = { 'non_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, 'vectorized_VA': {'imix': {1024: 0.000}}, 'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}}, - 'inorder_mergeable_PA': {'imix': {1024: 0.000}}, - 'inorder_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}}, + 'mergeable_VA_diff': {'imix': {1024: 0.000}}, + 'mergeable_VA_diff_RestartVhost': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_PA': {'imix': {1024: 0.000}}, + 'inorder_non_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}}, 'test_perf_pvp_split_ring_dynamic_queue_number_vhost_async_operation_with_M_to_N': { 'inorder_mergeable_VA_1:N': {'imix': {1024: 0.000}}, 'inorder_mergeable_VA_without_CBDMA': {'imix': {1024: 0.000}}, 'inorder_mergeable_VA_1:1': {'imix': {1024: 0.000}}, 'inorder_mergeable_VA_M:N': {'imix': {1024: 0.000}}, - 'inorder_mergeable_VA_M:N_diff': {'imix': {1024: 0.000}}, - 'mergeable_VA_M:N_diff': {'imix': {1024: 0.000}}, - 'mergeable_PA_M:N_diff': {'imix': {1024: 0.000}},}, + 'inorder_mergeable_VA_M:N_diff': {'imix': {1024: 0.000}}}, 'test_perf_pvp_packed_ring_all_path_multi_queues_vhost_async_operation_with_1_to_1': { 'inorder_mergeable_VA': {'imix': {1024: 0.000}}, 'inorder_mergeable_VA_RestartVhost': {'imix': {1024: 0.000}}, @@ -66,13 +66,13 @@ expected_throughput = { 'vectorized_VA_RestartVhost': {'imix': {1024: 0.000}}, 'vectorized_path_not_power_of_2_VA': {'imix': {1024: 0.000}}, 'vectorized_path_not_power_of_2_VA_RestartVhost': {'imix': {1024: 0.000}}, - 'inorder_mergeable_PA': {'imix': {1024: 0.000}}, - 'inorder_mergeable_PA_RestartVhost': {'imix': {1024: 0.000}}}, + 'vectorized_path_not_power_of_2_VA_diff': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_VA_diff_RestartVhost': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_PA': {'imix': {1024: 0.000}}, + 'vectorized_path_not_power_of_2_PA_RestartVhost': {'imix': {1024: 0.000}}}, 'test_perf_pvp_packed_ring_dynamic_queue_number_vhost_async_operation_with_M_to_N': { 'inorder_mergeable_VA_1:N': {'imix': {1024: 0.000}}, 'inorder_mergeable_VA_without_CBDMA': {'imix': {1024: 0.000}}, 'inorder_mergeable_VA_1:1': {'imix': {1024: 0.000}}, 'inorder_mergeable_VA_M:N': {'imix': {1024: 0.000}}, - 'inorder_mergeable_VA_M:N_diff': {'imix': {1024: 0.000}}, - 'mergeable_VA_M:N_diff': {'imix': {1024: 0.000}}, - 'mergeable_PA_M:N_diff': {'imix': {1024: 0.000}},},} + 'inorder_mergeable_VA_M:N_diff': {'imix': {1024: 0.000}}}}
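For context, a minimal sketch of how the renamed expected_throughput keys are consumed by the suite; the per-mode lookup order shown here is an assumption about how the get_suite_cfg() result is indexed, and the numbers are placeholders rather than measured baselines::

    # Placeholder structure mirroring conf/vhost_cbdma.cfg.
    expected_throughput = {
        "test_perf_pvp_split_ring_dynamic_queue_number_vhost_async_operation_with_M_to_N": {
            "inorder_mergeable_VA_1:1": {"imix": {1024: 0.000}},
            "inorder_mergeable_VA_M:N_diff": {"imix": {1024: 0.000}},
        },
    }

    def baseline(case_name, mode):
        # Assumed lookup order: test case name -> path mode -> frame pattern -> size.
        return expected_throughput[case_name][mode]["imix"][1024]

    print(baseline(
        "test_perf_pvp_split_ring_dynamic_queue_number_vhost_async_operation_with_M_to_N",
        "inorder_mergeable_VA_M:N_diff",
    ))  # 0.0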