From patchwork Fri Apr 22 05:48:26 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110081
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 1/6] test_plans/index: add new testsuite
Date: Fri, 22 Apr 2022 13:48:26 +0800
Message-Id: <20220422054826.1559167-1-weix.ling@intel.com>

As commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path),
add vswitch_pvp_multi_paths_performance_with_cbdma_test_plan into
test_plans/index.rst.

Signed-off-by: Wei Ling
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index f8118d14..f75e6995 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -233,6 +233,7 @@ The following are the test plans for the DPDK DTS automated test system.
     dpdk_gro_lib_test_plan
     dpdk_gso_lib_test_plan
     vswitch_sample_cbdma_test_plan
+    vswitch_pvp_multi_paths_performance_with_cbdma_test_plan
     vxlan_gpe_support_in_i40e_test_plan
     pvp_diff_qemu_version_test_plan
     pvp_share_lib_test_plan

From patchwork Fri Apr 22 05:48:38 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110082
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 2/6] test_plans/vswitch_sample_cbdma_test_plan: modify testplan with new format
Date: Fri, 22 Apr 2022 13:48:38 +0800
Message-Id: <20220422054838.1559225-1-weix.ling@intel.com>

Modify the test plan with the new format.

Signed-off-by: Wei Ling
---
 test_plans/vswitch_sample_cbdma_test_plan.rst | 294 ++++++++++++------
 1 file changed, 193 insertions(+), 101 deletions(-)

diff --git a/test_plans/vswitch_sample_cbdma_test_plan.rst b/test_plans/vswitch_sample_cbdma_test_plan.rst
index af2e62d1..e6fabe32 100644
--- a/test_plans/vswitch_sample_cbdma_test_plan.rst
+++ b/test_plans/vswitch_sample_cbdma_test_plan.rst
@@ -37,68 +37,142 @@ Vswitch sample test with vhost async data path test plan

 Description
 ===========

-Vswitch sample can leverage IOAT to accelerate vhost async data-path from dpdk 20.11. This plan test
-vhost DMA operation callbacks for CBDMA PMD and vhost async data-path in vhost sample.
+Vswitch sample can leverage IOAT to accelerate vhost async data-path from dpdk 20.11.
+This plan tests vhost DMA operation callbacks for the CBDMA PMD and the vhost async data-path in the vhost sample.
 From 20.11 to 21.02, only split ring support cbdma copy with vhost enqueue direction; from 21.05, packed ring also can support cbdma copy with vhost enqueue direction.

+For more about the dpdk-testpmd sample, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+
+For the virtio-user vdev parameters, you can refer to the DPDK documents:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+
+For more about the dpdk-vhost sample, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/sample_app_ug/vhost.html
+
 Prerequisites
 =============

+Topology
+--------
+ Test flow: TG-->NIC-->VSwitch-->Virtio-->VSwitch-->NIC-->TG
+
+Hardware
+--------
+ Supported NICs: ALL
+
+Software
+--------
+ Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=
+    # ninja -C -j 110
+
+2. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and CBDMA channels to vfio-pci::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci
+    # ./usertools/dpdk-devbind.py -b vfio-pci
+
+   For example, bind 1 NIC port and 2 CBDMA channels::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1
+
+2. Inject imix packets to the NIC by traffic generator::
+
+    The packet sizes include [64, 128, 256, 512, 1024, 1518], and the format of the packets is as follows.
+    +-------------+-------------+-------------+-------------+
+    | MAC         | MAC         | IPV4        | IPV4        |
+    | Src address | Dst address | Src address | Dst address |
+    |-------------|-------------|-------------|-------------|
+    | Any MAC     | Virtio mac  | Any IP      | Any IP      |
+    +-------------+-------------+-------------+-------------+
+    All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10.
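A minimal sketch of building such an imix pcap for replay, assuming scapy is available on the tester; the output path, src/dst IPs and the FCS convention (frame size counts the 4-byte Ethernet FCS) are illustrative assumptions, not mandated by the plan::

    # Sketch: build one packet per frame size from the table above.
    from scapy.all import Ether, IP, Raw, wrpcap

    VIRTIO_MAC = "00:11:22:33:44:10"          # dst MAC from the test plan
    FRAME_SIZES = [64, 128, 256, 512, 1024, 1518]

    pkts = []
    for size in FRAME_SIZES:
        base = Ether(dst=VIRTIO_MAC) / IP(src="1.1.1.1", dst="2.2.2.2")
        # pad so the frame reaches the requested size once the 4-byte
        # FCS is added on the wire (drop the "- 4" if your generator
        # counts frames without FCS)
        pad = max(0, size - len(base) - 4)
        pkts.append(base / Raw(load=b"\x00" * pad))

    wrpcap("/tmp/imix.pcap", pkts)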
Test Case1: PVP performance check with CBDMA channel using vhost async driver
-=============================================================================
+-----------------------------------------------------------------------------
+This case uses vhost, testpmd and a traffic generator (for example, Trex) to send imix packets to test performance with 1 CBDMA channel when using the vhost async driver.
+The packed ring vectorized path, packed ring size not power of 2 path and split ring vectorized path are covered.

-1. Bind physical port to vfio-pci and CBDMA channel to vfio-pci.
+1. Bind 1 NIC port and 1 CBDMA channel to vfio-pci, as common step 1.

2. On host, launch dpdk-vhost by below command::

-    ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 31-32 -n 4 -- \
-    -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --client --total-num-mbufs 600000
+    # ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 31-32 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
+    --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --client --total-num-mbufs 600000

3. Launch virtio-user with packed ring::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \
-    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1

4. Start pkts from virtio-user side to let vswitch know the mac addr::

-    testpmd>set fwd mac
-    testpmd>start tx_first
+    testpmd> set fwd mac
+    testpmd> start tx_first

5. Inject pkts (packets length=64...1518) separately with dest_mac=virtio_mac_address (specified in the above cmd as 00:11:22:33:44:10) to NIC using packet generator, record the pvp (PG>nic>vswitch>virtio-user>vswitch>nic>PG) performance numbers and check they meet expectation.

6. Quit and re-launch virtio-user with packed ring size not power of 2::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \
-    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1,queue_size=1025 -- -i --rxq=1 --txq=1 --txd=1025 --rxd=1025 --nb-cores=1
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1,queue_size=1025 \
+    -- -i --rxq=1 --txq=1 --txd=1025 --rxd=1025 --nb-cores=1

7. Re-test step 4-5, record performance of different packet length.

8. Quit and re-launch virtio-user with split ring::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
-    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,server=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,server=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1

9. Re-test step 4-5, record performance of different packet length.
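Steps 5, 7 and 9 compare the measured throughput against an expected number. For reference, the theoretical maximum packet rate follows from the 20 extra bytes each frame carries on the wire (7B preamble, 1B start-of-frame delimiter, 12B inter-frame gap); a small helper, assuming a 10 Gbps port purely for illustration::

    def line_rate_mpps(frame_size: int, link_gbps: float = 10.0) -> float:
        """Theoretical max packet rate in Mpps for an Ethernet link.

        frame_size includes the 4-byte FCS; each frame additionally
        occupies 20 bytes of preamble/SFD/inter-frame gap on the wire.
        """
        bits_per_frame = (frame_size + 20) * 8
        return link_gbps * 1e9 / bits_per_frame / 1e6

    # e.g. 64B frames on 10G: ~14.88 Mpps; 1518B frames: ~0.81 Mpps
    for size in (64, 128, 256, 512, 1024, 1518):
        print(size, round(line_rate_mpps(size), 2))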
Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver
-=================================================================================
+--------------------------------------------------------------------------------
+This case uses vhost, testpmd and a traffic generator (for example, Trex) to send imix packets to test the performance of two virtio-user ports with 2 CBDMA channels when using the vhost async driver.
+Relaunching vhost-user and resending packets is also tested.

-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.
+1. Bind 1 NIC port and 2 CBDMA channels to vfio-pci, as common step 1.

2. On host, launch dpdk-vhost by below command::

-    ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- \
-    -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:01.0,txd1@0000:00:01.1] --client--total-num-mbufs 600000
+    # ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
+    --stats 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:01.0,txd1@0000:00:01.1] --client --total-num-mbufs 600000

3. Launch two virtio-user ports::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
-    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \
-    --vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1

4. Start pkts from two virtio-user sides individually to let vswitch know the mac addr::

@@ -107,36 +181,41 @@ Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver
     testpmd1>start tx_first
     testpmd1>start tx_first

-5. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator,record performance number can get expected from Packet generator rx side.
+5. Send imix packets [64,1518] from the packet generator as common step 2, and check that the throughput meets expectation.

6. Stop dpdk-vhost side and relaunch it with same cmd as step2.

7. Start pkts from two virtio-user sides individually to let vswitch know the mac addr::

-    testpmd0>stop
-    testpmd0>start tx_first
-    testpmd1>stop
-    testpmd1>start tx_first
+    testpmd0>stop
+    testpmd0>start tx_first
+    testpmd1>stop
+    testpmd1>start tx_first

-8. Inject IMIX packets (64b...1518b) with dest_mac=virtio_mac_address (00:11:22:33:44:10 and 00:11:22:33:44:11) to NIC using packet generator, ensure get same throughput as step5.
+8. Send imix packets [64,1518] from the packet generator as common step 2, and check that the throughput meets expectation.

Test Case3: VM2VM forwarding test with two CBDMA channels
-=========================================================
+---------------------------------------------------------
+This case uses vhost and testpmd to test virtio-user0 to virtio-user1 forwarding of 64Byte/2000Byte/8000Byte packets with 2 CBDMA channels.
+Virtio-user0 starts with the packed ring mergeable path and virtio-user1 starts with the split ring vectorized path.
+Relaunching vhost-user and resending packets is also tested.

-1.Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.
+1. Bind 1 NIC port and 2 CBDMA channels to vfio-pci, as common step 1.

2. On host, launch dpdk-vhost by below command::

-    ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
+    # ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
     --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client --total-num-mbufs 600000

3. Launch virtio-user::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
-    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \
-    --vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1

4. Loop pkts between two virtio-user sides, record performance number with 64b/2000b/8000b/IMIX pkts can get expected::

@@ -168,40 +247,45 @@ Test Case3: VM2VM forwarding test with two CBDMA channels

6. Rerun step 4.

Test Case4: VM2VM test with cbdma channels register/unregister stable check
-============================================================================
+---------------------------------------------------------------------------
+This case uses vhost and QEMU to test VM0-to-VM1 forwarding of 64Byte/2000Byte/8000Byte packets by testpmd with 2 CBDMA channels.
+Both VMs start with the split ring mergeable path; stability is checked by re-binding the PCI device in the VMs 50 times, then forwarding
+64Byte/2000Byte/8000Byte packets by testpmd.

-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.
+1. Bind 1 NIC port and 2 CBDMA channels to vfio-pci, as common step 1.

2. On host, launch dpdk-vhost by below command::

-    ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
+    # ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
     --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client --total-num-mbufs 600000
3. Start VM0 with qemu-5.2.0::

     qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=/tmp/vhost-net0,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10

4. Start VM1 with qemu-5.2.0::

     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=/tmp/vhost-net1,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=on,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12

5. Bind virtio port to vfio-pci in both two VMs::

@@ -212,7 +296,7 @@ Test Case4: VM2VM test with cbdma channels register/unregister stable check
6. Start testpmd in VMs separately::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -- -i --rxq=1 --txq=1 --nb-cores=1 --txd=1024 --rxd=1024
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -- -i --rxq=1 --txq=1 --nb-cores=1 --txd=1024 --rxd=1024

7. Loop pkts between two virtio-user sides, record performance number with 64b/2000b/8000b/IMIX pkts can get expected::

@@ -248,40 +332,44 @@ Test Case4: VM2VM test with cbdma channels register/unregister stable check

9. Restart vhost, then rerun step 7, check that vhost works stably and gets the expected throughput.

Test Case5: VM2VM split ring test with iperf and reconnect stable check
-=======================================================================
+-----------------------------------------------------------------------
+This case uses vhost and QEMU to test VM0-to-VM1 forwarding of packets by the iperf and scp tools with 2 CBDMA channels.
+Both VMs start with the split ring non-mergeable path, and relaunching vhost-user is tested for stability.

-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.
+1. Bind 1 NIC port and 2 CBDMA channels to vfio-pci, as common step 1.

2. On host, launch dpdk-vhost by below command::

-    ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
+    # ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
     --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client --total-num-mbufs 600000

3. Start VM0 with qemu-5.2.0::

     qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=/tmp/vhost-net0,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
4. Start VM1 with qemu-5.2.0::

     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=/tmp/vhost-net1,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12

5. On VM1, set virtio device IP and run arp protocol::

@@ -302,45 +390,49 @@ Test Case5: VM2VM split ring test with iperf and reconnect stable check

9. Scp 1MB file from VM1 to VM2, check packets can be forwarded successfully by scp::

-    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name

10. Relaunch vhost-dpdk, then rerun step 7-9 five times.
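The iperf and scp checks of steps 6-9 can also be driven from the host through the QEMU hostfwd ports defined above (6002 for VM0, 6003 for VM1). A sketch assuming paramiko on the host; the credentials and file name are assumptions, while the peer address 1.1.1.8 is the one used by the plan's scp step::

    # Sketch only: run the iperf server on one VM and the client plus
    # scp on the other, over the hostfwd SSH ports from the plan.
    import time
    import paramiko

    def ssh(port):
        cli = paramiko.SSHClient()
        cli.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        cli.connect("127.0.0.1", port=port, username="root", password="root")
        return cli

    vm0, vm1 = ssh(6002), ssh(6003)
    vm1.exec_command("iperf -s -i 1 > /root/iperf_server.log 2>&1 &")
    time.sleep(1)
    _, out, _ = vm0.exec_command("iperf -c 1.1.1.8 -i 1 -t 60")
    print(out.read().decode())
    # step 9: copy a 1MB file across and check it succeeds
    vm0.exec_command("dd if=/dev/urandom of=/root/payload bs=1M count=1")
    _, out, _ = vm0.exec_command(
        "scp -o StrictHostKeyChecking=no /root/payload root@1.1.1.8:/")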
Test Case6: VM2VM packed ring test with iperf and reconnect stable test
-=======================================================================
+-----------------------------------------------------------------------
+This case uses vhost and QEMU to test VM0-to-VM1 forwarding of packets by the iperf and scp tools with 2 CBDMA channels.
+Both VMs start with the packed ring non-mergeable path, and relaunching vhost-user is tested for stability.

-1. Bind one physical ports to vfio-pci and two CBDMA channels to vfio-pci.
+1. Bind 1 NIC port and 2 CBDMA channels to vfio-pci, as common step 1.

2. On host, launch dpdk-vhost by below command::

-    ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
+    # ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 \
     --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --total-num-mbufs 600000

3. Start VM0 with qemu-5.2.0::

     qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-    -chardev socket,id=char0,path=/tmp/vhost-net0 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10

4. Start VM1 with qemu-5.2.0::

     qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=/tmp/vhost-net1 \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,\
+    csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12

5. On VM1, set virtio device IP and run arp protocol::

@@ -361,6 +453,6 @@ Test Case6: VM2VM packed ring test with iperf and reconnect stable test
9. Scp 1MB file from VM1 to VM2, check packets can be forwarded successfully by scp::

-    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name

10. Rerun step 7-9 five times.

From patchwork Fri Apr 22 05:48:50 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110083
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 3/6] tests/vswitch_sample_cbdma: fix pcap header issue
Date: Fri, 22 Apr 2022 13:48:50 +0800
Message-Id: <20220422054850.1559283-1-weix.ling@intel.com>

Fix pcap header issue.
Signed-off-by: Wei Ling
---
 tests/TestSuite_vswitch_sample_cbdma.py | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/tests/TestSuite_vswitch_sample_cbdma.py b/tests/TestSuite_vswitch_sample_cbdma.py
index 503c8ba0..b455790a 100644
--- a/tests/TestSuite_vswitch_sample_cbdma.py
+++ b/tests/TestSuite_vswitch_sample_cbdma.py
@@ -43,7 +43,6 @@ import framework.utils as utils
 from framework.packet import Packet
 from framework.pktgen import PacketGeneratorHelper
 from framework.pmd_output import PmdOutput
-from framework.settings import HEADER_SIZE
 from framework.test_case import TestCase
 from framework.virt_common import VM

@@ -96,7 +95,6 @@ class TestVswitchSampleCBDMA(TestCase):
         self.random_string = string.ascii_letters + string.digits
         self.virtio_ip0 = "1.1.1.2"
         self.virtio_ip1 = "1.1.1.3"
-        self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"]

     def set_up(self):
         """
@@ -350,8 +348,7 @@ class TestVswitchSampleCBDMA(TestCase):
         rx_port = self.tester.get_local_port(self.dut_ports[0])
         tx_port = self.tester.get_local_port(self.dut_ports[0])
         for dst_mac in dst_mac_list:
-            payload_size = frame_size - self.headers_size
-            pkt = Packet(pkt_type="VLAN_UDP", pkt_len=payload_size)
+            pkt = Packet(pkt_type="VLAN_UDP", pkt_len=frame_size)
             pkt.config_layer("ether", {"dst": dst_mac})
             pkt.config_layer("vlan", {"vlan": 1000})
             pcap = os.path.join(
@@ -504,14 +501,13 @@ class TestVswitchSampleCBDMA(TestCase):
         tx_port = self.tester.get_local_port(self.dut_ports[0])
         for dst_mac in dst_mac_list:
             for frame_size in frame_sizes:
-                payload_size = frame_size - self.headers_size
                 pkt = Packet()
                 pkt.assign_layers(["ether", "ipv4", "raw"])
                 pkt.config_layers(
                     [
                         ("ether", {"dst": "%s" % dst_mac}),
                         ("ipv4", {"src": "1.1.1.1"}),
-                        ("raw", {"payload": ["01"] * int("%d" % payload_size)}),
+                        ("raw", {"payload": ["01"] * int("%d" % frame_size)}),
                     ]
                 )
                 pcap = os.path.join(
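The change above stops subtracting the Ethernet/IP header size before building the pcap: judging from the diff, the framework's Packet(pkt_len=...) already denotes the full frame length, so the old code produced frames short by the header size. A standalone plain-scapy illustration of the off-by-headers effect (scapy here is an assumption for illustration; the DTS framework.Packet wrapper is not used)::

    # Treating a frame size as a payload size undersizes the packet by
    # the 34-byte Ethernet+IP header length.
    from scapy.all import Ether, IP, Raw

    def build(total_len):
        """Build a frame whose total length is total_len (like pkt_len)."""
        hdr = Ether() / IP()
        return hdr / Raw(b"\x00" * (total_len - len(hdr)))

    frame_size = 128
    headers = len(Ether() / IP())       # 34 bytes
    old = build(frame_size - headers)   # 94-byte frame: 34 bytes short
    new = build(frame_size)             # 128-byte frame: correct
    print(len(old), len(new))           # 94 128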
From patchwork Fri Apr 22 05:49:00 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110084
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 4/6] test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan: add new testplan of DPDK-22.03
Date: Fri, 22 Apr 2022 13:49:00 +0800
Message-Id: <20220422054900.1559341-1-weix.ling@intel.com>

As commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path),
add the new test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst.

Signed-off-by: Wei Ling
---
 ...paths_performance_with_cbdma_test_plan.rst | 395 ++++++++++++++++++
 1 file changed, 395 insertions(+)
 create mode 100644 test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst

diff --git a/test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst b/test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst
new file mode 100644
index 00000000..2083ad1b
--- /dev/null
+++ b/test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst
@@ -0,0 +1,395 @@
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+========================================================
+Vswitch PVP multi-paths performance with CBDMA test plan
+========================================================
+
+Description
+===========
+
+Benchmark PVP multi-paths performance with CBDMA in vhost sample,
+including 10 tx/rx paths: inorder mergeable, inorder non-mergeable,
+mergeable, non-mergeable, vectorized_rx, virtio 1.1 inorder mergeable,
+virtio 1.1 inorder non-mergeable, virtio 1.1 mergeable, virtio 1.1 non-mergeable,
+virtio 1.1 vectorized path. Give 1 core for vhost and virtio respectively.
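The ten paths differ only in the virtio-user vdev flags. As a reference collected from the test cases below (split ring is packed_vq=0, packed ring is packed_vq=1; the packed vectorized case additionally passes --force-max-simd-bitwidth=512 to testpmd), a sketch mapping each path to its parameters::

    # Path name -> vdev flags used by the ten test cases in this plan.
    VDEV_FLAGS = {
        "split inorder mergeable":      dict(packed_vq=0, mrg_rxbuf=1, in_order=1),
        "split inorder non-mergeable":  dict(packed_vq=0, mrg_rxbuf=0, in_order=1),
        "split mergeable":              dict(packed_vq=0, mrg_rxbuf=1, in_order=0),
        "split non-mergeable":          dict(packed_vq=0, mrg_rxbuf=0, in_order=0),
        "split vectorized":             dict(packed_vq=0, mrg_rxbuf=0, in_order=1, vectorized=1),
        "packed inorder mergeable":     dict(packed_vq=1, mrg_rxbuf=1, in_order=1),
        "packed inorder non-mergeable": dict(packed_vq=1, mrg_rxbuf=0, in_order=1),
        "packed mergeable":             dict(packed_vq=1, mrg_rxbuf=1, in_order=0),
        "packed non-mergeable":         dict(packed_vq=1, mrg_rxbuf=0, in_order=0),
        "packed vectorized":            dict(packed_vq=1, mrg_rxbuf=0, in_order=1, vectorized=1),
    }

    def vdev_arg(path, mac="00:11:22:33:44:10", sock="/tmp/vhost-net"):
        """Render the testpmd --vdev argument for a given path."""
        flags = ",".join(f"{k}={v}" for k, v in VDEV_FLAGS[path].items())
        return f"--vdev=net_virtio_user0,mac={mac},path={sock},queues=1,{flags}"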
+Regarding the vswitch sample, a new option --total-num-mbufs has been added since dpdk-22.03
+so that the user can set a larger mbuf pool to avoid launch failures. For example, dpdk-vhost
+will fail to launch with a 40G i40e port without setting a larger mbuf pool.
+For more about the vhost switch sample, please refer to the dpdk docs:
+http://doc.dpdk.org/guides/sample_app_ug/vhost.html
+For the virtio-user vdev parameters, you can refer to the dpdk doc:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage
+
+Prerequisites
+=============
+
+Topology
+--------
+
+Test flow: TG-->nic-->vswitch-->virtio-user-->vswitch-->nic-->TG
+
+Hardware
+--------
+Supported NICs: all except Columbiaville, which does not support VMDQ
+
+Software
+--------
+Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK and the vhost example::
+
+    # meson
+    # meson configure -Dexamples=vhost
+    # ninja -C -j 110
+
+2. Get the PCI device ID and DMA device ID of the DUT.
+
+For example, 0000:18:00.0 is the PCI device ID and 0000:00:04.0 is the DMA device ID::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind one physical port and one CBDMA port to vfio-pci::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci
+    # ./usertools/dpdk-devbind.py -b vfio-pci
+
+   For example::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:18.0
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0
+
+2. Inject different sizes of packets to the NIC by traffic generator::
+
+    The packet sizes include [64, 128, 256, 512, 1024, 1518], and the format of the packets is as follows.
+    +-------------+-------------+-------------+-------------+
+    | MAC         | MAC         | IPV4        | IPV4        |
+    | Src address | Dst address | Src address | Dst address |
+    |-------------|-------------|-------------|-------------|
+    | Any MAC     | Virtio mac  | Any IP      | Any IP      |
+    +-------------+-------------+-------------+-------------+
+    All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10.
+
+Test Case 1: Vswitch PVP split ring inorder mergeable path performance with CBDMA
+---------------------------------------------------------------------------------
+This case uses vswitch and a traffic generator (for example, Trex) to test the performance of the split ring inorder mergeable path with CBDMA.
+
+1. Bind one physical port and one CBDMA port to vfio-pci as common step 1.
+
+2. Launch dpdk-vhost by below command::
+
+    #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \
+    -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000
+
+3. Launch virtio-user with split ring inorder mergeable path::
+
+    #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+
+4. Send packets from virtio-user to let vswitch know the mac addr::
+
+    testpmd> set fwd mac
+    testpmd> start tx_first
+    testpmd> stop
+    testpmd> start
+5. Send packets by traffic generator as common step 2, and check the throughput with below command::
+
+    testpmd> show port stats all
+
+Test Case 2: Vswitch PVP split ring inorder non-mergeable path performance with CBDMA
+-------------------------------------------------------------------------------------
+This case uses vswitch and a traffic generator (for example, Trex) to test the performance of the split ring inorder non-mergeable path with CBDMA.
+
+1. Bind one physical port and one CBDMA port to vfio-pci as common step 1.
+
+2. Launch dpdk-vhost by below command::
+
+    #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \
+    -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000
+
+3. Launch virtio-user with split ring inorder non-mergeable path::
+
+    #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+
+4. Send packets from virtio-user to let vswitch know the mac addr::
+
+    testpmd> set fwd mac
+    testpmd> start tx_first
+    testpmd> stop
+    testpmd> start
+
+5. Send packets by traffic generator as common step 2, and check the throughput with below command::
+
+    testpmd> show port stats all
+
+Test Case 3: Vswitch PVP split ring mergeable path performance with CBDMA
+-------------------------------------------------------------------------
+This case uses vswitch and a traffic generator (for example, Trex) to test the performance of the split ring mergeable path with CBDMA.
+
+1. Bind one physical port and one CBDMA port to vfio-pci as common step 1.
+
+2. Launch dpdk-vhost by below command::
+
+    #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \
+    -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000
+
+3. Launch virtio-user with split ring mergeable path::
+
+    #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+
+4. Send packets from virtio-user to let vswitch know the mac addr::
+
+    testpmd> set fwd mac
+    testpmd> start tx_first
+    testpmd> stop
+    testpmd> start
+
+5. Send packets by traffic generator as common step 2, and check the throughput with below command::
+
+    testpmd> show port stats all
+
+Test Case 4: Vswitch PVP split ring non-mergeable path performance with CBDMA
+-----------------------------------------------------------------------------
+This case uses vswitch and a traffic generator (for example, Trex) to test the performance of the split ring non-mergeable path with CBDMA.
+
+1. Bind one physical port and one CBDMA port to vfio-pci as common step 1.
+
+2. Launch dpdk-vhost by below command::
+
+    #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \
+    -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000
+
+3. Launch virtio-user with split ring non-mergeable path::
+
+    #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
+    -- -i --enable-hw-vlan-strip --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+4. Send packets from virtio-user to let vswitch know the mac addr::
+
+    testpmd> set fwd mac
+    testpmd> start tx_first
+    testpmd> stop
+    testpmd> start
+
+5. Send packets by traffic generator as common step 2, and check the throughput with below command::
+
+    testpmd> show port stats all
+
+Test Case 5: Vswitch PVP split ring vectorized path performance with CBDMA
+--------------------------------------------------------------------------
+This case uses vswitch and a traffic generator (for example, Trex) to test the performance of the split ring vectorized path with CBDMA.
+
+1. Bind one physical port and one CBDMA port to vfio-pci as common step 1.
+
+2. Launch dpdk-vhost by below command::
+
+    #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \
+    -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000
+
+3. Launch virtio-user with split ring vectorized path::
+
+    #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1,vectorized=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+
+4. Send packets from virtio-user to let vswitch know the mac addr::
+
+    testpmd> set fwd mac
+    testpmd> start tx_first
+    testpmd> stop
+    testpmd> start
+
+5. Send packets by traffic generator as common step 2, and check the throughput with below command::
+
+    testpmd> show port stats all
+
+
+Test Case 6: Vswitch PVP packed ring inorder mergeable path performance with CBDMA
+----------------------------------------------------------------------------------
+This case uses vswitch and a traffic generator (for example, Trex) to test the performance of the packed ring inorder mergeable path with CBDMA.
+
+1. Bind one physical port and one CBDMA port to vfio-pci as common step 1.
+
+2. Launch dpdk-vhost by below command::
+
+    #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \
+    -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000
+
+3. Launch virtio-user with packed ring inorder mergeable path::
+
+    #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+
+4. Send packets from virtio-user to let vswitch know the mac addr::
+
+    testpmd> set fwd mac
+    testpmd> start tx_first
+    testpmd> stop
+    testpmd> start
+
+5. Send packets by traffic generator as common step 2, and check the throughput with below command::
+
+    testpmd> show port stats all
+
+Test Case 7: Vswitch PVP packed ring inorder non-mergeable path performance with CBDMA
+--------------------------------------------------------------------------------------
+This case uses vswitch and a traffic generator (for example, Trex) to test the performance of the packed ring inorder non-mergeable path with CBDMA.
+
+1. Bind one physical port and one CBDMA port to vfio-pci as common step 1.
+
+2. Launch dpdk-vhost by below command::
+
+    #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \
+    -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000
+3. Launch virtio-user with packed ring inorder non-mergeable path::
+
+    #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+
+4. Send packets from virtio-user to let vswitch know the mac addr::
+
+    testpmd> set fwd mac
+    testpmd> start tx_first
+    testpmd> stop
+    testpmd> start
+
+5. Send packets by traffic generator as common step 2, and check the throughput with below command::
+
+    testpmd> show port stats all
+
+Test Case 8: Vswitch PVP packed ring mergeable path performance with CBDMA
+--------------------------------------------------------------------------
+This case uses vswitch and a traffic generator (for example, Trex) to test the performance of the packed ring mergeable path with CBDMA.
+
+1. Bind one physical port and one CBDMA port to vfio-pci as common step 1.
+
+2. Launch dpdk-vhost by below command::
+
+    #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \
+    -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000
+
+3. Launch virtio-user with packed ring mergeable path::
+
+    #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+
+4. Send packets from virtio-user to let vswitch know the mac addr::
+
+    testpmd> set fwd mac
+    testpmd> start tx_first
+    testpmd> stop
+    testpmd> start
+
+5. Send packets by traffic generator as common step 2, and check the throughput with below command::
+
+    testpmd> show port stats all
+
+Test Case 9: Vswitch PVP packed ring non-mergeable path performance with CBDMA
+------------------------------------------------------------------------------
+This case uses vswitch and a traffic generator (for example, Trex) to test the performance of the packed ring non-mergeable path with CBDMA.
+
+1. Bind one physical port and one CBDMA port to vfio-pci as common step 1.
+
+2. Launch dpdk-vhost by below command::
+
+    #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \
+    -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000
+
+3. Launch virtio-user with packed ring non-mergeable path::
+
+    #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+
+4. Send packets from virtio-user to let vswitch know the mac addr::
+
+    testpmd> set fwd mac
+    testpmd> start tx_first
+    testpmd> stop
+    testpmd> start
+
+5. Send packets by traffic generator as common step 2, and check the throughput with below command::
+
+    testpmd> show port stats all
+
+Test Case 10: Vswitch PVP packed ring vectorized path performance with CBDMA
+----------------------------------------------------------------------------
+This case uses vswitch and a traffic generator (for example, Trex) to test the performance of the packed ring vectorized path with CBDMA.
+
+1. Bind one physical port and one CBDMA port to vfio-pci as common step 1.
+2. Launch dpdk-vhost by below command::
+
+    #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \
+    -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000
+
+3. Launch virtio-user with packed ring vectorized path::
+
+    #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
+    -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
+
+4. Send packets from virtio-user to let vswitch know the mac addr::
+
+    testpmd> set fwd mac
+    testpmd> start tx_first
+    testpmd> stop
+    testpmd> start
+
+5. Send packets by traffic generator as common step 2, and check the throughput with below command::
+
+    testpmd> show port stats all
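Every case in this plan ends by reading `show port stats all` on the virtio-user side. A sketch of pulling the Rx rate out of that output, assuming the usual testpmd stats layout in which each port block contains an "Rx-pps:" line (adjust the pattern if your testpmd version differs)::

    import re

    def rx_mpps(stats_output: str) -> float:
        """Sum the Rx-pps fields of all ports, in Mpps."""
        rates = [int(m) for m in re.findall(r"Rx-pps:\s+(\d+)", stats_output)]
        return sum(rates) / 1e6

    sample = """
      Rx-pps:     14880000
      Tx-pps:     14880000
    """
    print(rx_mpps(sample))   # 14.88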
Signed-off-by: Wei Ling
---
 ..._pvp_multi_paths_performance_with_cbdma.py | 644 ++++++++++++++++++
 1 file changed, 644 insertions(+)
 create mode 100644 tests/TestSuite_vswitch_pvp_multi_paths_performance_with_cbdma.py

diff --git a/tests/TestSuite_vswitch_pvp_multi_paths_performance_with_cbdma.py b/tests/TestSuite_vswitch_pvp_multi_paths_performance_with_cbdma.py
new file mode 100644
index 00000000..d92263a6
--- /dev/null
+++ b/tests/TestSuite_vswitch_pvp_multi_paths_performance_with_cbdma.py
@@ -0,0 +1,644 @@
+# BSD LICENSE
+#
+# Copyright(c) 2022 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+DPDK Test suite.
+"""
+
+import json
+import os
+import re
+import time
+from copy import deepcopy
+
+import framework.rst as rst
+import framework.utils as utils
+from framework.packet import Packet
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.settings import UPDATE_EXPECTED, load_global_setting
+from framework.test_case import TestCase
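+
+# NOTE: this suite drives the PVP (physical port -> virtio-user -> physical
+# port) topology from the test plan: the traffic generator sends packets to
+# the NIC, the dpdk-vhost sample switches them into a virtio-user testpmd
+# session (which forwards them back with "set fwd mac"), and throughput is
+# measured per frame size.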
+ """ + self.build_vhost_app() + self.dut_ports = self.dut.get_ports() + self.number_of_ports = 1 + self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing") + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.cores = self.dut.get_core_list("all", socket=self.ports_socket) + self.vhost_core_list = self.cores[0:2] + self.vuser0_core_list = self.cores[2:4] + self.vhost_core_mask = utils.create_mask(self.vhost_core_list) + self.mem_channels = self.dut.get_memory_channels() + # get cbdma device + self.cbdma_dev_infos = [] + self.dmas_info = None + self.device_str = None + self.out_path = "/tmp" + out = self.tester.send_expect("ls -d %s" % self.out_path, "# ") + if "No such file or directory" in out: + self.tester.send_expect("mkdir -p %s" % self.out_path, "# ") + self.base_dir = self.dut.base_dir.replace("~", "/root") + txport = self.tester.get_local_port(self.dut_ports[0]) + self.txItf = self.tester.get_interface(txport) + self.virtio_user0_mac = "00:11:22:33:44:10" + self.vm_num = 2 + self.app_testpmd_path = self.dut.apps_name["test-pmd"] + self.pktgen_helper = PacketGeneratorHelper() + self.vhost_user = self.dut.new_session(suite="vhost-user") + self.virtio_user0 = self.dut.new_session(suite="virtio-user0") + self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0) + self.frame_size = [64, 128, 256, 512, 1024, 1518] + self.save_result_flag = True + self.json_obj = {} + + def set_up(self): + """ + Run before each test case. + """ + self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") + self.dut.send_expect("killall -I dpdk-vhost", "#", 20) + self.dut.send_expect("killall -I dpdk-testpmd", "#", 20) + self.dut.send_expect("killall -I qemu-system-x86_64", "#", 20) + + # Prepare the result table + self.table_header = ["Frame"] + self.table_header.append("Mode/RXD-TXD") + self.table_header.append("Mpps") + self.table_header.append("% linerate") + self.result_table_create(self.table_header) + + self.test_parameters = self.get_suite_cfg()["test_parameters"] + # test parameters include: frames size, descriptor numbers + self.test_parameters = self.get_suite_cfg()["test_parameters"] + + # traffic duraion in second + self.test_duration = self.get_suite_cfg()["test_duration"] + + # initilize throughput attribution + # {'$framesize':{"$nb_desc": 'throughput'} + self.throughput = {} + + # Accepted tolerance in Mpps + self.gap = self.get_suite_cfg()["accepted_tolerance"] + self.test_result = {} + self.nb_desc = self.test_parameters[64][0] + + def build_vhost_app(self): + out = self.dut.build_dpdk_apps("./examples/vhost") + self.verify("Error" not in out, "compilation vhost error") + + def check_2M_env(self): + out = self.dut.send_expect( + "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " + ) + return True if out == "2048" else False + + def start_vhost_app(self, allow_pci): + """ + launch the vhost app on vhost side + """ + self.app_path = self.dut.apps_name["vhost"] + socket_file_param = "--socket-file ./vhost-net" + allow_option = "" + for item in allow_pci: + allow_option += " -a {}".format(item) + params = ( + " -c {} -n {} {} -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 " + + socket_file_param + + " --dmas [{}] --total-num-mbufs 600000" + ).format( + self.vhost_core_mask, + self.mem_channels, + allow_option, + self.dmas_info, + ) + self.command_line = self.app_path + params + self.vhost_user.send_command(self.command_line) + time.sleep(3) + + def start_virtio_testpmd( + self, + virtio_path, + vlan_strip=False, + 
+    def start_virtio_testpmd(
+        self,
+        virtio_path,
+        vlan_strip=False,
+        force_max_simd_bitwidth=False,
+    ):
+        """
+        Launch testpmd as virtio-user connected to the ./vhost-net socket.
+        """
+        eal_params = (
+            " --vdev=net_virtio_user0,mac={},path=./vhost-net,queues=1,{}".format(
+                self.virtio_user0_mac, virtio_path
+            )
+        )
+        if self.check_2M_env():
+            eal_params += " --single-file-segments"
+        if force_max_simd_bitwidth:
+            eal_params += " --force-max-simd-bitwidth=512"
+        params = "--rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1"
+        if vlan_strip:
+            params = "--rx-offloads=0x0 --enable-hw-vlan-strip " + params
+        self.virtio_user0_pmd.start_testpmd(
+            cores=self.vuser0_core_list,
+            param=params,
+            eal_param=eal_params,
+            no_pci=True,
+            ports=[],
+            prefix="virtio-user0",
+            fixed_prefix=True,
+        )
+        self.virtio_user0_pmd.execute_cmd("set fwd mac")
+        self.virtio_user0_pmd.execute_cmd("start tx_first")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.execute_cmd("start")
+
+    def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num):
+        """
+        Get all CBDMA ports and bind the used ones to DPDK.
+        """
+        out = self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30
+        )
+        device_info = out.split("\n")
+        for device in device_info:
+            pci_info = re.search(r"\s*(0000:\S*:\d*\.\d*)", device)
+            if pci_info is not None:
+                dev_info = pci_info.group(1)
+                # only add a CBDMA device that is on the same NUMA socket as
+                # the NIC under test
+                bus = int(dev_info[5:7], base=16)
+                if bus >= 128:
+                    cur_socket = 1
+                else:
+                    cur_socket = 0
+                if self.ports_socket == cur_socket:
+                    self.cbdma_dev_infos.append(pci_info.group(1))
+        self.verify(
+            len(self.cbdma_dev_infos) >= cbdma_num,
+            "There are not enough CBDMA devices to run this suite",
+        )
+        used_cbdma = self.cbdma_dev_infos[0:cbdma_num]
+        dmas_info = ""
+        for number, dmas in enumerate(used_cbdma):
+            dmas_info += "txd{}@{},".format(number, dmas)
+        self.dmas_info = dmas_info[:-1]
+        self.device_str = " ".join(used_cbdma)
+        self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --force --bind=%s %s"
+            % (self.drivername, self.device_str),
+            "# ",
+            60,
+        )
+
+    def bind_cbdma_device_to_kernel(self):
+        if self.device_str is not None:
+            self.dut.send_expect("modprobe ioatdma", "# ")
+            self.dut.send_expect(
+                "./usertools/dpdk-devbind.py -u %s" % self.device_str, "# ", 30
+            )
+            self.dut.send_expect(
+                "./usertools/dpdk-devbind.py --force --bind=ioatdma %s"
+                % self.device_str,
+                "# ",
+                60,
+            )
+
+    def config_stream(self, frame_size):
+        tgen_input = []
+        rx_port = self.tester.get_local_port(self.dut_ports[0])
+        tx_port = self.tester.get_local_port(self.dut_ports[0])
+        pkt = Packet(pkt_type="UDP", pkt_len=frame_size)
+        pkt.config_layer("ether", {"dst": self.virtio_user0_mac})
+        pcap = os.path.join(
+            self.out_path, "vswitch_pvp_multi_path_%s.pcap" % (frame_size)
+        )
+        pkt.save_pcapfile(self.tester, pcap)
+        tgen_input.append((rx_port, tx_port, pcap))
+        return tgen_input
+
+    def perf_test(self, case_info):
+        for frame_size in self.frame_size:
+            self.throughput[frame_size] = dict()
+            self.logger.info(
+                "Test running at parameters: framesize: {}".format(frame_size)
+            )
+            tgenInput = self.config_stream(frame_size)
+            # clear streams before adding new streams
+            self.tester.pktgen.clear_streams()
+            # run packet generator
+            streams = self.pktgen_helper.prepare_stream_from_tginput(
+                tgenInput, 100, None, self.tester.pktgen
+            )
+            # set traffic options: use the duration from the suite config
+            traffic_opt = {"duration": self.test_duration}
+            _, pps = self.tester.pktgen.measure_throughput(
+                stream_ids=streams, options=traffic_opt
+            )
+            Mpps = pps / 1000000.0
+            linerate = (
+                Mpps
+                * 100
+                / float(self.wirespeed(self.nic, frame_size, self.number_of_ports))
+            )
+            self.throughput[frame_size][self.nb_desc] = Mpps
+            results_row = [frame_size]
+            results_row.append(case_info)
+            results_row.append(Mpps)
+            results_row.append(linerate)
+            self.result_table_add(results_row)
+        self.result_table_print()
+
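+    # NOTE: wirespeed() above returns the theoretical maximum rate in Mpps
+    # for the NIC speed and frame size, so "linerate" is the measured
+    # throughput as a percentage of that maximum.  For example, on a
+    # 10 Gbit/s port with 64-byte frames the maximum is
+    # 10e9 / ((64 + 20) * 8) / 1e6 ~= 14.88 Mpps (20 bytes of preamble and
+    # inter-frame gap per frame), so a measured 7.44 Mpps would be reported
+    # as a linerate of 50%.
+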
+    def handle_expected(self):
+        """
+        Update the expected throughput numbers in the configuration file:
+        $DTS_CFG_FOLDER/$suite_name.cfg
+        """
+        if load_global_setting(UPDATE_EXPECTED) == "yes":
+            for frame_size in self.test_parameters.keys():
+                for nb_desc in self.test_parameters[frame_size]:
+                    self.expected_throughput[frame_size][nb_desc] = round(
+                        self.throughput[frame_size][nb_desc], 3
+                    )
+
+    def handle_results(self):
+        """
+        Results handling process:
+        1. save to self.test_result
+        2. create the test results table
+        3. save to a json file for Open Lab
+        """
+        header = self.table_header
+        header.append("Expected Throughput")
+        header.append("Throughput Difference")
+        for frame_size in self.test_parameters.keys():
+            wirespeed = self.wirespeed(self.nic, frame_size, self.number_of_ports)
+            ret_datas = {}
+            for nb_desc in self.test_parameters[frame_size]:
+                ret_data = {}
+                ret_data[header[0]] = frame_size
+                ret_data[header[1]] = nb_desc
+                ret_data[header[2]] = "{:.3f} Mpps".format(
+                    self.throughput[frame_size][nb_desc]
+                )
+                ret_data[header[3]] = "{:.3f}%".format(
+                    self.throughput[frame_size][nb_desc] * 100 / wirespeed
+                )
+                ret_data[header[4]] = "{:.3f} Mpps".format(
+                    self.expected_throughput[frame_size][nb_desc]
+                )
+                ret_data[header[5]] = "{:.3f} Mpps".format(
+                    self.throughput[frame_size][nb_desc]
+                    - self.expected_throughput[frame_size][nb_desc]
+                )
+                ret_datas[nb_desc] = deepcopy(ret_data)
+            self.test_result[frame_size] = deepcopy(ret_datas)
+        # Create the test results table
+        self.result_table_create(header)
+        for frame_size in self.test_parameters.keys():
+            for nb_desc in self.test_parameters[frame_size]:
+                table_row = list()
+                for i in range(len(header)):
+                    table_row.append(self.test_result[frame_size][nb_desc][header[i]])
+                self.result_table_add(table_row)
+        # present test results to screen
+        self.result_table_print()
+        # save test results as a file
+        if self.save_result_flag:
+            self.save_result(self.test_result)
+
+    def save_result(self, data):
+        """
+        Save the test results to a separate json file named
+        <nic>_<suite_name>.json in the output folder if self.save_result_flag
+        is True.
+        """
+        case_name = self.running_case
+        self.json_obj[case_name] = list()
+        status_result = []
+        for frame_size in self.test_parameters.keys():
+            for nb_desc in self.test_parameters[frame_size]:
+                row_in = self.test_result[frame_size][nb_desc]
+                row_dict0 = dict()
+                row_dict0["performance"] = list()
+                row_dict0["parameters"] = list()
+                result_throughput = float(row_in["Mpps"].split()[0])
+                expected_throughput = float(row_in["Expected Throughput"].split()[0])
+                # delta value and accepted tolerance in percentage
+                delta = result_throughput - expected_throughput
+                gap = expected_throughput * -self.gap * 0.01
+                delta = float(delta)
+                gap = float(gap)
+                self.logger.info("Accepted tolerance (Mpps): %f" % gap)
+                self.logger.info("Throughput difference (Mpps): %f" % delta)
+                if result_throughput > expected_throughput + gap:
+                    row_dict0["status"] = "PASS"
+                else:
+                    row_dict0["status"] = "FAIL"
+                row_dict1 = dict(
+                    name="Throughput", value=result_throughput, unit="Mpps", delta=delta
+                )
+                row_dict2 = dict(
+                    name="Txd/Rxd", value=row_in["Mode/RXD-TXD"], unit="descriptor"
+                )
+                row_dict3 = dict(name="frame_size", value=row_in["Frame"], unit="bytes")
+                row_dict0["performance"].append(row_dict1)
+                row_dict0["parameters"].append(row_dict2)
+                row_dict0["parameters"].append(row_dict3)
+                self.json_obj[case_name].append(row_dict0)
+                status_result.append(row_dict0["status"])
+        with open(
+            os.path.join(
+                rst.path2Result, "{0:s}_{1}.json".format(self.nic, self.suite_name)
+            ),
+            "w",
+        ) as fp:
+            json.dump(self.json_obj, fp)
+        self.verify(
+            "FAIL" not in status_result,
+            "throughput dropped beyond the accepted tolerance",
+        )
+
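+    # NOTE: the ten test cases below all follow the same pattern and differ
+    # only in the virtio-user path parameters and the report label:
+    #
+    #     packed_vq=0 selects a split ring path (Test Cases 1-5),
+    #     packed_vq=1 selects a packed ring path (Test Cases 6-10),
+    #
+    # while mrg_rxbuf, in_order and vectorized pick the inorder mergeable,
+    # inorder non-mergeable, mergeable, non-mergeable or vectorized variant.
+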
= dict(name="frame_size", value=row_in["Frame"], unit="bytes") + row_dict0["performance"].append(row_dict1) + row_dict0["parameters"].append(row_dict2) + row_dict0["parameters"].append(row_dict3) + self.json_obj[case_name].append(row_dict0) + status_result.append(row_dict0["status"]) + with open( + os.path.join( + rst.path2Result, "{0:s}_{1}.json".format(self.nic, self.suite_name) + ), + "w", + ) as fp: + json.dump(self.json_obj, fp) + self.verify("FAIL" not in status_result, "Exceeded Gap") + + def test_perf_vswitch_pvp_split_ring_inorder_mergeable_path_performance_with_cbdma( + self, + ): + """ + Test Case 1: Vswitch PVP split ring inorder mergeable path performance with CBDMA + """ + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + + cbdma_num = 1 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=cbdma_num) + allow_pci = [self.dut.ports_info[0]["pci"]] + for item in range(cbdma_num): + allow_pci.append(self.cbdma_dev_infos[item]) + self.start_vhost_app(allow_pci=allow_pci) + virtio_path = "packed_vq=0,mrg_rxbuf=1,in_order=1" + self.start_virtio_testpmd(virtio_path=virtio_path) + case_info = "split ring inorder mergeable" + self.perf_test(case_info) + self.handle_expected() + self.handle_results() + + def test_perf_vswitch_pvp_split_ring_inorder_no_mergeable_path_performance_with_cbdma( + self, + ): + """ + Test Case 2: Vswitch PVP split ring inorder non-mergeable path performance with CBDMA + """ + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + + cbdma_num = 1 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=cbdma_num) + allow_pci = [self.dut.ports_info[0]["pci"]] + for item in range(cbdma_num): + allow_pci.append(self.cbdma_dev_infos[item]) + self.start_vhost_app(allow_pci=allow_pci) + virtio_path = "packed_vq=0,mrg_rxbuf=0,in_order=1" + self.start_virtio_testpmd(virtio_path=virtio_path) + case_info = "split ring inorder non-mergeable" + self.perf_test(case_info) + self.handle_expected() + self.handle_results() + + def test_perf_vswitch_pvp_split_ring_mergeable_path_performance_with_cbdma(self): + """ + Test Case 3: Vswitch PVP split ring mergeable path performance with CBDMA + """ + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + + cbdma_num = 1 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=cbdma_num) + allow_pci = [self.dut.ports_info[0]["pci"]] + for item in range(cbdma_num): + allow_pci.append(self.cbdma_dev_infos[item]) + self.start_vhost_app(allow_pci=allow_pci) + virtio_path = "packed_vq=0,mrg_rxbuf=1,in_order=0" + self.start_virtio_testpmd(virtio_path=virtio_path) + case_info = "split ring mergeable" + self.perf_test(case_info) + self.handle_expected() + self.handle_results() + + def test_perf_vswitch_pvp_split_ring_non_mergeable_path_performance_with_cbdma( + self, + ): + """ + Test Case 4: Vswitch PVP split ring non-mergeable path performance with CBDMA + """ + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + + cbdma_num = 1 + self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=cbdma_num) + allow_pci = [self.dut.ports_info[0]["pci"]] + for item in range(cbdma_num): + allow_pci.append(self.cbdma_dev_infos[item]) + self.start_vhost_app(allow_pci=allow_pci) + virtio_path = "packed_vq=0,mrg_rxbuf=0,in_order=0" + 
+    def test_perf_vswitch_pvp_split_ring_non_mergeable_path_performance_with_cbdma(
+        self,
+    ):
+        """
+        Test Case 4: Vswitch PVP split ring non-mergeable path performance with CBDMA
+        """
+        self.test_target = self.running_case
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.test_target
+        ]
+
+        cbdma_num = 1
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=cbdma_num)
+        allow_pci = [self.dut.ports_info[0]["pci"]]
+        for item in range(cbdma_num):
+            allow_pci.append(self.cbdma_dev_infos[item])
+        self.start_vhost_app(allow_pci=allow_pci)
+        virtio_path = "packed_vq=0,mrg_rxbuf=0,in_order=0"
+        self.start_virtio_testpmd(virtio_path=virtio_path, vlan_strip=True)
+        case_info = "split ring non-mergeable"
+        self.perf_test(case_info)
+        self.handle_expected()
+        self.handle_results()
+
+    def test_perf_vswitch_pvp_split_ring_vectorized_path_performance_with_cbdma(self):
+        """
+        Test Case 5: Vswitch PVP split ring vectorized path performance with CBDMA
+        """
+        self.test_target = self.running_case
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.test_target
+        ]
+
+        cbdma_num = 1
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=cbdma_num)
+        allow_pci = [self.dut.ports_info[0]["pci"]]
+        for item in range(cbdma_num):
+            allow_pci.append(self.cbdma_dev_infos[item])
+        self.start_vhost_app(allow_pci=allow_pci)
+        virtio_path = "packed_vq=0,mrg_rxbuf=0,in_order=1,vectorized=1"
+        self.start_virtio_testpmd(virtio_path=virtio_path)
+        case_info = "split ring vectorized"
+        self.perf_test(case_info)
+        self.handle_expected()
+        self.handle_results()
+
+    def test_perf_vswitch_pvp_packed_ring_inorder_mergeable_path_performance_with_cbdma(
+        self,
+    ):
+        """
+        Test Case 6: Vswitch PVP packed ring inorder mergeable path performance with CBDMA
+        """
+        self.test_target = self.running_case
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.test_target
+        ]
+
+        cbdma_num = 1
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=cbdma_num)
+        allow_pci = [self.dut.ports_info[0]["pci"]]
+        for item in range(cbdma_num):
+            allow_pci.append(self.cbdma_dev_infos[item])
+        self.start_vhost_app(allow_pci=allow_pci)
+        virtio_path = "packed_vq=1,mrg_rxbuf=1,in_order=1"
+        self.start_virtio_testpmd(virtio_path=virtio_path)
+        case_info = "packed ring inorder mergeable"
+        self.perf_test(case_info)
+        self.handle_expected()
+        self.handle_results()
+
+    def test_perf_vswitch_pvp_packed_ring_inorder_no_mergeable_path_performance_with_cbdma(
+        self,
+    ):
+        """
+        Test Case 7: Vswitch PVP packed ring inorder non-mergeable path performance with CBDMA
+        """
+        self.test_target = self.running_case
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.test_target
+        ]
+
+        cbdma_num = 1
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=cbdma_num)
+        allow_pci = [self.dut.ports_info[0]["pci"]]
+        for item in range(cbdma_num):
+            allow_pci.append(self.cbdma_dev_infos[item])
+        self.start_vhost_app(allow_pci=allow_pci)
+        virtio_path = "packed_vq=1,mrg_rxbuf=0,in_order=1"
+        self.start_virtio_testpmd(virtio_path=virtio_path)
+        case_info = "packed ring inorder non-mergeable"
+        self.perf_test(case_info)
+        self.handle_expected()
+        self.handle_results()
+
+    def test_perf_vswitch_pvp_packed_ring_mergeable_path_performance_with_cbdma(self):
+        """
+        Test Case 8: Vswitch PVP packed ring mergeable path performance with CBDMA
+        """
+        self.test_target = self.running_case
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.test_target
+        ]
+
+        cbdma_num = 1
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=cbdma_num)
+        allow_pci = [self.dut.ports_info[0]["pci"]]
+        for item in range(cbdma_num):
+            allow_pci.append(self.cbdma_dev_infos[item])
+        self.start_vhost_app(allow_pci=allow_pci)
+        virtio_path = "packed_vq=1,mrg_rxbuf=1,in_order=0"
+        self.start_virtio_testpmd(virtio_path=virtio_path)
+        case_info = "packed ring mergeable"
+        self.perf_test(case_info)
+        self.handle_expected()
+        self.handle_results()
+
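+    # NOTE: every test case repeats the same bind/launch/measure sequence and
+    # differs only in virtio_path, the label, and vlan_strip, so a shared
+    # helper along these lines could absorb the duplication (the helper name
+    # is illustrative and not part of this suite):
+    #
+    #     def _run_pvp_perf(self, virtio_path, case_info, vlan_strip=False):
+    #         self.test_target = self.running_case
+    #         self.expected_throughput = self.get_suite_cfg()[
+    #             "expected_throughput"
+    #         ][self.test_target]
+    #         self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=1)
+    #         allow_pci = [self.dut.ports_info[0]["pci"]] + self.cbdma_dev_infos[:1]
+    #         self.start_vhost_app(allow_pci=allow_pci)
+    #         self.start_virtio_testpmd(
+    #             virtio_path=virtio_path, vlan_strip=vlan_strip
+    #         )
+    #         self.perf_test(case_info)
+    #         self.handle_expected()
+    #         self.handle_results()
+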
+    def test_perf_vswitch_pvp_packed_ring_non_mergeable_path_performance_with_cbdma(
+        self,
+    ):
+        """
+        Test Case 9: Vswitch PVP packed ring non-mergeable path performance with CBDMA
+        """
+        self.test_target = self.running_case
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.test_target
+        ]
+
+        cbdma_num = 1
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=cbdma_num)
+        allow_pci = [self.dut.ports_info[0]["pci"]]
+        for item in range(cbdma_num):
+            allow_pci.append(self.cbdma_dev_infos[item])
+        self.start_vhost_app(allow_pci=allow_pci)
+        virtio_path = "packed_vq=1,mrg_rxbuf=0,in_order=0"
+        self.start_virtio_testpmd(virtio_path=virtio_path)
+        case_info = "packed ring non-mergeable"
+        self.perf_test(case_info)
+        self.handle_expected()
+        self.handle_results()
+
+    def test_perf_vswitch_pvp_packed_ring_vectorized_path_performance_with_cbdma(self):
+        """
+        Test Case 10: Vswitch PVP packed ring vectorized path performance with CBDMA
+        """
+        self.test_target = self.running_case
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.test_target
+        ]
+
+        cbdma_num = 1
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=cbdma_num)
+        allow_pci = [self.dut.ports_info[0]["pci"]]
+        for item in range(cbdma_num):
+            allow_pci.append(self.cbdma_dev_infos[item])
+        self.start_vhost_app(allow_pci=allow_pci)
+        virtio_path = "packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1"
+        # the test plan launches this path with --force-max-simd-bitwidth=512
+        self.start_virtio_testpmd(
+            virtio_path=virtio_path, force_max_simd_bitwidth=True
+        )
+        case_info = "packed ring vectorized"
+        self.perf_test(case_info)
+        self.handle_expected()
+        self.handle_results()
+
+    def close_all_session(self):
+        # check the attribute names (virtio_user0/virtio_user1), not the
+        # session suite names, so that getattr() can find the sessions
+        if getattr(self, "vhost_user", None):
+            self.dut.close_session(self.vhost_user)
+        if getattr(self, "virtio_user0", None):
+            self.dut.close_session(self.virtio_user0)
+        if getattr(self, "virtio_user1", None):
+            self.dut.close_session(self.virtio_user1)
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.virtio_user0_pmd.quit()
+        self.vhost_user.send_expect("^C", "# ", 20)
+        self.bind_cbdma_device_to_kernel()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+ """ + self.close_all_session() From patchwork Fri Apr 22 05:49:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 110086 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 64968A0093; Fri, 22 Apr 2022 07:49:31 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5FB094067B; Fri, 22 Apr 2022 07:49:31 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 093FA40040 for ; Fri, 22 Apr 2022 07:49:29 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650606570; x=1682142570; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=mKjkyvnSuii0ErDFtMyfp2qCdgChEkjRwvx0tJXlvhg=; b=NLYpAz6MlS3vXevw1vKgdp/4U50d4mkuJzB30qyZ60nPe3OduHS6qwjG osST9Xg1wVgGsH0K1C60py1JEU9S7wWCfw++hhF9NT58I/4pAvfDvE7Ik Rt35XNuUNJeuYlfomTzRSKtuRkMNiHPuRFgNsfDhgrSJM+WhJfaCkECBC St8pIypvxgzG42z7/CHF/+ef8htpDkIcuL9CQtpJHqegSOsOpY2a/se6D 571IweCqtrPYoRGxJBamhBl+SQLNBm/BkgwT63b4VbjFJOK3pIL/RAMwc 2JtWCc4acGJvHRWLWkHk8uGBAW0gAbIdx2hK59nFz4gAHhcA0Hc4/N622 Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10324"; a="263433070" X-IronPort-AV: E=Sophos;i="5.90,280,1643702400"; d="scan'208";a="263433070" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 22:49:28 -0700 X-IronPort-AV: E=Sophos;i="5.90,280,1643702400"; d="scan'208";a="728357486" Received: from unknown (HELO localhost.localdomain) ([10.239.251.222]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 22:49:27 -0700 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 6/6] conf/vswitch_pvp_multi_paths_performance_with_cbdma: add new testsuite config file Date: Fri, 22 Apr 2022 13:49:23 +0800 Message-Id: <20220422054923.1559457-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Add new testsuite config file into dts/conf. 
Signed-off-by: Wei Ling Tested-by: Chenyu Huang --- conf/vswitch_pvp_multi_paths_performance_with_cbdma.cfg | 7 +++++++ 1 file changed, 7 insertions(+) create mode 100644 conf/vswitch_pvp_multi_paths_performance_with_cbdma.cfg diff --git a/conf/vswitch_pvp_multi_paths_performance_with_cbdma.cfg b/conf/vswitch_pvp_multi_paths_performance_with_cbdma.cfg new file mode 100644 index 00000000..295029fa --- /dev/null +++ b/conf/vswitch_pvp_multi_paths_performance_with_cbdma.cfg @@ -0,0 +1,7 @@ +[suite] +update_expected = True +test_parameters = {64: [1024], 128: [1024], 256: [1024], 512: [1024], 1024: [1024], 1518: [1024]} +test_duration = 60 +accepted_tolerance = 10 +expected_throughput = {'test_perf_vswitch_pvp_split_ring_inorder_mergeable_path_performance_with_cbdma': {64: {1024: 0.000}, 128: {1024: 0.000}, 256: {1024: 0.000}, 512: {1024: 0.000}, 1024: {1024: 0.000}, 1518: {1024: 0.000}}, 'test_perf_vswitch_pvp_split_ring_inorder_no_mergeable_path_performance_with_cbdma': {64: {1024: 0.000}, 128: {1024: 0.000}, 256: {1024: 0.000}, 512: {1024: 0.000}, 1024: {1024: 0.000}, 1518: {1024: 0.000}}, 'test_perf_vswitch_pvp_split_ring_mergeable_path_performance_with_cbdma': {64: {1024: 0.000}, 128: {1024: 0.000}, 256: {1024: 0.000}, 512: {1024: 0.000}, 1024: {1024: 0.000}, 1518: {1024: 0.000}}, 'test_perf_vswitch_pvp_split_ring_non_mergeable_path_performance_with_cbdma': {64: {1024: 0.000}, 128: {1024: 0.000}, 256: {1024: 0.000}, 512: {1024: 0.000}, 1024: {1024: 0.000}, 1518: {1024: 0.000}}, 'test_perf_vswitch_pvp_split_ring_vectorized_path_performance_with_cbdma': {64: {1024: 0.000}, 128: {1024: 0.000}, 256: {1024: 0.000}, 512: {1024: 0.000}, 1024: {1024: 0.000}, 1518: {1024: 0.000}}, 'test_perf_vswitch_pvp_packed_ring_inorder_mergeable_path_performance_with_cbdma': {64: {1024: 0.000}, 128: {1024: 0.000}, 256: {1024: 0.000}, 512: {1024: 0.000}, 1024: {1024: 0.000}, 1518: {1024: 0.000}}, 'test_perf_vswitch_pvp_packed_ring_inorder_no_mergeable_path_performance_with_cbdma': {64: {1024: 0.000}, 128: {1024: 0.000}, 256: {1024: 0.000}, 512: {1024: 0.000}, 1024: {1024: 0.000}, 1518: {1024: 0.000}}, 'test_perf_vswitch_pvp_packed_ring_mergeable_path_performance_with_cbdma': {64: {1024: 0.000}, 128: {1024: 0.000}, 256: {1024: 0.000}, 512: {1024: 0.000}, 1024: {1024: 0.000}, 1518: {1024: 0.000}}, 'test_perf_vswitch_pvp_packed_ring_non_mergeable_path_performance_with_cbdma': {64: {1024: 0.000}, 128: {1024: 0.000}, 256: {1024: 0.000}, 512: {1024: 0.000}, 1024: {1024: 0.000}, 1518: {1024: 0.000}}, 'test_perf_vswitch_pvp_packed_ring_vectorized_path_performance_with_cbdma': {64: {1024: 0.000}, 128: {1024: 0.000}, 256: {1024: 0.000}, 512: {1024: 0.000}, 1024: {1024: 0.000}, 1518: {1024: 0.000}}} +