From patchwork Tue May 23 02:30:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 127186
From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1] vswitch_pvp_multi_paths_performance_with_cbdma: delete testsuite Date: Tue, 23 May 2023 10:30:21 +0800 Message-Id: <20230523023021.167266-1-weix.ling@intel.com> X-Mailer: git-send-email 2.34.1 The dpdk-vhost app is not applicable for PVP multi-path performance testing with CBDMA channels, so delete this test suite. Signed-off-by: Wei Ling --- test_plans/index.rst | 1 - ...paths_performance_with_cbdma_test_plan.rst | 366 ------------ ..._pvp_multi_paths_performance_with_cbdma.py | 563 ------------------ 3 files changed, 930 deletions(-) delete mode 100644 test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst delete mode 100644 tests/TestSuite_vswitch_pvp_multi_paths_performance_with_cbdma.py diff --git a/test_plans/index.rst b/test_plans/index.rst index a0c056cd..b907a1db 100644 --- a/test_plans/index.rst +++ b/test_plans/index.rst @@ -248,7 +248,6 @@ The following are the test plans for the DPDK DTS automated test system. dpdk_gso_lib_test_plan vswitch_sample_cbdma_test_plan vswitch_sample_dsa_test_plan - vswitch_pvp_multi_paths_performance_with_cbdma_test_plan vxlan_gpe_support_in_i40e_test_plan pvp_diff_qemu_version_test_plan pvp_share_lib_test_plan diff --git a/test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst b/test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst deleted file mode 100644 index 98f4dcea..00000000 --- a/test_plans/vswitch_pvp_multi_paths_performance_with_cbdma_test_plan.rst +++ /dev/null @@ -1,366 +0,0 @@ -..
SPDX-License-Identifier: BSD-3-Clause - Copyright(c) 2022 Intel Corporation - -======================================================== -Vswitch PVP multi-paths performance with CBDMA test plan -======================================================== - -Description -=========== - -Benchmark PVP multi-paths performance with CBDMA in the vhost sample, -including 10 Tx/Rx paths: inorder mergeable, inorder non-mergeable, -mergeable, non-mergeable, vectorized_rx, virtio 1.1 inorder mergeable, -virtio 1.1 inorder non-mergeable, virtio 1.1 mergeable, virtio 1.1 non-mergeable, -virtio 1.1 vectorized path. One core is given to vhost and one to virtio. -Since dpdk-22.03, the vswitch sample adds a new option, --total-num-mbufs, -which lets the user set a larger mbuf pool to avoid a launch failure. For example, dpdk-vhost -will fail to launch with a 40G i40e port without setting a larger mbuf pool. -For more about the vhost switch sample, please refer to the dpdk docs: -http://doc.dpdk.org/guides/sample_app_ug/vhost.html -For the virtio-user vdev parameters, refer to the dpdk doc: -https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage - -Prerequisites -============= - -Topology -------- - -Test flow: TG-->nic-->vswitch-->virtio-user-->vswitch-->nic-->TG - -Hardware -------- -Supported NICs: all except Intel® Ethernet 800 Series, which does not support VMDQ - -Software -------- -Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz - -General set up ------------- -1. Compile DPDK and vhost example:: - - # meson - # meson configure -Dexamples=vhost - # ninja -C -j 110 - -2. Get the pci device id and DMA device id of the DUT.
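The 10 Tx/Rx paths listed in the Description differ only in four virtio-user vdev flags. As a quick cross-check, the flag combinations used by the 10 test cases in this plan can be tabulated in Python (`vdev_arg` is an illustrative helper, not a DTS API):

```python
# Vdev flag strings for the 10 PVP paths, taken verbatim from the
# test cases in this plan. The split/packed choice is packed_vq,
# the rest is mrg_rxbuf/in_order plus the optional vectorized flag.
PATHS = {
    "split ring inorder mergeable":      "packed_vq=0,mrg_rxbuf=1,in_order=1",
    "split ring inorder non-mergeable":  "packed_vq=0,mrg_rxbuf=0,in_order=1",
    "split ring mergeable":              "packed_vq=0,mrg_rxbuf=1,in_order=0",
    "split ring non-mergeable":          "packed_vq=0,mrg_rxbuf=0,in_order=0",
    "split ring vectorized":             "packed_vq=0,mrg_rxbuf=0,in_order=1,vectorized=1",
    "packed ring inorder mergeable":     "packed_vq=1,mrg_rxbuf=1,in_order=1",
    "packed ring inorder non-mergeable": "packed_vq=1,mrg_rxbuf=0,in_order=1",
    "packed ring mergeable":             "packed_vq=1,mrg_rxbuf=1,in_order=0",
    "packed ring non-mergeable":         "packed_vq=1,mrg_rxbuf=0,in_order=0",
    "packed ring vectorized":            "packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1",
}

def vdev_arg(path, mac="00:11:22:33:44:10", sock="/tmp/vhost-net"):
    """Compose the full --vdev argument for one path (illustrative helper)."""
    return "net_virtio_user0,mac=%s,path=%s,queues=1,%s" % (mac, sock, PATHS[path])
```

Each test case below simply passes one of these strings to testpmd's --vdev option; everything else in the launch commands is identical.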
- -For example, 0000:18:00.0 is the pci device id and 0000:00:04.0 is the DMA device id:: - - # ./usertools/dpdk-devbind.py -s - - Network devices using kernel driver - =================================== - 0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci - - DMA devices using kernel driver - =============================== - 0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci - -Test case -========= - -Common steps ------------- -1. Bind one physical port and one CBDMA port to vfio-pci:: - - # ./usertools/dpdk-devbind.py -b vfio-pci - # ./usertools/dpdk-devbind.py -b vfio-pci - - For example:: - ./usertools/dpdk-devbind.py -b vfio-pci 0000:18:00.0 - ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 - -2. Inject packets of different sizes to the NIC by traffic generator:: - - The packet sizes include [64, 128, 256, 512, 1024, 1518], and the packet format is as follows. - +-------------+-------------+-------------+-------------+ - | MAC | MAC | IPV4 | IPV4 | - | Src address | Dst address | Src address | Dst address | - |-------------|-------------|-------------|-------------| - | Any MAC | Virtio mac | Any IP | Any IP | - +-------------+-------------+-------------+-------------+ - All the packets in this test plan use the Virtio mac: 00:11:22:33:44:10. - -Test Case 1: Vswitch PVP split ring inorder mergeable path performance with CBDMA --------------------------------------------------------------------------------- -This case uses Vswitch and Traffic generator(For example, Trex) to test performance of split ring inorder mergeable path with CBDMA. - -1. Bind one physical port and one CBDMA port to vfio-pci as common step 1. - -2. Launch dpdk-vhost by below command:: - - #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \ - -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000 - -3.
Launch virtio-user with split ring inorder mergeable path:: - - #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \ - -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - -4. Send packets from virtio-user to let vswitch know the mac addr:: - - testpmd> set fwd mac - testpmd> start tx_first - testpmd> stop - testpmd> start - -5. Send packets by traffic generator as common step 2, and check the throughput with below command:: - - testpmd> show port stats all - -Test Case 2: Vswitch PVP split ring inorder non-mergeable path performance with CBDMA -------------------------------------------------------------------------------------- -This case uses Vswitch and Traffic generator(For example, Trex) to test performance of split ring inorder non-mergeable path with CBDMA. - -1. Bind one physical port and one CBDMA port to vfio-pci as common step 1. - -2. Launch dpdk-vhost by below command:: - - #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \ - -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000 - -3. Launch virtio-user with split ring non-mergeable path:: - - #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \ - -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - -4. Send packets from virtio-user to let vswitch know the mac addr:: - - testpmd> set fwd mac - testpmd> start tx_first - testpmd> stop - testpmd> start - -5. 
Send packets by traffic generator as common step 2, and check the throughput with below command:: - - testpmd> show port stats all - -Test Case 3: Vswitch PVP split ring mergeable path performance with CBDMA -------------------------------------------------------------------------- -This case uses Vswitch and Traffic generator(For example, Trex) to test performance of split ring mergeable path with CBDMA. - -1. Bind one physical port and one CBDMA port to vfio-pci as common step 1. - -2. Launch dpdk-vhost by below command:: - - #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \ - -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000 - -3. Launch virtio-user with split ring mergeable path:: - - #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \ - -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - -4. Send packets from virtio-user to let vswitch know the mac addr:: - - testpmd> set fwd mac - testpmd> start tx_first - testpmd> stop - testpmd> start - -5. Send packets by traffic generator as common step 2, and check the throughput with below command:: - - testpmd> show port stats all - -Test Case 4: Vswitch PVP split ring non-mergeable path performance with CBDMA ------------------------------------------------------------------------------ -This case uses Vswitch and Traffic generator(For example, Trex) to test performance of split ring non-mergeable path with CBDMA. - -1. Bind one physical port and one CBDMA port to vfio-pci as common step 1. - -2. Launch dpdk-vhost by below command:: - - #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \ - -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000 - -3. 
Launch virtio-user with split ring non-mergeable path:: - - #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \ - -- -i --enable-hw-vlan-strip --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - -4. Send packets from virtio-user to let vswitch know the mac addr:: - - testpmd> set fwd mac - testpmd> start tx_first - testpmd> stop - testpmd> start - -5. Send packets by traffic generator as common step 2, and check the throughput with below command:: - - testpmd> show port stats all - -Test Case 5: Vswitch PVP split ring vectorized path performance with CBDMA --------------------------------------------------------------------------- -This case uses Vswitch and Traffic generator(For example, Trex) to test performance of split ring vectorized path with CBDMA. - -1. Bind one physical port and one CBDMA port to vfio-pci as common step 1. - -2. Launch dpdk-vhost by below command:: - - #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \ - -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000 - -3. Launch virtio-user with split ring vectorized path:: - - #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1,vectorized=1 \ - -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - -4. Send packets from virtio-user to let vswitch know the mac addr:: - - testpmd> set fwd mac - testpmd> start tx_first - testpmd> stop - testpmd> start - -5. 
Send packets by traffic generator as common step 2, and check the throughput with below command:: - - testpmd> show port stats all - - -Test Case 6: Vswitch PVP packed ring inorder mergeable path performance with CBDMA ----------------------------------------------------------------------------------- -This case uses Vswitch and Traffic generator(For example, Trex) to test performance of packed ring inorder mergeable path with CBDMA. - -1. Bind one physical port and one CBDMA port to vfio-pci as common step 1. - -2. Launch dpdk-vhost by below command:: - - #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \ - -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000 - -3. Launch virtio-user with packed ring inorder mergeable path:: - - #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \ - -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - -4. Send packets from virtio-user to let vswitch know the mac addr:: - - testpmd> set fwd mac - testpmd> start tx_first - testpmd> stop - testpmd> start - -5. Send packets by traffic generator as common step 2, and check the throughput with below command:: - - testpmd> show port stats all - -Test Case 7: Vswitch PVP packed ring inorder non-mergeable path performance with CBDMA --------------------------------------------------------------------------------------- -This case uses Vswitch and Traffic generator(For example, Trex) to test performance of packed ring inorder non-mergeable path with CBDMA. - -1. Bind one physical port and one CBDMA port to vfio-pci as common step 1. - -2. 
Launch dpdk-vhost by below command:: - - #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \ - -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000 - -3. Launch virtio-user with packed ring inorder non-mergeable path:: - - #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1 \ - -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - -4. Send packets from virtio-user to let vswitch know the mac addr:: - - testpmd> set fwd mac - testpmd> start tx_first - testpmd> stop - testpmd> start - -5. Send packets by traffic generator as common step 2, and check the throughput with below command:: - - testpmd> show port stats all - -Test Case 8: Vswitch PVP packed ring mergeable path performance with CBDMA --------------------------------------------------------------------------- -This case uses Vswitch and Traffic generator(For example, Trex) to test performance of packed ring mergeable path with CBDMA. - -1. Bind one physical port and one CBDMA port to vfio-pci as common step 1. - -2. Launch dpdk-vhost by below command:: - - #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \ - -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000 - -3. Launch virtio-user with packed ring mergeable path:: - - #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \ - -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - -4. Send packets from virtio-user to let vswitch know the mac addr:: - - testpmd> set fwd mac - testpmd> start tx_first - testpmd> stop - testpmd> start - -5. 
Send packets by traffic generator as common step 2, and check the throughput with below command:: - - testpmd> show port stats all - -Test Case 9: Vswitch PVP packed ring non-mergeable path performance with CBDMA ------------------------------------------------------------------------------- -This case uses Vswitch and Traffic generator(For example, Trex) to test performance of packed ring non-mergeable path with CBDMA. - -1. Bind one physical port and one CBDMA port to vfio-pci as common step 1. - -2. Launch dpdk-vhost by below command:: - - #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \ - -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000 - -3. Launch virtio-user with packed ring non-mergeable path:: - - #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \ - -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - -4. Send packets from virtio-user to let vswitch know the mac addr:: - - testpmd> set fwd mac - testpmd> start tx_first - testpmd> stop - testpmd> start - -5. Send packets by traffic generator as common step 2, and check the throughput with below command:: - - testpmd> show port stats all - -Test Case 10: Vswitch PVP packed ring vectorized path performance with CBDMA ----------------------------------------------------------------------------- -This case uses Vswitch and Traffic generator(For example, Trex) to test performance of packed ring vectorized path with CBDMA. - -1. Bind one physical port and one CBDMA port to vfio-pci as common step 1. - -2. Launch dpdk-vhost by below command:: - - #.//examples/dpdk-vhost -l 2-3 -n 4 -a 0000:18:00.0 -a 0000:00:04.0 \ - -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --total-num-mbufs 600000 - -3. 
Launch virtio-user with packed ring vectorized path:: - - #.//app/dpdk-testpmd -l 5-6 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ - -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - -4. Send packets from virtio-user to let vswitch know the mac addr:: - - testpmd> set fwd mac - testpmd> start tx_first - testpmd> stop - testpmd> start - -5. Send packets by traffic generator as common step 2, and check the throughput with below command:: - - testpmd> show port stats all diff --git a/tests/TestSuite_vswitch_pvp_multi_paths_performance_with_cbdma.py b/tests/TestSuite_vswitch_pvp_multi_paths_performance_with_cbdma.py deleted file mode 100644 index c2d505e3..00000000 --- a/tests/TestSuite_vswitch_pvp_multi_paths_performance_with_cbdma.py +++ /dev/null @@ -1,563 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2022 Intel Corporation -# - -""" -DPDK Test suite. -""" - -import json -import os -import re -import time -from copy import deepcopy - -import framework.rst as rst -import framework.utils as utils -from framework.packet import Packet -from framework.pktgen import PacketGeneratorHelper -from framework.pmd_output import PmdOutput -from framework.settings import UPDATE_EXPECTED, load_global_setting -from framework.test_case import TestCase -from tests.virtio_common import basic_common as BC -from tests.virtio_common import cbdma_common as CC - - -class TestVswitchPvpMultiPathsPerformanceWithCbdma(TestCase): - def set_up_all(self): - """ - Run at the start of each test suite. 
- """ - self.build_vhost_app() - self.dut_ports = self.dut.get_ports() - self.number_of_ports = 1 - self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing") - self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) - self.cores = self.dut.get_core_list("all", socket=self.ports_socket) - self.vhost_core_list = self.cores[0:2] - self.vuser0_core_list = self.cores[2:4] - self.vhost_core_mask = utils.create_mask(self.vhost_core_list) - self.mem_channels = self.dut.get_memory_channels() - # get cbdma device - self.cbdma_dev_infos = [] - self.dmas_info = None - self.device_str = None - self.out_path = "/tmp" - out = self.tester.send_expect("ls -d %s" % self.out_path, "# ") - if "No such file or directory" in out: - self.tester.send_expect("mkdir -p %s" % self.out_path, "# ") - self.base_dir = self.dut.base_dir.replace("~", "/root") - txport = self.tester.get_local_port(self.dut_ports[0]) - self.txItf = self.tester.get_interface(txport) - self.virtio_user0_mac = "00:11:22:33:44:10" - self.vm_num = 2 - self.app_testpmd_path = self.dut.apps_name["test-pmd"] - self.pktgen_helper = PacketGeneratorHelper() - self.vhost_user = self.dut.new_session(suite="vhost-user") - self.virtio_user0 = self.dut.new_session(suite="virtio-user0") - self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0) - self.frame_size = [64, 128, 256, 512, 1024, 1518] - self.save_result_flag = True - self.json_obj = {} - self.CC = CC(self) - self.BC = BC(self) - - def set_up(self): - """ - Run before each test case. 
- """ - self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") - self.dut.send_expect("killall -I dpdk-vhost", "#", 20) - self.dut.send_expect("killall -I dpdk-testpmd", "#", 20) - self.dut.send_expect("killall -I qemu-system-x86_64", "#", 20) - - # Prepare the result table - self.table_header = ["Frame"] - self.table_header.append("Mode/RXD-TXD") - self.table_header.append("Mpps") - self.table_header.append("% linerate") - self.result_table_create(self.table_header) - - # test parameters include: frame sizes, descriptor numbers - self.test_parameters = self.get_suite_cfg()["test_parameters"] - - # traffic duration in seconds - self.test_duration = self.get_suite_cfg()["test_duration"] - - # initialize throughput attribute - # {'$framesize': {'$nb_desc': 'throughput'}} - self.throughput = {} - - # Accepted tolerance in Mpps - self.gap = self.get_suite_cfg()["accepted_tolerance"] - self.test_result = {} - self.nb_desc = self.test_parameters[64][0] - - def build_vhost_app(self): - out = self.dut.build_dpdk_apps("./examples/vhost") - self.verify("Error" not in out, "vhost compilation error") - - def start_vhost_app(self, allow_pci, cbdmas): - """ - launch the vhost app on the vhost side - """ - self.app_path = self.dut.apps_name["vhost"] - socket_file_param = "--socket-file ./vhost-net" - allow_option = "" - for item in allow_pci: - allow_option += " -a {}".format(item) - params = ( - " -c {} -n {} {} -- -p 0x1 --mergeable 1 --vm2vm 1 --stats 1 " - + socket_file_param - + " --dmas [txd0@{}] --total-num-mbufs 600000" - ).format( - self.vhost_core_mask, - self.mem_channels, - allow_option, - cbdmas[0], - ) - self.command_line = self.app_path + params - self.vhost_user.send_command(self.command_line) - time.sleep(3) - - def start_virtio_testpmd( - self, - virtio_path, - vlan_strip=False, - force_max_simd_bitwidth=False, - ): - """ - launch testpmd as virtio-user with the vhost-net socket - """ - eal_params = ( - "
--vdev=net_virtio_user0,mac={},path=./vhost-net,queues=1,{}".format( - self.virtio_user0_mac, virtio_path - ) - ) - if self.BC.check_2M_hugepage_size(): - eal_params += " --single-file-segments" - if force_max_simd_bitwidth: - eal_params += " --force-max-simd-bitwidth=512" - params = "--rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1" - if vlan_strip: - params = "--rx-offloads=0x0 --enable-hw-vlan-strip " + params - self.virtio_user0_pmd.start_testpmd( - cores=self.vuser0_core_list, - param=params, - eal_param=eal_params, - no_pci=True, - ports=[], - prefix="virtio-user0", - fixed_prefix=True, - ) - self.virtio_user0_pmd.execute_cmd("set fwd mac") - self.virtio_user0_pmd.execute_cmd("start tx_first") - self.virtio_user0_pmd.execute_cmd("stop") - self.virtio_user0_pmd.execute_cmd("start") - - def config_stream(self, frame_size): - tgen_input = [] - rx_port = self.tester.get_local_port(self.dut_ports[0]) - tx_port = self.tester.get_local_port(self.dut_ports[0]) - pkt = Packet(pkt_type="UDP", pkt_len=frame_size) - pkt.config_layer("ether", {"dst": self.virtio_user0_mac}) - pcap = os.path.join( - self.out_path, "vswitch_pvp_multi_path_%s.pcap" % (frame_size) - ) - pkt.save_pcapfile(self.tester, pcap) - tgen_input.append((rx_port, tx_port, pcap)) - return tgen_input - - def perf_test(self, case_info): - for frame_size in self.frame_size: - self.throughput[frame_size] = dict() - self.logger.info( - "Test running at parameters: " + "framesize: {}".format(frame_size) - ) - tgenInput = self.config_stream(frame_size) - # clear streams before add new streams - self.tester.pktgen.clear_streams() - # run packet generator - streams = self.pktgen_helper.prepare_stream_from_tginput( - tgenInput, 100, None, self.tester.pktgen - ) - # set traffic option - traffic_opt = {"duration": 5} - _, pps = self.tester.pktgen.measure_throughput( - stream_ids=streams, options=traffic_opt - ) - Mpps = pps / 1000000.0 - linerate = ( - Mpps - * 100 - / float(self.wirespeed(self.nic, frame_size, 
self.number_of_ports)) - ) - self.throughput[frame_size][self.nb_desc] = Mpps - results_row = [frame_size] - results_row.append(case_info) - results_row.append(Mpps) - results_row.append(linerate) - self.result_table_add(results_row) - self.result_table_print() - - def handle_expected(self): - """ - Update expected numbers in the configuration file: $DTS_CFG_FOLDER/$suite_name.cfg - """ - if load_global_setting(UPDATE_EXPECTED) == "yes": - for frame_size in self.test_parameters.keys(): - for nb_desc in self.test_parameters[frame_size]: - self.expected_throughput[frame_size][nb_desc] = round( - self.throughput[frame_size][nb_desc], 3 - ) - - def handle_results(self): - """ - Results handling process: - 1. save to self.test_result - 2. create the test results table - 3. save to a json file for Open Lab - """ - header = self.table_header - header.append("Expected Throughput") - header.append("Throughput Difference") - for frame_size in self.test_parameters.keys(): - wirespeed = self.wirespeed(self.nic, frame_size, self.number_of_ports) - ret_datas = {} - for nb_desc in self.test_parameters[frame_size]: - ret_data = {} - ret_data[header[0]] = frame_size - ret_data[header[1]] = nb_desc - ret_data[header[2]] = "{:.3f} Mpps".format( - self.throughput[frame_size][nb_desc] - ) - ret_data[header[3]] = "{:.3f}%".format( - self.throughput[frame_size][nb_desc] * 100 / wirespeed - ) - ret_data[header[4]] = "{:.3f} Mpps".format( - self.expected_throughput[frame_size][nb_desc] - ) - ret_data[header[5]] = "{:.3f} Mpps".format( - self.throughput[frame_size][nb_desc] - - self.expected_throughput[frame_size][nb_desc] - ) - ret_datas[nb_desc] = deepcopy(ret_data) - self.test_result[frame_size] = deepcopy(ret_datas) - # Create test results table - self.result_table_create(header) - for frame_size in self.test_parameters.keys(): - for nb_desc in self.test_parameters[frame_size]: - table_row = list() - for i in range(len(header)): - table_row.append(self.test_result[frame_size][nb_desc][header[i]]) -
self.result_table_add(table_row) - # present test results to screen - self.result_table_print() - # save test results as a file - if self.save_result_flag: - self.save_result(self.test_result) - - def save_result(self, data): - """ - Saves the test results as a separate file named - <self.nic>_<self.suite_name>.json in the output folder - if self.save_result_flag is True - """ - case_name = self.running_case - self.json_obj[case_name] = list() - status_result = [] - for frame_size in self.test_parameters.keys(): - for nb_desc in self.test_parameters[frame_size]: - row_in = self.test_result[frame_size][nb_desc] - row_dict0 = dict() - row_dict0["performance"] = list() - row_dict0["parameters"] = list() - result_throughput = float(row_in["Mpps"].split()[0]) - expected_throughput = float(row_in["Expected Throughput"].split()[0]) - # delta value and accepted tolerance in percentage - delta = result_throughput - expected_throughput - gap = expected_throughput * -self.gap * 0.01 - self.logger.info("Accepted tolerance (Mpps): %f" % gap) - self.logger.info("Throughput difference (Mpps): %f" % delta) - if result_throughput > expected_throughput + gap: - row_dict0["status"] = "PASS" - else: - row_dict0["status"] = "FAIL" - row_dict1 = dict( - name="Throughput", value=result_throughput, unit="Mpps", delta=delta - ) - row_dict2 = dict( - name="Txd/Rxd", value=row_in["Mode/RXD-TXD"], unit="descriptor" - ) - row_dict3 = dict(name="frame_size", value=row_in["Frame"], unit="bytes") - row_dict0["performance"].append(row_dict1) - row_dict0["parameters"].append(row_dict2) - row_dict0["parameters"].append(row_dict3) - self.json_obj[case_name].append(row_dict0) - status_result.append(row_dict0["status"]) - with open( - os.path.join( - rst.path2Result, "{0:s}_{1}.json".format(self.nic, self.suite_name) - ), - "w", - ) as fp: - json.dump(self.json_obj, fp) - self.verify("FAIL" not in status_result,
"Exceeded Gap") - - def test_perf_vswitch_pvp_split_ring_inorder_mergeable_path_performance_with_cbdma( - self, - ): - """ - Test Case 1: Vswitch PVP split ring inorder mergeable path performance with CBDMA - """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - - cbdmas = self.CC.bind_cbdma_to_dpdk_driver( - cbdma_num=1, driver_name="vfio-pci", socket=self.ports_socket - ) - ports = cbdmas - ports.append(self.dut.ports_info[0]["pci"]) - self.start_vhost_app(allow_pci=ports, cbdmas=cbdmas) - virtio_path = "packed_vq=0,mrg_rxbuf=1,in_order=1" - self.start_virtio_testpmd(virtio_path=virtio_path) - case_info = "split ring inorder mergeable" - self.perf_test(case_info) - self.handle_expected() - self.handle_results() - - def test_perf_vswitch_pvp_split_ring_inorder_no_mergeable_path_performance_with_cbdma( - self, - ): - """ - Test Case 2: Vswitch PVP split ring inorder non-mergeable path performance with CBDMA - """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - - cbdmas = self.CC.bind_cbdma_to_dpdk_driver( - cbdma_num=1, driver_name="vfio-pci", socket=self.ports_socket - ) - ports = cbdmas - ports.append(self.dut.ports_info[0]["pci"]) - self.start_vhost_app(allow_pci=ports, cbdmas=cbdmas) - virtio_path = "packed_vq=0,mrg_rxbuf=0,in_order=1" - self.start_virtio_testpmd(virtio_path=virtio_path) - case_info = "split ring inorder non-mergeable" - self.perf_test(case_info) - self.handle_expected() - self.handle_results() - - def test_perf_vswitch_pvp_split_ring_mergeable_path_performance_with_cbdma(self): - """ - Test Case 3: Vswitch PVP split ring mergeable path performance with CBDMA - """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - - cbdmas = self.CC.bind_cbdma_to_dpdk_driver( - cbdma_num=1, 
driver_name="vfio-pci", socket=self.ports_socket - ) - ports = cbdmas - ports.append(self.dut.ports_info[0]["pci"]) - self.start_vhost_app(allow_pci=ports, cbdmas=cbdmas) - virtio_path = "packed_vq=0,mrg_rxbuf=1,in_order=0" - self.start_virtio_testpmd(virtio_path=virtio_path) - case_info = "split ring mergeable" - self.perf_test(case_info) - self.handle_expected() - self.handle_results() - - def test_perf_vswitch_pvp_split_ring_non_mergeable_path_performance_with_cbdma( - self, - ): - """ - Test Case 4: Vswitch PVP split ring non-mergeable path performance with CBDMA - """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - - cbdmas = self.CC.bind_cbdma_to_dpdk_driver( - cbdma_num=1, driver_name="vfio-pci", socket=self.ports_socket - ) - ports = cbdmas - ports.append(self.dut.ports_info[0]["pci"]) - self.start_vhost_app(allow_pci=ports, cbdmas=cbdmas) - virtio_path = "packed_vq=0,mrg_rxbuf=0,in_order=0" - self.start_virtio_testpmd(virtio_path=virtio_path, vlan_strip=True) - case_info = "split ring non-mergeable" - self.perf_test(case_info) - self.handle_expected() - self.handle_results() - - def test_perf_vswitch_pvp_split_ring_vectorized_path_performance_with_cbdma(self): - """ - Test Case 5: Vswitch PVP split ring vectorized path performance with CBDMA - """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - - cbdmas = self.CC.bind_cbdma_to_dpdk_driver( - cbdma_num=1, driver_name="vfio-pci", socket=self.ports_socket - ) - ports = cbdmas - ports.append(self.dut.ports_info[0]["pci"]) - self.start_vhost_app(allow_pci=ports, cbdmas=cbdmas) - virtio_path = "packed_vq=0,mrg_rxbuf=0,in_order=1,vectorized=1" - self.start_virtio_testpmd(virtio_path=virtio_path) - case_info = "split ring vectorized" - self.perf_test(case_info) - self.handle_expected() - self.handle_results() - - def 
test_perf_vswitch_pvp_packed_ring_inorder_mergeable_path_performance_with_cbdma( - self, - ): - """ - Test Case 6: Vswitch PVP virtio 1.1 inorder mergeable path performance with CBDMA - """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - - cbdmas = self.CC.bind_cbdma_to_dpdk_driver( - cbdma_num=1, driver_name="vfio-pci", socket=self.ports_socket - ) - ports = cbdmas - ports.append(self.dut.ports_info[0]["pci"]) - self.start_vhost_app(allow_pci=ports, cbdmas=cbdmas) - virtio_path = "packed_vq=1,mrg_rxbuf=1,in_order=1" - self.start_virtio_testpmd(virtio_path=virtio_path) - case_info = "split ring inorder mergeable" - self.perf_test(case_info) - self.handle_expected() - self.handle_results() - - def test_perf_vswitch_pvp_packed_ring_inorder_no_mergeable_path_performance_with_cbdma( - self, - ): - """ - Test Case 7: Vswitch PVP virtio 1.1 inorder non-mergeable path performance with CBDMA - """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - - cbdmas = self.CC.bind_cbdma_to_dpdk_driver( - cbdma_num=1, driver_name="vfio-pci", socket=self.ports_socket - ) - ports = cbdmas - ports.append(self.dut.ports_info[0]["pci"]) - self.start_vhost_app(allow_pci=ports, cbdmas=cbdmas) - virtio_path = "packed_vq=1,mrg_rxbuf=0,in_order=1" - self.start_virtio_testpmd(virtio_path=virtio_path) - case_info = "split ring inorder non-mergeable" - self.perf_test(case_info) - self.handle_expected() - self.handle_results() - - def test_perf_vswitch_pvp_packed_ring_mergeable_path_performance_with_cbdma(self): - """ - Test Case 8: Vswitch PVP virtio 1.1 mergeable path performance with CBDMA - """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - - cbdmas = self.CC.bind_cbdma_to_dpdk_driver( - cbdma_num=1, driver_name="vfio-pci", 
socket=self.ports_socket - ) - ports = cbdmas - ports.append(self.dut.ports_info[0]["pci"]) - self.start_vhost_app(allow_pci=ports, cbdmas=cbdmas) - virtio_path = "packed_vq=1,mrg_rxbuf=1,in_order=0" - self.start_virtio_testpmd(virtio_path=virtio_path) - case_info = "split ring mergeable" - self.perf_test(case_info) - self.handle_expected() - self.handle_results() - - def test_perf_vswitch_pvp_packed_ring_non_mergeable_path_performance_with_cbdma( - self, - ): - """ - Test Case 9: Vswitch PVP virtio 1.1 non-mergeable path performance with CBDMA - """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - - cbdmas = self.CC.bind_cbdma_to_dpdk_driver( - cbdma_num=1, driver_name="vfio-pci", socket=self.ports_socket - ) - ports = cbdmas - ports.append(self.dut.ports_info[0]["pci"]) - self.start_vhost_app(allow_pci=ports, cbdmas=cbdmas) - virtio_path = "packed_vq=1,mrg_rxbuf=0,in_order=0" - self.start_virtio_testpmd(virtio_path=virtio_path) - case_info = "split ring non-mergeable" - self.perf_test(case_info) - self.handle_expected() - self.handle_results() - - def test_perf_vswitch_pvp_packed_ring_vectorized_path_performance_with_cbdma(self): - """ - Test Case 10: Vswitch PVP virtio 1.1 vectorized path performance with CBDMA - """ - self.test_target = self.running_case - self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ - self.test_target - ] - - cbdmas = self.CC.bind_cbdma_to_dpdk_driver( - cbdma_num=1, driver_name="vfio-pci", socket=self.ports_socket - ) - ports = cbdmas - ports.append(self.dut.ports_info[0]["pci"]) - self.start_vhost_app(allow_pci=ports, cbdmas=cbdmas) - virtio_path = "packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1" - self.start_virtio_testpmd(virtio_path=virtio_path) - case_info = "split ring vectorized" - self.perf_test(case_info) - self.handle_expected() - self.handle_results() - - def close_all_session(self): - if getattr(self, "vhost_user", 
None): - self.dut.close_session(self.vhost_user) - if getattr(self, "virtio-user0", None): - self.dut.close_session(self.virtio_user0) - if getattr(self, "virtio-user1", None): - self.dut.close_session(self.virtio_user1) - - def tear_down(self): - """ - Run after each test case. - """ - self.virtio_user0_pmd.quit() - self.vhost_user.send_expect("^C", "# ", 20) - - def tear_down_all(self): - """ - Run after each test suite. - """ - self.CC.bind_cbdma_to_kernel_driver(cbdma_idxs="all") - self.close_all_session()