From patchwork Mon Feb 6 04:01:30 2023 X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 123079 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling 
Subject: [dts][PATCH V1 1/4] test_plans/index: add pvp_vhost_async_multi_paths_performance_dsa_test_plan Date: Mon, 6 Feb 2023 12:01:30 +0800 Message-Id: <20230206040130.3641670-1-weix.ling@intel.com>

Add pvp_vhost_async_multi_paths_performance_dsa_test_plan.rst in test_plans/index.rst.

Signed-off-by: Wei Ling --- test_plans/index.rst | 1 + 1 file changed, 1 insertion(+) diff --git a/test_plans/index.rst b/test_plans/index.rst index ea13fc8e..9aa97881 100644 --- a/test_plans/index.rst +++ b/test_plans/index.rst @@ -132,6 +132,7 @@ The following are the test plans for the DPDK DTS automated test system.
     pvp_multi_paths_performance_test_plan
     pvp_multi_paths_vhost_single_core_performance_test_plan
     pvp_multi_paths_virtio_single_core_performance_test_plan
+    pvp_vhost_async_multi_paths_performance_dsa_test_plan
     qinq_filter_test_plan
     qos_api_test_plan
     qos_meter_test_plan

From patchwork Mon Feb 6 04:01:45 2023 X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 123080 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 2/4] test_plans/pvp_vhost_async_multi_paths_performance_dsa: add new testplan Date: Mon, 6 Feb 2023 12:01:45 +0800 Message-Id: <20230206040145.3641733-1-weix.ling@intel.com>

Add the pvp_vhost_async_multi_paths_performance_dsa test plan to test PVP vhost/virtio multi-path async data-path performance using a DSA device with the DPDK and kernel drivers. 
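As background for the throughput checks this plan performs at each frame size, it helps to compare measured rates against the theoretical Ethernet line rate. The sketch below is not part of the patch; `linerate_mpps` is a name invented here, and the 20-byte per-frame overhead (7-byte preamble, 1-byte SFD, 12-byte inter-frame gap) is standard Ethernet framing.

```python
def linerate_mpps(frame_size: int, link_speed_gbps: float) -> float:
    """Theoretical max packet rate (Mpps) for an Ethernet link.

    frame_size includes the 4-byte CRC; 20 bytes of preamble, SFD and
    inter-frame gap are added per frame on the wire.
    """
    wire_bits_per_frame = (frame_size + 20) * 8
    return link_speed_gbps * 1e9 / wire_bits_per_frame / 1e6

# 64-byte frames on a 10G link saturate at roughly 14.88 Mpps,
# the usual reference point for small-packet PVP results.
print(round(linerate_mpps(64, 10), 2))  # -> 14.88
```

Reported throughput for each size in [64, 128, 256, 512, 1024, 1518] is normally expressed as a percentage of this ceiling.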
Signed-off-by: Wei Ling --- ..._multi_paths_performance_dsa_test_plan.rst | 586 ++++++++++++++++++ 1 file changed, 586 insertions(+) create mode 100644 test_plans/pvp_vhost_async_multi_paths_performance_dsa_test_plan.rst diff --git a/test_plans/pvp_vhost_async_multi_paths_performance_dsa_test_plan.rst b/test_plans/pvp_vhost_async_multi_paths_performance_dsa_test_plan.rst new file mode 100644 index 00000000..e68b2375 --- /dev/null +++ b/test_plans/pvp_vhost_async_multi_paths_performance_dsa_test_plan.rst @@ -0,0 +1,586 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2019 Intel Corporation
+
+==================================================================
+PVP vhost/virtio multi-paths async data-path performance test plan
+==================================================================
+
+Benchmark PVP multi-path performance with 10 tx/rx paths when vhost uses asynchronous operation. The covered paths are mergeable,
+non-mergeable, vectorized_rx, inorder mergeable, inorder non-mergeable, packed ring mergeable, packed ring non-mergeable,
+packed ring inorder mergeable, packed ring inorder non-mergeable and the virtio 1.1 vectorized path. One core is given to vhost
+and one to virtio. The packed ring vectorized path needs:
+
+    AVX512F and required extensions are supported by compiler and host
+    VERSION_1 and IN_ORDER features are negotiated
+    mergeable feature is not negotiated
+    LRO offloading is disabled
+
+The split ring vectorized rx path needs:
+    mergeable and IN_ORDER features are not negotiated
+    LRO, chksum and vlan strip offloadings are disabled
+
+Test flow
+=========
+
+TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
+
+Test Case 1: pvp vhost async test with split ring inorder mergeable path using IDXD kernel driver
+-------------------------------------------------------------------------------------------------
+
+1. 
Bind one NIC port to vfio-pci and one DSA device to idxd, then generate 2 WQs with the below commands::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0
+    ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,mrg_rxbuf=1,in_order=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 2: pvp vhost async test with split ring inorder non-mergeable path using IDXD kernel driver
+-----------------------------------------------------------------------------------------------------
+
+1. Bind one NIC port to vfio-pci and one DSA device to idxd, then generate 2 WQs with the below commands::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0
+    ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 -a 0000:27:00.0 -a wq0.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. 
Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 3: pvp vhost async test with split ring mergeable path using IDXD kernel driver
+-----------------------------------------------------------------------------------------
+
+1. Bind one NIC port to vfio-pci and one DSA device to idxd, then generate 2 WQs with the below commands::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0
+    ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+
+2. Launch vhost with the below command::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 -a 0000:27:00.0 -a wq0.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. 
Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 4: pvp vhost async test with split ring non-mergeable path using IDXD kernel driver
+---------------------------------------------------------------------------------------------
+
+1. Bind one NIC port to vfio-pci and one DSA device to idxd, then generate 2 WQs with the below commands::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0
+    ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 -a 0000:27:00.0 -a wq0.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
+    -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 5: pvp vhost async test with split ring vectorized path using IDXD kernel driver
+------------------------------------------------------------------------------------------
+
+1. Bind one NIC port to vfio-pci and one DSA device to idxd, then generate 2 WQs with the below commands::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0
+    ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+
+2. 
Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 -a 0000:27:00.0 -a wq0.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 6: pvp vhost async test with packed ring inorder mergeable path using IDXD kernel driver
+--------------------------------------------------------------------------------------------------
+
+1. Bind one NIC port to vfio-pci and one DSA device to idxd, then generate 2 WQs with the below commands::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0
+    ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a wq0.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. 
Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 7: pvp vhost async test with packed ring inorder non-mergeable path using IDXD kernel driver
+------------------------------------------------------------------------------------------------------
+
+1. Bind one NIC port to vfio-pci and one DSA device to idxd, then generate 2 WQs with the below commands::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0
+    ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a wq0.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1 \
+    -- -i --rx-offloads=0x10 --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 8: pvp vhost async test with packed ring mergeable path using IDXD kernel driver
+------------------------------------------------------------------------------------------
+
+1. Bind one NIC port to vfio-pci and one DSA device to idxd, then generate 2 WQs with the below commands::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0
+    ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+
+2. 
Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a wq0.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 9: pvp vhost async test with packed ring non-mergeable path using IDXD kernel driver
+----------------------------------------------------------------------------------------------
+
+1. Bind one NIC port to vfio-pci and one DSA device to idxd, then generate 2 WQs with the below commands::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0
+    ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a wq0.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=0 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. 
Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 10: pvp vhost async test with packed ring vectorized path using IDXD kernel driver
+--------------------------------------------------------------------------------------------
+
+1. Bind one NIC port to vfio-pci and one DSA device to idxd, then generate 2 WQs with the below commands::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0
+    ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a wq0.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 11: pvp vhost async test with split ring inorder mergeable path using vfio-pci driver
+-----------------------------------------------------------------------------------------------
+
+1. Bind one NIC port and one DSA device to vfio-pci::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0 0000:6f:01.0
+
+2. 
Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a 0000:6f:01.0,max_queues=2 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@0000:6f:01.0-q0;rxq0@0000:6f:01.0-q1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,mrg_rxbuf=1,in_order=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 12: pvp vhost async test with split ring inorder non-mergeable path using vfio-pci driver
+---------------------------------------------------------------------------------------------------
+
+1. Bind one NIC port and one DSA device to vfio-pci::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0 0000:6f:01.0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a 0000:6f:01.0,max_queues=2 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@0000:6f:01.0-q0;rxq0@0000:6f:01.0-q1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. 
Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 13: pvp vhost async test with split ring mergeable path using vfio-pci driver
+---------------------------------------------------------------------------------------
+
+1. Bind one NIC port and one DSA device to vfio-pci::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0 0000:6f:01.0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a 0000:6f:01.0,max_queues=2 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@0000:6f:01.0-q0;rxq0@0000:6f:01.0-q1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 14: pvp vhost async test with split ring non-mergeable path using vfio-pci driver
+-------------------------------------------------------------------------------------------
+
+1. Bind one NIC port and one DSA device to vfio-pci::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0 0000:6f:01.0
+
+2. 
Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a 0000:6f:01.0,max_queues=2 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@0000:6f:01.0-q0;rxq0@0000:6f:01.0-q1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
+    -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 15: pvp vhost async test with split ring vectorized_rx path using vfio-pci driver
+-------------------------------------------------------------------------------------------
+
+1. Bind one NIC port and one DSA device to vfio-pci::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0 0000:6f:01.0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a 0000:6f:01.0,max_queues=2 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@0000:6f:01.0-q0;rxq0@0000:6f:01.0-q1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. 
Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 16: pvp vhost async test with packed ring inorder mergeable path using vfio-pci driver
+------------------------------------------------------------------------------------------------
+
+1. Bind one NIC port and one DSA device to vfio-pci::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0 0000:6f:01.0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a 0000:6f:01.0,max_queues=2 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@0000:6f:01.0-q0;rxq0@0000:6f:01.0-q1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 17: pvp vhost async test with packed ring inorder non-mergeable path using vfio-pci driver
+----------------------------------------------------------------------------------------------------
+
+1. Bind one NIC port and one DSA device to vfio-pci::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0 0000:6f:01.0
+
+2. 
Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a 0000:6f:01.0,max_queues=2 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@0000:6f:01.0-q0;rxq0@0000:6f:01.0-q1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1 \
+    -- -i --rx-offloads=0x10 --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 18: pvp vhost async test with packed ring mergeable path using vfio-pci driver
+----------------------------------------------------------------------------------------
+
+1. Bind one NIC port and one DSA device to vfio-pci::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0 0000:6f:01.0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a 0000:6f:01.0,max_queues=2 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@0000:6f:01.0-q0;rxq0@0000:6f:01.0-q1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. 
Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 19: pvp vhost async test with packed ring non-mergeable path using vfio-pci driver
+--------------------------------------------------------------------------------------------
+
+1. Bind one NIC port and one DSA device to vfio-pci::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0 0000:6f:01.0
+
+2. Launch vhost with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a 0000:6f:01.0,max_queues=2 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@0000:6f:01.0-q0;rxq0@0000:6f:01.0-q1]' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+3. Launch virtio-user with the below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=0 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
+
+    testpmd>show port stats all
+
+Test Case 20: pvp vhost async test with packed ring vectorized path using vfio-pci driver
+-----------------------------------------------------------------------------------------
+
+1. Bind one NIC port and one DSA device to vfio-pci::
+
+    ./usertools/dpdk-devbind.py -b vfio-pci 0000:27:00.0 0000:6f:01.0
+
+2. 
Launch vhost by below command:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 -a 0000:27:00.0 -a 0000:6f:01.0,max_queues=2 \ + --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@0000:6f:01.0-q0;rxq0@0000:6f:01.0-q1]' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +3. Launch virtio-user by below command:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 + testpmd>set fwd mac + testpmd>start + +4. Send packet with packet generator with different packet size,includes [64, 128, 256, 512, 1024, 1518], check the throughput with below command:: + + testpmd>show port stats all \ No newline at end of file From patchwork Mon Feb 6 04:01:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 123081 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4ECBA41BE4; Mon, 6 Feb 2023 05:14:03 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4586641611; Mon, 6 Feb 2023 05:14:03 +0100 (CET) Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by mails.dpdk.org (Postfix) with ESMTP id D84F540A7D for ; Mon, 6 Feb 2023 05:14:01 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675656842; x=1707192842; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=vNRZDgPUGOYGcAXvAnvq9eeE8olg5zwCUN0Yrwo2Pzs=; b=BiAYbvCpPFh/P1TTTdQIrAp5UmXHUIdsSu3gQdzbZudoBNs1vFVA+0yY 
From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 3/4] tests/pvp_vhost_async_multi_paths_performance_dsa: add new testsuite Date: Mon, 6 Feb 2023 12:01:58 +0800 Message-Id: <20230206040158.3641794-1-weix.ling@intel.com> Add pvp_vhost_async_multi_paths_performance_dsa testsuite to test pvp vhost/virtio multi-paths async data-path performance using DSA devices with the DPDK and kernel drivers. 
Signed-off-by: Wei Ling --- ...vhost_async_multi_paths_performance_dsa.py | 923 ++++++++++++++++++ 1 file changed, 923 insertions(+) create mode 100644 tests/TestSuite_pvp_vhost_async_multi_paths_performance_dsa.py diff --git a/tests/TestSuite_pvp_vhost_async_multi_paths_performance_dsa.py b/tests/TestSuite_pvp_vhost_async_multi_paths_performance_dsa.py new file mode 100644 index 00000000..51dfdb76 --- /dev/null +++ b/tests/TestSuite_pvp_vhost_async_multi_paths_performance_dsa.py @@ -0,0 +1,923 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2019 Intel Corporation +# + +import json +import os +from copy import deepcopy + +import framework.rst as rst +from framework.packet import Packet +from framework.pktgen import PacketGeneratorHelper +from framework.pmd_output import PmdOutput +from framework.settings import UPDATE_EXPECTED, load_global_setting +from framework.test_case import TestCase + +from .virtio_common import dsa_common as DC + + +class TestPVPVhostAsyncMultiPathsPerformanceDsa(TestCase): + def set_up_all(self): + """ + Run at the start of each test suite. 
+ """ + self.vhost_user = self.dut.new_session(suite="vhost-user") + self.virtio_user = self.dut.new_session(suite="virtio-user") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.virtio_user_pmd = PmdOutput(self.dut, self.virtio_user) + self.dut_ports = self.dut.get_ports() + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.core_list = self.dut.get_core_list(config="all", socket=self.ports_socket) + self.vhost_user_core = self.core_list[0:2] + self.virtio_user_core = self.core_list[2:4] + self.frame_sizes = [64, 128, 256, 512, 1024, 1518] + self.number_of_ports = 1 + self.testpmd_path = self.dut.apps_name["test-pmd"] + self.testpmd_name = self.testpmd_path.split("/")[-1] + self.out_path = "/tmp" + out = self.tester.send_expect("ls -d %s" % self.out_path, "# ") + if "No such file or directory" in out: + self.tester.send_expect("mkdir -p %s" % self.out_path, "# ") + self.base_dir = self.dut.base_dir.replace("~", "/root") + # create an instance to set stream field setting + self.pktgen_helper = PacketGeneratorHelper() + self.save_result_flag = True + self.json_obj = {} + self.DC = DC(self) + + def set_up(self): + """ + Run before each test case. 
+ """ + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + # Prepare the result table + self.table_header = ["Frame"] + self.table_header.append("Mode/RXD-TXD") + self.table_header.append("Mpps") + self.table_header.append("% linerate") + self.result_table_create(self.table_header) + + self.test_parameters = self.get_suite_cfg()["test_parameters"] + # test parameters include: frames size, descriptor numbers + self.test_parameters = self.get_suite_cfg()["test_parameters"] + + # traffic duraion in second + self.test_duration = self.get_suite_cfg()["test_duration"] + + # initilize throughput attribution + # {'$framesize':{"$nb_desc": 'throughput'} + self.throughput = {} + + # Accepted tolerance in Mpps + self.gap = self.get_suite_cfg()["accepted_tolerance"] + self.test_result = {} + self.nb_desc = self.test_parameters[64][0] + + def start_vhost_user_testpmd(self, eal_param, param, ports, port_options=""): + """ + start testpmd on vhost-user + """ + if port_options: + self.vhost_user_pmd.start_testpmd( + cores=self.vhost_user_core, + eal_param=eal_param, + param=param, + ports=ports, + port_options=port_options, + prefix="vhost-user", + fixed_prefix=True, + ) + else: + self.vhost_user_pmd.start_testpmd( + cores=self.vhost_user_core, + eal_param=eal_param, + param=param, + ports=ports, + prefix="vhost-user", + fixed_prefix=True, + ) + self.vhost_user_pmd.execute_cmd("set fwd mac") + self.vhost_user_pmd.execute_cmd("start") + + def start_virtio_user_testpmd(self, eal_param, param): + """ + start testpmd on virtio-user + """ + self.virtio_user_pmd.start_testpmd( + cores=self.virtio_user_core, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="virtio-user", + fixed_prefix=True, + ) + self.virtio_user_pmd.execute_cmd("set fwd mac") + self.virtio_user_pmd.execute_cmd("start") + + def send_and_verify(self, 
case_info): + """ + Send packet with packet generator and verify + """ + for frame_size in self.frame_sizes: + tgen_input = [] + self.throughput[frame_size] = dict() + self.logger.info( + "Test running at parameters: " + + "framesize: {}, rxd/txd: {}".format(frame_size, self.nb_desc) + ) + rx_port = self.tester.get_local_port(self.dut_ports[0]) + tx_port = self.tester.get_local_port(self.dut_ports[0]) + destination_mac = self.dut.get_mac_address(self.dut_ports[0]) + pkt = Packet(pkt_type="TCP", pkt_len=frame_size) + pkt.config_layer("ether", {"dst": "%s" % destination_mac}) + pkt.save_pcapfile( + self.tester, "%s/multi_path_%s.pcap" % (self.out_path, frame_size) + ) + tgen_input.append( + ( + tx_port, + rx_port, + "%s/multi_path_%s.pcap" % (self.out_path, frame_size), + ) + ) + self.tester.pktgen.clear_streams() + streams = self.pktgen_helper.prepare_stream_from_tginput( + tgen_input, 100, None, self.tester.pktgen + ) + # set traffic option + traffic_opt = {"delay": 30, "duration": 30} + _, pps = self.tester.pktgen.measure_throughput( + stream_ids=streams, options=traffic_opt + ) + Mpps = pps / 1000000.0 + self.throughput[frame_size][self.nb_desc] = Mpps + linerate = ( + Mpps + * 100 + / float(self.wirespeed(self.nic, frame_size, self.number_of_ports)) + ) + results_row = [frame_size] + results_row.append(case_info) + results_row.append(Mpps) + results_row.append(linerate) + self.result_table_add(results_row) + + def handle_expected(self): + """ + Update expected numbers to configurate file: $DTS_CFG_FOLDER/$suite_name.cfg + """ + if load_global_setting(UPDATE_EXPECTED) == "yes": + for frame_size in self.test_parameters.keys(): + for nb_desc in self.test_parameters[frame_size]: + self.expected_throughput[frame_size][nb_desc] = round( + self.throughput[frame_size][nb_desc], 3 + ) + + def handle_results(self): + """ + results handled process: + 1, save to self.test_results + 2, create test results table + 3, save to json file for Open Lab + """ + header = 
self.table_header + header.append("Expected Throughput") + header.append("Throughput Difference") + for frame_size in self.test_parameters.keys(): + wirespeed = self.wirespeed(self.nic, frame_size, self.number_of_ports) + ret_datas = {} + for nb_desc in self.test_parameters[frame_size]: + ret_data = {} + ret_data[header[0]] = frame_size + ret_data[header[1]] = nb_desc + ret_data[header[2]] = "{:.3f} Mpps".format( + self.throughput[frame_size][nb_desc] + ) + ret_data[header[3]] = "{:.3f}%".format( + self.throughput[frame_size][nb_desc] * 100 / wirespeed + ) + ret_data[header[4]] = "{:.3f} Mpps".format( + self.expected_throughput[frame_size][nb_desc] + ) + ret_data[header[5]] = "{:.3f} Mpps".format( + self.throughput[frame_size][nb_desc] + - self.expected_throughput[frame_size][nb_desc] + ) + ret_datas[nb_desc] = deepcopy(ret_data) + self.test_result[frame_size] = deepcopy(ret_datas) + # Create test results table + self.result_table_create(header) + for frame_size in self.test_parameters.keys(): + for nb_desc in self.test_parameters[frame_size]: + table_row = list() + for i in range(len(header)): + table_row.append(self.test_result[frame_size][nb_desc][header[i]]) + self.result_table_add(table_row) + # present test results to screen + self.result_table_print() + # save test results as a file + if self.save_result_flag: + self.save_result(self.test_result) + + def save_result(self, data): + """ + Saves the test results as a separated file named with + self.nic+_perf_virtio_user_pvp.json in output folder + if self.save_result_flag is True + """ + case_name = self.running_case + self.json_obj[case_name] = list() + status_result = [] + for frame_size in self.test_parameters.keys(): + for nb_desc in self.test_parameters[frame_size]: + row_in = self.test_result[frame_size][nb_desc] + row_dict0 = dict() + row_dict0["performance"] = list() + row_dict0["parameters"] = list() + row_dict0["parameters"] = list() + result_throughput = float(row_in["Mpps"].split()[0]) + 
expected_throughput = float(row_in["Expected Throughput"].split()[0]) + # delta value and accepted tolerance in percentage + delta = result_throughput - expected_throughput + gap = expected_throughput * -self.gap * 0.01 + delta = float(delta) + gap = float(gap) + self.logger.info("Accept tolerance are (Mpps) %f" % gap) + self.logger.info("Throughput Difference are (Mpps) %f" % delta) + if result_throughput > expected_throughput + gap: + row_dict0["status"] = "PASS" + else: + row_dict0["status"] = "FAIL" + row_dict1 = dict( + name="Throughput", value=result_throughput, unit="Mpps", delta=delta + ) + row_dict2 = dict( + name="Txd/Rxd", value=row_in["Mode/RXD-TXD"], unit="descriptor" + ) + row_dict3 = dict(name="frame_size", value=row_in["Frame"], unit="bytes") + row_dict0["performance"].append(row_dict1) + row_dict0["parameters"].append(row_dict2) + row_dict0["parameters"].append(row_dict3) + self.json_obj[case_name].append(row_dict0) + status_result.append(row_dict0["status"]) + with open( + os.path.join( + rst.path2Result, "{0:s}_{1}.json".format(self.nic, self.suite_name) + ), + "w", + ) as fp: + json.dump(self.json_obj, fp) + self.verify("FAIL" not in status_result, "Exceeded Gap") + + def test_perf_pvp_split_ring_inorder_mergeable_idxd(self): + """ + Test Case 1: pvp vhost async test with split ring inorder mergeable path using IDXD kernel driver + """ + self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, param=vhost_param, ports=ports + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=1" + virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024" + 
self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "split_ring_inorder_mergeable_idxd" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_split_ring_inorder_non_mergeable_idxd(self): + """ + Test Case 2: pvp vhost async test with split ring inorder non-mergeable path using IDXD kernel driver + """ + self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, param=vhost_param, ports=ports + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0," + virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "split_ring_inorder_non_mergeable_idxd" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_split_ring_mergeable_idxd(self): + """ + Test Case 3: pvp vhost async test with split ring mergeable path using IDXD kernel driver + """ + self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + 
self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, param=vhost_param, ports=ports + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1," + virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "split_ring_mergeable_idxd" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_split_ring_non_mergeable_idxd(self): + """ + Test Case 4: pvp vhost async test with split ring non-mergeable path using IDXD kernel driver + """ + self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, param=vhost_param, ports=ports + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0," + virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "split_ring_non_mergeable_idxd" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_split_ring_vectorized_idxd(self): + """ + Test Case 5: pvp vhost async test with split ring vectorized path using IDXD kernel driver + """ + 
self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, param=vhost_param, ports=ports + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1" + virtio_param = "--nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "split_ring_vectorized_idxd" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_packed_ring_inorder_mergeable_idxd(self): + """ + Test Case 6: pvp vhost async test with packed ring inorder mergeable path using IDXD kernel driver + """ + self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, param=vhost_param, ports=ports + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=1" + virtio_param = "--nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "packed_ring_inorder_mergeable_idxd" + self.send_and_verify(case_info) + self.result_table_print() + 
self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_packed_ring_inorder_non_mergeable_idxd(self): + """ + Test Case 7: pvp vhost async test with packed ring inorder non-mergeable path using IDXD kernel driver + """ + self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, param=vhost_param, ports=ports + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=0" + virtio_param = "--nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "packed_ring_inorder_non_mergeable_idxd" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_packed_ring_mergeable_idxd(self): + """ + Test Case 8: pvp vhost async test with packed ring mergeable path using IDXD kernel driver + """ + self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, param=vhost_param, ports=ports + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=0,mrg_rxbuf=1" + virtio_param = "--nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, 
param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "packed_ring_mergeable_idxd" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_packed_ring_non_mergeable_idxd(self): + """ + Test Case 9: pvp vhost async test with packed ring non-mergeable path using IDXD kernel driver + """ + self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, param=vhost_param, ports=ports + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=0,mrg_rxbuf=0" + virtio_param = "--nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "packed_ring_non_mergeable_idxd" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_packed_ring_vectorized_idxd(self): + """ + Test Case 10: pvp vhost async test with packed ring vectorized path using IDXD kernel driver + """ + self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, param=vhost_param, ports=ports + ) + virtio_eal_param = 
"--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=0,vectorized=1" + virtio_param = "--nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "packed_ring_vectorized_idxd" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_split_ring_inorder_mergeable_vfio_pci(self): + """ + Test Case 11: pvp vhost async test with split ring inorder mergeable path using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = "txq0@%s-q0;" "rxq0@%s-q1" % (self.use_dsa_list[0], self.use_dsa_list[0]) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options=port_options, + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=1" + virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "split_ring_inorder_mergeable_vfio_pci" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_split_ring_inorder_non_mergeable_vfio_pci(self): + """ + Test Case 
12: pvp vhost async test with split ring inorder non-mergeable path using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = "txq0@%s-q0;" "rxq0@%s-q1" % (self.use_dsa_list[0], self.use_dsa_list[0]) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options=port_options, + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0" + virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "split_ring_inorder_non_mergeable_vfio_pci" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_split_ring_mergeable_vfio_pci(self): + """ + Test Case 13: pvp vhost async test with split ring mergeable path using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = "txq0@%s-q0;" "rxq0@%s-q1" % (self.use_dsa_list[0], self.use_dsa_list[0]) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_user_testpmd( + 
eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options=port_options, + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1" + virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "split_ring_mergeable_vfio_pci" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_split_ring_non_mergeable_vfio_pci(self): + """ + Test Case 14: pvp vhost async test with split ring non-mergeable path using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = "txq0@%s-q0;" "rxq0@%s-q1" % (self.use_dsa_list[0], self.use_dsa_list[0]) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options=port_options, + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0" + virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "split_ring_non_mergeable_vfio_pci" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + 
self.handle_expected() + self.handle_results() + + def test_perf_pvp_split_ring_vectorized_vfio_pci(self): + """ + Test Case 15: pvp vhost async test with split ring vectorized_rx path using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = "txq0@%s-q0;" "rxq0@%s-q1" % (self.use_dsa_list[0], self.use_dsa_list[0]) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_user_testpmd( + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options=port_options, + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1" + virtio_param = "--nb-cores=1 --txd=1024 --rxd=1024" + self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param) + + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.running_case + ] + case_info = "split_ring_vectorized_vfio_pci" + self.send_and_verify(case_info) + self.result_table_print() + self.quit_all_testpmd() + self.handle_expected() + self.handle_results() + + def test_perf_pvp_packed_ring_inorder_mergeable_vfio_pci(self): + """ + Test Case 16: pvp vhost async test with packed ring inorder mergeable path using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + dmas = "txq0@%s-q0;" "rxq0@%s-q1" % (self.use_dsa_list[0], self.use_dsa_list[0]) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024" + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) 
+        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        self.start_vhost_user_testpmd(
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            port_options=port_options,
+        )
+        virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=1"
+        virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024"
+        self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param)
+
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.running_case
+        ]
+        case_info = "packed_ring_inorder_mergeable_vfio_pci"
+        self.send_and_verify(case_info)
+        self.result_table_print()
+        self.quit_all_testpmd()
+        self.handle_expected()
+        self.handle_results()
+
+    def test_perf_pvp_packed_ring_inorder_non_mergeable_vfio_pci(self):
+        """
+        Test Case 17: pvp vhost async test with packed ring inorder non-mergeable path using vfio-pci driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket
+        )
+        dmas = "txq0@%s-q0;" "rxq0@%s-q1" % (self.use_dsa_list[0], self.use_dsa_list[0])
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas
+        )
+        vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024"
+        ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]]
+        ports.append(self.use_dsa_list[0])
+        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        self.start_vhost_user_testpmd(
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            port_options=port_options,
+        )
+        virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=0"
+        virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024"
+        self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param)
+
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.running_case
+        ]
+        case_info = "packed_ring_inorder_non_mergeable_vfio_pci"
+        self.send_and_verify(case_info)
+        self.result_table_print()
+        self.quit_all_testpmd()
+        self.handle_expected()
+        self.handle_results()
+
+    def test_perf_pvp_packed_ring_mergeable_vfio_pci(self):
+        """
+        Test Case 18: pvp vhost async test with packed ring mergeable path using vfio-pci driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket
+        )
+        dmas = "txq0@%s-q0;" "rxq0@%s-q1" % (self.use_dsa_list[0], self.use_dsa_list[0])
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas
+        )
+        vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024"
+        ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]]
+        ports.append(self.use_dsa_list[0])
+        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        self.start_vhost_user_testpmd(
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            port_options=port_options,
+        )
+        virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=0,mrg_rxbuf=1"
+        virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024"
+        self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param)
+
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.running_case
+        ]
+        case_info = "packed_ring_mergeable_vfio_pci"
+        self.send_and_verify(case_info)
+        self.result_table_print()
+        self.quit_all_testpmd()
+        self.handle_expected()
+        self.handle_results()
+
+    def test_perf_pvp_packed_ring_non_mergeable_vfio_pci(self):
+        """
+        Test Case 19: pvp vhost async test with packed ring non-mergeable path using vfio-pci driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket
+        )
+        dmas = "txq0@%s-q0;" "rxq0@%s-q1" % (self.use_dsa_list[0], self.use_dsa_list[0])
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas
+        )
+        vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024"
+        ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]]
+        ports.append(self.use_dsa_list[0])
+        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        self.start_vhost_user_testpmd(
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            port_options=port_options,
+        )
+        virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=0,mrg_rxbuf=0"
+        virtio_param = "--tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024"
+        self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param)
+
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.running_case
+        ]
+        case_info = "packed_ring_non_mergeable_vfio_pci"
+        self.send_and_verify(case_info)
+        self.result_table_print()
+        self.quit_all_testpmd()
+        self.handle_expected()
+        self.handle_results()
+
+    def test_perf_pvp_packed_ring_vectorized_vfio_pci(self):
+        """
+        Test Case 20: pvp vhost async test with packed ring vectorized path using vfio-pci driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket
+        )
+        dmas = "txq0@%s-q0;" "rxq0@%s-q1" % (self.use_dsa_list[0], self.use_dsa_list[0])
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[%s]'" % dmas
+        )
+        vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024"
+        ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]]
+        ports.append(self.use_dsa_list[0])
+        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        self.start_vhost_user_testpmd(
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+            port_options=port_options,
+        )
+        virtio_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=0,vectorized=1"
+        virtio_param = "--nb-cores=1 --txd=1024 --rxd=1024"
+        self.start_virtio_user_testpmd(eal_param=virtio_eal_param, param=virtio_param)
+
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.running_case
+        ]
+        case_info = "packed_ring_vectorized_vfio_pci"
+        self.send_and_verify(case_info)
+        self.result_table_print()
+        self.quit_all_testpmd()
+        self.handle_expected()
+        self.handle_results()
+
+    def quit_all_testpmd(self):
+        self.virtio_user_pmd.quit()
+        self.vhost_user_pmd.quit()
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
+        self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#")
+        self.DC.reset_all_work_queue()
+        self.DC.bind_all_dsa_to_kernel()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.dut.close_session(self.virtio_user)
+        self.dut.close_session(self.vhost_user)

From patchwork Mon Feb 6 04:02:08 2023
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 123082
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 4/4] conf/pvp_vhost_async_multi_paths_performance_dsa: add testsuite config file
Date: Mon, 6 Feb 2023 12:02:08 +0800
Message-Id: <20230206040208.3641854-1-weix.ling@intel.com>

Add conf/pvp_vhost_async_multi_paths_performance_dsa.cfg.
Signed-off-by: Wei Ling
---
 ...host_async_multi_paths_performance_dsa.cfg | 166 ++++++++++++++++++
 1 file changed, 166 insertions(+)
 create mode 100644 conf/pvp_vhost_async_multi_paths_performance_dsa.cfg

diff --git a/conf/pvp_vhost_async_multi_paths_performance_dsa.cfg b/conf/pvp_vhost_async_multi_paths_performance_dsa.cfg
new file mode 100644
index 00000000..10c82cde
--- /dev/null
+++ b/conf/pvp_vhost_async_multi_paths_performance_dsa.cfg
@@ -0,0 +1,166 @@
+[suite]
+update_expected = True
+test_parameters = {64: [1024], 128: [1024], 256: [1024], 512: [1024], 1024: [1024], 1518: [1024]}
+test_duration = 60
+accepted_tolerance = 2
+expected_throughput = {
+    'test_perf_pvp_split_ring_inorder_mergeable_idxd': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_split_ring_inorder_non_mergeable_idxd': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_split_ring_mergeable_idxd': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_split_ring_non_mergeable_idxd': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_split_ring_vectorized_rx_idxd': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_packed_ring_inorder_mergeable_idxd': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_packed_ring_inorder_non_mergeable_idxd': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_packed_ring_mergeable_idxd': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_packed_ring_non_mergeable_idxd': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_packed_ring_vectorized_idxd': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_split_ring_inorder_mergeable_vfio_pci': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_split_ring_inorder_non_mergeable_vfio_pci': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_split_ring_mergeable_vfio_pci': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_split_ring_non_mergeable_vfio_pci': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_split_ring_vectorized_rx_vfio_pci': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_packed_ring_inorder_mergeable_vfio_pci': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_packed_ring_inorder_non_mergeable_vfio_pci': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_packed_ring_mergeable_vfio_pci': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_packed_ring_non_mergeable_vfio_pci': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},
+    },
+    'test_perf_pvp_packed_ring_vectorized_vfio_pci': {
+        64: {1024: 0.00},
+        128: {1024: 0.00},
+        256: {1024: 0.00},
+        512: {1024: 0.00},
+        1024: {1024: 0.00},
+        1518: {1024: 0.00},}}
+
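
For reviewers unfamiliar with the config layout above: the suite indexes this nested dict as `get_suite_cfg()["expected_throughput"][self.running_case]` and then by frame size and descriptor ring size. The sketch below illustrates that lookup under stated assumptions — `lookup_expected` is a hypothetical helper, not part of the DTS framework, and only two frame sizes of one case are reproduced here.

```python
# Hypothetical sketch of how a DTS-style suite reads a baseline from the
# config's nested layout: {case_name: {frame_size: {ring_size: mpps}}}.
# The values are 0.00 placeholders until a run with update_expected = True
# records measured baselines.

expected_throughput = {
    "test_perf_pvp_packed_ring_vectorized_vfio_pci": {
        64: {1024: 0.00},
        128: {1024: 0.00},
    },
}


def lookup_expected(cfg, running_case, frame_size, ring_size):
    """Return the configured baseline (Mpps), mirroring the suite's
    get_suite_cfg()["expected_throughput"][self.running_case] access."""
    return cfg[running_case][frame_size][ring_size]


if __name__ == "__main__":
    mpps = lookup_expected(
        expected_throughput,
        "test_perf_pvp_packed_ring_vectorized_vfio_pci",
        64,
        1024,
    )
    print(mpps)  # 0.0 for the placeholder config above
```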