From patchwork Mon Feb 6 04:49:06 2023
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 123094
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 1/4] test_plans/index: add pvp_vhost_async_virtio_pmd_perf_dsa
Date: Mon, 6 Feb 2023 12:49:06 +0800
Message-Id: <20230206044906.3643940-1-weix.ling@intel.com>

Add pvp_vhost_async_virtio_pmd_perf_dsa_test_plan in testplans/index.rst.

Signed-off-by: Wei Ling
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index ea13fc8e..03eae731 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -132,6 +132,7 @@ The following are the test plans for the DPDK DTS automated test system.
     pvp_multi_paths_performance_test_plan
     pvp_multi_paths_vhost_single_core_performance_test_plan
     pvp_multi_paths_virtio_single_core_performance_test_plan
+    pvp_vhost_async_virtio_pmd_perf_dsa_test_plan
     qinq_filter_test_plan
     qos_api_test_plan
     qos_meter_test_plan

From patchwork Mon Feb 6 04:49:15 2023
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 123095
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 2/4] test_plans/pvp_vhost_async_virtio_pmd_perf_dsa: add new testplan
Date: Mon, 6 Feb 2023 12:49:15 +0800
Message-Id: <20230206044915.3644000-1-weix.ling@intel.com>

Add the pvp_vhost_async_virtio_pmd_perf_dsa test plan to benchmark PVP
vhost/virtio-pmd async data-path performance with DSA devices bound to
either the DPDK (vfio-pci) or kernel (idxd) driver.

Signed-off-by: Wei Ling
---
 ...st_async_virtio_pmd_perf_dsa_test_plan.rst | 1120 +++++++++++++++++
 1 file changed, 1120 insertions(+)
 create mode 100644 test_plans/pvp_vhost_async_virtio_pmd_perf_dsa_test_plan.rst

diff --git a/test_plans/pvp_vhost_async_virtio_pmd_perf_dsa_test_plan.rst b/test_plans/pvp_vhost_async_virtio_pmd_perf_dsa_test_plan.rst
new file mode 100644
index 00000000..4f03043a
--- /dev/null
+++ b/test_plans/pvp_vhost_async_virtio_pmd_perf_dsa_test_plan.rst
@@ -0,0 +1,1120 @@
+..
SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2019 Intel Corporation + +=================================================== +PVP vhost/virtio-pmd async data-path perf test plan +=================================================== + +Description +=========== + +Benchmark pvp qemu test with vhost async data-path. + +Test flow +========= + +TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG + +Test Case 1: pvp split ring vhost async test with 1core 1queue using idxd kernel driver +--------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 2wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -n 8 -l 9-10 --file-prefix=dpdk_vhost --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'net_vhost,iface=./vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \ + --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=2048 --rxd=2048 --forward-mode=mac -a + +3. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +4. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +5. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x3 -n 8 -- -i --nb-cores=1 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +6. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 2: pvp split ring vhost async test with 1core 2queue using idxd kernel driver +--------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 4wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-3 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,dmas=[txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3]' + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. 
Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x3 -n 8 -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 3: pvp split ring vhost async test with 2core 2queue using idxd kernel driver +--------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 4wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-4 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,dmas=[txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3]' \ + -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x7 -n 8 -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. 
Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 4: pvp split ring vhost async test with 2core 4queue using idxd kernel driver +--------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 8wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-4 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=4,dmas=[txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3;txq2@wq0.4;rxq2@wq0.5;txq3@wq0.6;rxq3@wq0.7]' + -- -i --nb-cores=2 --txq=4 --rxq=4 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=4 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=10,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x7 -n 8 -- -i --nb-cores=2 --txq=4 --rxq=4 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 5: pvp split ring vhost async test with 4core 4queue using idxd kernel driver +--------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 8wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-6 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=4,dmas=[txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3;txq2@wq0.4;rxq2@wq0.5;txq3@wq0.6;rxq3@wq0.7]' \ + -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=2048 --rxd=2048 --max-pkt-len=5200 --tx-offloads=0x00008000 --forward-mode=mac -a + +2. 
Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=4 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=10,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x1f -n 8 -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518,2048,4096], show throughput with below command:: + + testpmd>show port stats all + +Test Case 6: pvp split ring vhost async test with 4core 8queue using idxd kernel driver +--------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 8wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-6 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=8,dmas=[txq0@wq0.0;rxq0@wq0.0;txq1@wq0.1;rxq1@wq0.1;txq2@wq0.2;rxq2@wq0.2;txq3@wq0.3;rxq3@wq0.3;txq4@wq0.4;rxq4@wq0.4;txq5@wq0.5;rxq5@wq0.5;txq6@wq0.6;rxq6@wq0.6;txq7@wq0.7;rxq7@wq0.7]' + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=2048 --rxd=2048 --max-pkt-len=5200 --tx-offloads=0x00008000 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=18,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. 
On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x1f -n 8 -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518,2048,4096], show throughput with below command:: + + testpmd>show port stats all + +Test Case 7: pvp packed ring vhost async test with 1core 1queue using idxd kernel driver +---------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 2wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -n 8 -l 9-10 --file-prefix=dpdk_vhost --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'net_vhost,iface=./vhost-net,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \ + --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=2048 --rxd=2048 --forward-mode=mac -a + +3. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +4. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +5. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x3 -n 8 -- -i --nb-cores=1 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +6. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 8: pvp packed ring vhost async test with 1core 2queue using idxd kernel driver +---------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 4wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-3 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,dmas=[txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3]' + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. 
Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x3 -n 8 -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 9: pvp packed ring vhost async test with 2core 2queue using idxd kernel driver +---------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 4wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-4 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,dmas=[txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3]' \ + -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x7 -n 8 -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. 
Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 10: pvp packed ring vhost async test with 2core 4queue using idxd kernel driver +----------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 8wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-4 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=4,dmas=[txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3;txq2@wq0.4;rxq2@wq0.5;txq3@wq0.6;rxq3@wq0.7]' + -- -i --nb-cores=2 --txq=4 --rxq=4 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=4 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=10,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x7 -n 8 -- -i --nb-cores=2 --txq=4 --rxq=4 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 11: pvp packed ring vhost async test with 4core 4queue using idxd kernel driver +----------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 8wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-6 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=4,dmas=[txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3;txq2@wq0.4;rxq2@wq0.5;txq3@wq0.6;rxq3@wq0.7]' \ + -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=2048 --rxd=2048 --max-pkt-len=5200 --tx-offloads=0x00008000 --forward-mode=mac -a + +2. 
Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=4 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=10,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x1f -n 8 -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518,2048,4096], show throughput with below command:: + + testpmd>show port stats all + +Test Case 12: pvp packed ring vhost async test with 4core 8queue using idxd kernel driver +----------------------------------------------------------------------------------------- + +1. Bind one nic port to vfio-pci and 1 dsa device to idxd, then generate 8wq by below command:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 + ./usertools/dpdk-devbind.py -b idxd 0000:6a:01.0 + ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-6 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=8,dmas=[txq0@wq0.0;rxq0@wq0.0;txq1@wq0.1;rxq1@wq0.1;txq2@wq0.2;rxq2@wq0.2;txq3@wq0.3;rxq3@wq0.3;txq4@wq0.4;rxq4@wq0.4;txq5@wq0.5;rxq5@wq0.5;txq6@wq0.6;rxq6@wq0.6;txq7@wq0.7;rxq7@wq0.7]' + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=2048 --rxd=2048 --max-pkt-len=5200 --tx-offloads=0x00008000 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=18,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. 
On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x1f -n 8 -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518,2048,4096], show throughput with below command:: + + testpmd>show port stats all + +Test Case 13: pvp split ring vhost async test with 1core 1queue using vfio-pci driver +------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -n 8 -l 9-10 --file-prefix=dpdk_vhost --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=2 --socket-mem 8192 \ + --vdev 'net_vhost,iface=./vhost-net,queues=1,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1]' \ + --iova=va -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=2048 --rxd=2048 --forward-mode=mac -a + +3. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +4. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +5. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x3 -n 8 -- -i --nb-cores=1 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +6. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 14: pvp split ring vhost async test with 1core 2queue using vfio-pci driver +------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-3 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=4 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1;txq1@0000:6a:01.0-q2;rxq1@0000:6a:01.0-q3]' + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. 
Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x3 -n 8 -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 15: pvp split ring vhost async test with 2core 2queue using vfio-pci driver +------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-4 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=4 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1;txq1@0000:6a:01.0-q2;rxq1@0000:6a:01.0-q3]' + -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x7 -n 8 -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. 
Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 16: pvp split ring vhost async test with 2core 4queue using vfio-pci driver +------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-4 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=8 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=4,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1;txq1@0000:6a:01.0-q2;rxq1@0000:6a:01.0-q3;txq2@0000:6a:01.0-q4;rxq2@0000:6a:01.0-q5;txq3@0000:6a:01.0-q6;rxq3@0000:6a:01.0-q7]' + -- -i --nb-cores=2 --txq=4 --rxq=4 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=4 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=10,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x7 -n 8 -- -i --nb-cores=2 --txq=4 --rxq=4 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 17: pvp split ring vhost async test with 4core 4queue using vfio-pci driver +------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-6 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=8 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=4,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1;txq1@0000:6a:01.0-q2;rxq1@0000:6a:01.0-q3;txq2@0000:6a:01.0-q4;rxq2@0000:6a:01.0-q5;txq3@0000:6a:01.0-q6;rxq3@0000:6a:01.0-q7]' + -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=2048 --rxd=2048 --max-pkt-len=5200 --tx-offloads=0x00008000 --forward-mode=mac -a + +2. 
Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=4 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=10,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x1f -n 8 -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518,2048,4096], show throughput with below command:: + + testpmd>show port stats all + +Test Case 18: pvp split ring vhost async test with 4core 8queue using vfio-pci driver +------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-6 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=8 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=8,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q1;rxq1@0000:6a:01.0-q1;txq2@0000:6a:01.0-q2;rxq2@0000:6a:01.0-q2;txq3@0000:6a:01.0-q3;rxq3@0000:6a:01.0-q3;txq4@0000:6a:01.0-q4;rxq4@0000:6a:01.0-q4;txq5@0000:6a:01.0-q5;rxq5@0000:6a:01.0-q5;txq6@0000:6a:01.0-q6;rxq6@0000:6a:01.0-q6;txq7@0000:6a:01.0-q7;rxq7@0000:6a:01.0-q7]' + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=2048 --rxd=2048 --max-pkt-len=5200 --tx-offloads=0x00008000 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=18,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. 
On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x1f -n 8 -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518,2048,4096], show throughput with below command:: + + testpmd>show port stats all + +Test Case 19: pvp packed ring vhost async test with 1core 1queue using vfio-pci driver +-------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-3 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=2 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=1,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1]' + -- -i --nb-cores=1 --txq=1 --rxq=1 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x3 -n 8 -- -i --nb-cores=1 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 20: pvp packed ring vhost async test with 1core 2queue using vfio-pci driver +-------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-3 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=4 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1;txq1@0000:6a:01.0-q2;rxq1@0000:6a:01.0-q3]' + -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. 
Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x3 -n 8 -- -i --nb-cores=1 --txq=2 --rxq=2 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 21: pvp packed ring vhost async test with 2core 2queue using vfio-pci driver +-------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-4 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=4 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1;txq1@0000:6a:01.0-q2;rxq1@0000:6a:01.0-q3]' + -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x7 -n 8 -- -i --nb-cores=2 --txq=2 --rxq=2 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. 
Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 22: pvp packed ring vhost async test with 2core 4queue using vfio-pci driver +-------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-4 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=8 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=4,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1;txq1@0000:6a:01.0-q2;rxq1@0000:6a:01.0-q3;txq2@0000:6a:01.0-q4;rxq2@0000:6a:01.0-q5;txq3@0000:6a:01.0-q6;rxq3@0000:6a:01.0-q7]' + -- -i --nb-cores=2 --txq=4 --rxq=4 --txd=2048 --rxd=2048 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=4 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=10,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x7 -n 8 -- -i --nb-cores=2 --txq=4 --rxq=4 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518], show throughput with below command:: + + testpmd>show port stats all + +Test Case 23: pvp packed ring vhost async test with 4core 4queue using vfio-pci driver +-------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-6 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=8 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=4,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q1;txq1@0000:6a:01.0-q2;rxq1@0000:6a:01.0-q3;txq2@0000:6a:01.0-q4;rxq2@0000:6a:01.0-q5;txq3@0000:6a:01.0-q6;rxq3@0000:6a:01.0-q7]' + -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=2048 --rxd=2048 --max-pkt-len=5200 --tx-offloads=0x00008000 --forward-mode=mac -a + +2. 
Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=4 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=10,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x1f -n 8 -- -i --nb-cores=4 --txq=4 --rxq=4 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518,2048,4096], show throughput with below command:: + + testpmd>show port stats all + +Test Case 24: pvp packed ring vhost async test with 4core 8queue using vfio-pci driver +-------------------------------------------------------------------------------------- + +1. Bind one nic port and 1 dsa device to vfio-pci:: + + ./usertools/dpdk-devbind.py -b vfio-pci 0000:29:00.0 0000:6a:01.0 + +2. Launch vhost testpmd by below command:: + + ./build/app/dpdk-testpmd -l 2-6 -n 8 --huge-dir=/dev/hugepages -a 0000:29:00.0 -a 0000:6a:01.0,max_queues=8 --socket-mem 8192 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=8,dmas=[txq0@0000:6a:01.0-q0;rxq0@0000:6a:01.0-q0;txq1@0000:6a:01.0-q1;rxq1@0000:6a:01.0-q1;txq2@0000:6a:01.0-q2;rxq2@0000:6a:01.0-q2;txq3@0000:6a:01.0-q3;rxq3@0000:6a:01.0-q3;txq4@0000:6a:01.0-q4;rxq4@0000:6a:01.0-q4;txq5@0000:6a:01.0-q5;rxq5@0000:6a:01.0-q5;txq6@0000:6a:01.0-q6;rxq6@0000:6a:01.0-q6;txq7@0000:6a:01.0-q7;rxq7@0000:6a:01.0-q7]' + -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=2048 --rxd=2048 --max-pkt-len=5200 --tx-offloads=0x00008000 --forward-mode=mac -a + +2. Launch VM with mrg_rxbuf feature on:: + + taskset -c 11-18 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm \ + -m 8192 -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \ + -numa node,memdev=mem -mem-prealloc -smp cores=8,sockets=1 -drive file=/home/xingguang/osimg/ubuntu22-04.img \ + -chardev socket,id=char0,path=./vhost-net -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=ntts1 \ + -netdev user,id=ntts1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=18,rx_queue_size=1024,tx_queue_size=1024,csum=off,guest_csum=off,gso=off,host_tso4=off,guest_tso4=off,guest_ecn=off,packed=on -vnc :10 --monitor stdio + +3. Set affinity:: + + (qemu)info cpus + taskset -cp 11 xxx + ... + taskset -cp 18 xxx + +4. 
On VM, bind virtio net to vfio-pci and run testpmd:: + + mount -t hugetlbfs nodev /mnt/huge + modprobe vfio-pci + echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode + ./usertools/dpdk-devbind.py -b vfio-pci 00:04.0 + + ./build/app/dpdk-testpmd -c 0x1f -n 8 -- -i --nb-cores=4 --txq=8 --rxq=8 --txd=2048 --rxd=2048 + testpmd>set fwd csum + testpmd>start + +5. Send tcp/ip packets by packet generator with different packet sizes [64,128,256,512,1024,1280,1518,2048,4096], show throughput with below command:: + + testpmd>show port stats all \ No newline at end of file From patchwork Mon Feb 6 04:49:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 123096 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4392A41BE7; Mon, 6 Feb 2023 06:01:32 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3727742D0E; Mon, 6 Feb 2023 06:01:32 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 92C4840A7D for ; Mon, 6 Feb 2023 06:01:29 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675659689; x=1707195689; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=iezMvICWOGKtU+umQ13ClkyCLyi1ZXpSbC+fN6MftVw=; b=NGWQ0WNOSNbHIDHZmtKKQ+F+Ww5lBenjf3LYth9YdgXGjSWCSS4CjVF6 PzSjojSdDgWJVRh9ZVb3L1DTyfcnvhTUTjbQyPyI7IAf9FuKorlJ3zMQn 2zFfAEaQoEdgC8sRQ9wXD3sp+53FNOO3b0mF/llWm/yBfz9bFODUPCvsn cApSYU9o1gTsag0T04M8tIAIJEBnM64gveqAB6JI8F1bA2EISVaC+Xw7N idQOZ6PgHNFG4+xiLVofoMTDRzzGbhi5Wpd2KaICwBsY8ADvS4H25hHLT 6ToFUPIAFNjDn7XMo/1sQGqIajye+yF3eWduRjWWkNNR4irSvDkaNooaZ w==; X-IronPort-AV: E=McAfee;i="6500,9779,10612"; a="327761987" X-IronPort-AV: E=Sophos;i="5.97,276,1669104000"; d="scan'208";a="327761987" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Feb 2023 21:01:28 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10612"; a="840240884" X-IronPort-AV: E=Sophos;i="5.97,276,1669104000"; d="scan'208";a="840240884" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Feb 2023 21:01:26 -0800 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 3/4] tests/pvp_vhost_async_virtio_pmd_perf_dsa: add new testsuite Date: Mon, 6 Feb 2023 12:49:25 +0800 Message-Id: <20230206044925.3644060-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Add pvp_vhost_async_virtio_pmd_perf_dsa testsuite to test pvp vhost/virtio-pmd async data-path performance use DSA device with dpdk and kernel driver. 
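Each test case builds the vhost vdev "dmas" list the same way the test plan does: every virtqueue's txq/rxq is attached either to a DSA work queue (wqX.Y, idxd kernel driver) or to a DSA device queue (BDF-qN, vfio-pci). A minimal sketch of that mapping for the idxd cases, using a hypothetical helper name that is not part of this suite::

    # Illustration only (hypothetical helper, not in the patch): build the
    # dmas string passed to --vdev '...,dmas=[...]' for the idxd cases.
    def build_idxd_dmas(queues, dsa_index=0, share_wq=False):
        entries = []
        for q in range(queues):
            if share_wq:
                # the 8-queue cases map txqN and rxqN to the same work queue
                entries.append("txq%d@wq%d.%d" % (q, dsa_index, q))
                entries.append("rxq%d@wq%d.%d" % (q, dsa_index, q))
            else:
                # otherwise each direction gets its own work queue
                entries.append("txq%d@wq%d.%d" % (q, dsa_index, 2 * q))
                entries.append("rxq%d@wq%d.%d" % (q, dsa_index, 2 * q + 1))
        return ";".join(entries)

    # build_idxd_dmas(2) == "txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3"
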
Signed-off-by: Wei Ling --- ...ite_pvp_vhost_async_virtio_pmd_perf_dsa.py | 990 ++++++++++++++++++ 1 file changed, 990 insertions(+) create mode 100644 tests/TestSuite_pvp_vhost_async_virtio_pmd_perf_dsa.py diff --git a/tests/TestSuite_pvp_vhost_async_virtio_pmd_perf_dsa.py b/tests/TestSuite_pvp_vhost_async_virtio_pmd_perf_dsa.py new file mode 100644 index 00000000..f66fa041 --- /dev/null +++ b/tests/TestSuite_pvp_vhost_async_virtio_pmd_perf_dsa.py @@ -0,0 +1,990 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2019 Intel Corporation +# + +import json +import os +from copy import deepcopy + +import framework.rst as rst +from framework.packet import Packet +from framework.pktgen import PacketGeneratorHelper +from framework.pmd_output import PmdOutput +from framework.settings import HEADER_SIZE, UPDATE_EXPECTED, load_global_setting +from framework.test_case import TestCase +from framework.virt_common import VM + +from .virtio_common import dsa_common as DC + + +class TestPVPVhostAsyncVirtioPmdPerfDsa(TestCase): + def set_up_all(self): + """ + Run at the start of each test suite. + """ + self.vhost_user = self.dut.new_session(suite="vhost-user") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.dut_ports = self.dut.get_ports() + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.core_list = self.dut.get_core_list(config="all", socket=self.ports_socket) + self.frame_sizes = [64, 128, 256, 512, 1024, 1280, 1518] + self.headers_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["tcp"] + self.virtio_mac = "00:11:22:33:44:55" + self.number_of_ports = 1 + self.testpmd_path = self.dut.apps_name["test-pmd"] + self.testpmd_name = self.testpmd_path.split("/")[-1] + self.out_path = "/tmp" + out = self.tester.send_expect("ls -d %s" % self.out_path, "# ") + if "No such file or directory" in out: + self.tester.send_expect("mkdir -p %s" % self.out_path, "# ") + self.base_dir = self.dut.base_dir.replace("~", "/root") + self.logger.info(self.base_dir) + # create an instance to set stream field setting + self.pktgen_helper = PacketGeneratorHelper() + self.save_result_flag = True + self.json_obj = {} + self.DC = DC(self) + + def set_up(self): + """ + Run before each test case. 
+ """ + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") + self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + # Prepare the result table + self.table_header = ["Frame"] + self.table_header.append("Mode/RXD-TXD") + self.table_header.append("Mpps") + self.table_header.append("% linerate") + self.result_table_create(self.table_header) + + self.test_parameters = self.get_suite_cfg()["test_parameters"] + # test parameters include: frames size, descriptor numbers + self.test_parameters = self.get_suite_cfg()["test_parameters"] + + # traffic duraion in second + self.test_duration = self.get_suite_cfg()["test_duration"] + + # initilize throughput attribution + # {'$framesize':{"$nb_desc": 'throughput'} + self.throughput = {} + + # Accepted tolerance in Mpps + self.gap = self.get_suite_cfg()["accepted_tolerance"] + self.test_result = {} + self.nb_desc = self.test_parameters[64][0] + + def start_vhost_user_testpmd(self, cores, eal_param, param, ports): + """ + start testpmd on vhost-user + """ + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + ports=ports, + prefix="vhost-user", + fixed_prefix=True, + ) + + def start_one_vm(self, packed=False): + """ + start qemus + """ + self.vm = VM(self.dut, "vm0", "vhost_sample") + self.vm.load_config() + vm_params = {} + vm_params["driver"] = "vhost-user" + vm_params["opt_path"] = "%s/vhost-net" % self.base_dir + vm_params["opt_mac"] = "%s" % self.virtio_mac + vm_params["opt_queue"] = self.queues + packed_param = ",packed=on" if packed else "" + vm_params["opt_settings"] = ( + "disable-modern=false,mrg_rxbuf=on,mq=on,rx_queue_size=1024,tx_queue_size=1024%s" + % packed_param + ) + self.vm.set_vm_device(**vm_params) + try: + self.vm_dut = self.vm.start(load_config=False) + if self.vm_dut is None: + raise Exception("Set up VM ENV failed") + except Exception as e: + self.logger.error("ERROR: Failure for %s" % str(e)) + + def start_vm_testpmd(self): + """ + start testpmd in VM + """ + self.vm0_pmd = PmdOutput(self.vm_dut) + param = "--nb-cores=%d --txq=%d --rxq=%d --txd=2048 --rxd=2048" % ( + self.nb_cores, + self.queues, + self.queues, + ) + cores = self.vm_dut.get_core_list(config="all")[0 : (self.nb_cores + 1)] + self.vm0_pmd.start_testpmd(cores=cores, param=param) + self.vm0_pmd.execute_cmd("set fwd csum") + self.vm0_pmd.execute_cmd("start") + + def send_and_verify(self, case_info): + """ + Send packet with packet generator and verify + """ + for frame_size in self.frame_sizes: + payload_size = frame_size - self.headers_size + tgen_input = [] + self.throughput[frame_size] = dict() + self.logger.info( + "Test running at parameters: " + + "framesize: {}, rxd/txd: {}".format(frame_size, self.nb_desc) + ) + rx_port = self.tester.get_local_port(self.dut_ports[0]) + tx_port = self.tester.get_local_port(self.dut_ports[0]) + fields_config = { + "ip": { + "src": {"action": "random"}, + "dst": {"action": "random"}, + }, + } + pkt = Packet() + pkt.assign_layers(["ether", "ipv4", "tcp", "raw"]) + pkt.config_layers( + [ + ("ether", {"dst": "%s" % self.virtio_mac}), + ("ipv4", {"src": "1.1.1.1", "dst": "2.2.2.2"}), + ("raw", {"payload": ["01"] * int("%d" % payload_size)}), + ] + ) + pkt.save_pcapfile( + self.tester, + "%s/pvp_virtio_pmd_perf_dsa_%s.pcap" % (self.out_path, frame_size), + ) + tgen_input.append( + ( + tx_port, + rx_port, + 
"%s/pvp_virtio_pmd_perf_dsa_%s.pcap" % (self.out_path, frame_size), + ) + ) + self.tester.pktgen.clear_streams() + streams = self.pktgen_helper.prepare_stream_from_tginput( + tgen_input, 100, fields_config, self.tester.pktgen + ) + # set traffic option + traffic_opt = {"delay": 10, "duration": 30} + _, pps = self.tester.pktgen.measure_throughput( + stream_ids=streams, options=traffic_opt + ) + Mpps = pps / 1000000.0 + self.throughput[frame_size][self.nb_desc] = Mpps + linerate = ( + Mpps + * 100 + / float(self.wirespeed(self.nic, frame_size, self.number_of_ports)) + ) + results_row = [frame_size] + results_row.append(case_info) + results_row.append(Mpps) + results_row.append(linerate) + self.result_table_add(results_row) + + def handle_expected(self): + """ + Update expected numbers to configurate file: $DTS_CFG_FOLDER/$suite_name.cfg + """ + if load_global_setting(UPDATE_EXPECTED) == "yes": + for frame_size in self.test_parameters.keys(): + for nb_desc in self.test_parameters[frame_size]: + self.expected_throughput[frame_size][nb_desc] = round( + self.throughput[frame_size][nb_desc], 3 + ) + + def handle_results(self): + """ + results handled process: + 1, save to self.test_results + 2, create test results table + 3, save to json file for Open Lab + """ + header = self.table_header + header.append("Expected Throughput") + header.append("Throughput Difference") + for frame_size in self.test_parameters.keys(): + wirespeed = self.wirespeed(self.nic, frame_size, self.number_of_ports) + ret_datas = {} + for nb_desc in self.test_parameters[frame_size]: + ret_data = {} + ret_data[header[0]] = frame_size + ret_data[header[1]] = nb_desc + ret_data[header[2]] = "{:.3f} Mpps".format( + self.throughput[frame_size][nb_desc] + ) + ret_data[header[3]] = "{:.3f}%".format( + self.throughput[frame_size][nb_desc] * 100 / wirespeed + ) + ret_data[header[4]] = "{:.3f} Mpps".format( + self.expected_throughput[frame_size][nb_desc] + ) + ret_data[header[5]] = "{:.3f} Mpps".format( + self.throughput[frame_size][nb_desc] + - self.expected_throughput[frame_size][nb_desc] + ) + ret_datas[nb_desc] = deepcopy(ret_data) + self.test_result[frame_size] = deepcopy(ret_datas) + # Create test results table + self.result_table_create(header) + for frame_size in self.test_parameters.keys(): + for nb_desc in self.test_parameters[frame_size]: + table_row = list() + for i in range(len(header)): + table_row.append(self.test_result[frame_size][nb_desc][header[i]]) + self.result_table_add(table_row) + # present test results to screen + self.result_table_print() + # save test results as a file + if self.save_result_flag: + self.save_result(self.test_result) + + def save_result(self, data): + """ + Saves the test results as a separated file named with + self.nic+_perf_virtio_user_pvp.json in output folder + if self.save_result_flag is True + """ + case_name = self.running_case + self.json_obj[case_name] = list() + status_result = [] + for frame_size in self.test_parameters.keys(): + for nb_desc in self.test_parameters[frame_size]: + row_in = self.test_result[frame_size][nb_desc] + row_dict0 = dict() + row_dict0["performance"] = list() + row_dict0["parameters"] = list() + row_dict0["parameters"] = list() + result_throughput = float(row_in["Mpps"].split()[0]) + expected_throughput = float(row_in["Expected Throughput"].split()[0]) + # delta value and accepted tolerance in percentage + delta = result_throughput - expected_throughput + gap = expected_throughput * -self.gap * 0.01 + delta = float(delta) + gap = float(gap) + 
self.logger.info("Accept tolerance are (Mpps) %f" % gap) + self.logger.info("Throughput Difference are (Mpps) %f" % delta) + if result_throughput > expected_throughput + gap: + row_dict0["status"] = "PASS" + else: + row_dict0["status"] = "FAIL" + row_dict1 = dict( + name="Throughput", value=result_throughput, unit="Mpps", delta=delta + ) + row_dict2 = dict( + name="Txd/Rxd", value=row_in["Mode/RXD-TXD"], unit="descriptor" + ) + row_dict3 = dict(name="frame_size", value=row_in["Frame"], unit="bytes") + row_dict0["performance"].append(row_dict1) + row_dict0["parameters"].append(row_dict2) + row_dict0["parameters"].append(row_dict3) + self.json_obj[case_name].append(row_dict0) + status_result.append(row_dict0["status"]) + with open( + os.path.join( + rst.path2Result, "{0:s}_{1}.json".format(self.nic, self.suite_name) + ), + "w", + ) as fp: + json.dump(self.json_obj, fp) + self.verify("FAIL" not in status_result, "Exceeded Gap") + + def perf_test(self, case_info, ports, packed=False, port_options=""): + self.test_target = self.running_case + self.expected_throughput = self.get_suite_cfg()["expected_throughput"][ + self.test_target + ] + vhost_eal_param = ( + "--socket-mem 8192 --vdev 'net_vhost0,iface=vhost-net,queues=%d,dmas=[%s]'" + % (self.queues, self.dmas) + ) + vhost_param = ( + "--nb-cores=%d --txq=%d --rxq=%d --txd=2048 --rxd=2048 --forward-mode=mac -a" + % (self.nb_cores, self.queues, self.queues) + ) + cores = self.core_list[0 : (self.nb_cores + 1)] + if not port_options: + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + prefix="vhost-user", + fixed_prefix=True, + ) + else: + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=vhost_eal_param, + param=vhost_param, + ports=ports, + port_options=port_options, + prefix="vhost-user", + fixed_prefix=True, + ) + + self.start_one_vm(packed=packed) + self.start_vm_testpmd() + self.send_and_verify(case_info=case_info) + self.result_table_print() + self.handle_expected() + self.handle_results() + + def test_perf_virtio_pmd_split_ring_1c_1q_idxd(self): + """ + Test Case 1: pvp split ring vhost async test with 1core 1queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=2, dsa_index=0) + self.queues = 1 + self.dmas = "txq0@wq0.0;rxq0@wq0.1" + self.nb_cores = 1 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=False, port_options="" + ) + + def test_perf_virtio_pmd_split_ring_1c_2q_idxd(self): + """ + Test Case 2: pvp split ring vhost async test with 1core 2queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + self.queues = 2 + self.dmas = "txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3" + self.nb_cores = 1 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=False, port_options="" + ) + + def test_perf_virtio_pmd_split_ring_2c_2q_idxd(self): + """ + Test Case 3: pvp split ring vhost async test with 2core 2queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + self.queues = 2 + self.dmas = "txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3" + self.nb_cores = 1 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=False, port_options="" + ) + + def test_perf_virtio_pmd_split_ring_2c_4q_idxd(self): + """ + Test Case 4: pvp split ring vhost 
async test with 2core 4queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.queues = 4 + self.dmas = ( + "txq0@wq0.0;" + "rxq0@wq0.1;" + "txq1@wq0.2;" + "rxq1@wq0.3;" + "txq2@wq0.4;" + "rxq2@wq0.5;" + "txq3@wq0.6;" + "rxq3@wq0.7" + ) + self.nb_cores = 2 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=False, port_options="" + ) + + def test_perf_virtio_pmd_split_ring_4c_4q_idxd(self): + """ + Test Case 5: pvp split ring vhost async test with 4core 4queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.queues = 4 + self.dmas = ( + "txq0@wq0.0;" + "rxq0@wq0.1;" + "txq1@wq0.2;" + "rxq1@wq0.3;" + "txq2@wq0.4;" + "rxq2@wq0.5;" + "txq3@wq0.6;" + "rxq3@wq0.7" + ) + self.nb_cores = 4 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=False, port_options="" + ) + + def test_perf_virtio_pmd_split_ring_4c_8q_idxd(self): + """ + Test Case 6: pvp split ring vhost async test with 4core 8queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.queues = 8 + self.dmas = ( + "txq0@wq0.0;" + "rxq0@wq0.0;" + "txq1@wq0.1;" + "rxq1@wq0.1;" + "txq2@wq0.2;" + "rxq2@wq0.2;" + "txq3@wq0.3;" + "rxq3@wq0.3;" + "txq4@wq0.4;" + "rxq4@wq0.4;" + "txq5@wq0.5;" + "rxq5@wq0.5;" + "txq6@wq0.6;" + "rxq6@wq0.6;" + "txq7@wq0.7;" + "rxq7@wq0.7" + ) + self.nb_cores = 4 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=False, port_options="" + ) + + def test_perf_virtio_pmd_packed_ring_1c_1q_idxd(self): + """ + Test Case 7: pvp packed ring vhost async test with 1core 1queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=2, dsa_index=0) + self.queues = 1 + self.dmas = "txq0@wq0.0;rxq0@wq0.1" + self.nb_cores = 1 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=True, port_options="" + ) + + def test_perf_virtio_pmd_packed_ring_1c_2q_idxd(self): + """ + Test Case 8: pvp packed ring vhost async test with 1core 2queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + self.queues = 2 + self.dmas = "txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3" + self.nb_cores = 1 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=True, port_options="" + ) + + def test_perf_virtio_pmd_packed_ring_2c_2q_idxd(self): + """ + Test Case 9: pvp packed ring vhost async test with 2core 2queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=4, dsa_index=0) + self.queues = 2 + self.dmas = "txq0@wq0.0;rxq0@wq0.1;txq1@wq0.2;rxq1@wq0.3" + self.nb_cores = 2 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=True, port_options="" + ) + + def test_perf_virtio_pmd_packed_ring_2c_4q_idxd(self): + """ + Test Case 10: pvp packed ring vhost async test with 2core 4queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.queues = 4 + self.dmas = ( + "txq0@wq0.0;" + "rxq0@wq0.1;" + "txq1@wq0.2;" + "rxq1@wq0.3;" + "txq2@wq0.4;" + "rxq2@wq0.5;" + "txq3@wq0.6;" + "rxq3@wq0.7" + ) + self.nb_cores = 2 + ports = 
[self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=True, port_options="" + ) + + def test_perf_virtio_pmd_packed_ring_4c_4q_idxd(self): + """ + Test Case 11: pvp packed ring vhost async test with 4core 4queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.queues = 4 + self.dmas = ( + "txq0@wq0.0;" + "rxq0@wq0.1;" + "txq1@wq0.2;" + "rxq1@wq0.3;" + "txq2@wq0.4;" + "rxq2@wq0.5;" + "txq3@wq0.6;" + "rxq3@wq0.7" + ) + self.nb_cores = 4 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=True, port_options="" + ) + + def test_perf_virtio_pmd_packed_ring_4c_8q_idxd(self): + """ + Test Case 12: pvp packed ring vhost async test with 4core 8queue using idxd kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.queues = 8 + self.dmas = ( + "txq0@wq0.0;" + "rxq0@wq0.0;" + "txq1@wq0.1;" + "rxq1@wq0.1;" + "txq2@wq0.2;" + "rxq2@wq0.2;" + "txq3@wq0.3;" + "rxq3@wq0.3;" + "txq4@wq0.4;" + "rxq4@wq0.4;" + "txq5@wq0.5;" + "rxq5@wq0.5;" + "txq6@wq0.6;" + "rxq6@wq0.6;" + "txq7@wq0.7;" + "rxq7@wq0.7" + ) + self.nb_cores = 4 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + self.perf_test( + case_info=self.running_case, ports=ports, packed=True, port_options="" + ) + + def test_perf_virtio_pmd_split_ring_1c_1q_vfio_pci(self): + """ + Test Case 13: pvp split ring vhost async test with 1core 1queue using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 1 + self.dmas = "txq0@%s-q0;rxq0@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + self.nb_cores = 1 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.perf_test( + case_info=self.running_case, + ports=ports, + packed=False, + port_options=port_options, + ) + + def test_perf_virtio_pmd_split_ring_1c_2q_vfio_pci(self): + """ + Test Case 14: pvp split ring vhost async test with 1core 2queue using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 2 + self.dmas = ( + "txq0@%s-q0;" + "rxq0@%s-q1;" + "txq1@%s-q2;" + "rxq1@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + self.nb_cores = 1 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=4"} + self.perf_test( + case_info=self.running_case, + ports=ports, + packed=False, + port_options=port_options, + ) + + def test_perf_virtio_pmd_split_ring_2c_2q_vfio_pci(self): + """ + Test Case 15: pvp split ring vhost async test with 2core 2queue using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 2 + self.dmas = ( + "txq0@%s-q0;" + "rxq0@%s-q1;" + "txq1@%s-q2;" + "rxq1@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + self.nb_cores = 2 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=4"} + self.perf_test( + case_info=self.running_case, + 
ports=ports, + packed=False, + port_options=port_options, + ) + + def test_perf_virtio_pmd_split_ring_2c_4q_vfio_pci(self): + """ + Test Case 16: pvp split ring vhost async test with 2core 4queue using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 4 + self.dmas = ( + "txq0@%s-q0;" + "rxq0@%s-q1;" + "txq1@%s-q2;" + "rxq1@%s-q3;" + "txq2@%s-q4;" + "rxq2@%s-q5;" + "txq3@%s-q6;" + "rxq3@%s-q7" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + self.nb_cores = 2 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.perf_test( + case_info=self.running_case, + ports=ports, + packed=False, + port_options=port_options, + ) + + def test_perf_virtio_pmd_split_ring_4c_4q_vfio_pci(self): + """ + Test Case 17: pvp split ring vhost async test with 4core 4queue using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 4 + self.dmas = ( + "txq0@%s-q0;" + "rxq0@%s-q1;" + "txq1@%s-q2;" + "rxq1@%s-q3;" + "txq2@%s-q4;" + "rxq2@%s-q5;" + "txq3@%s-q6;" + "rxq3@%s-q7" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + self.nb_cores = 4 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.perf_test( + case_info=self.running_case, + ports=ports, + packed=False, + port_options=port_options, + ) + + def test_perf_virtio_pmd_split_ring_4c_8q_vfio_pci(self): + """ + Test Case 18: pvp split ring vhost async test with 4core 8queue using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 4 + self.dmas = ( + "txq0@%s-q0;" + "rxq0@%s-q1;" + "txq1@%s-q2;" + "rxq1@%s-q3;" + "txq2@%s-q4;" + "rxq2@%s-q5;" + "txq3@%s-q6;" + "rxq3@%s-q7" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + self.nb_cores = 4 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.perf_test( + case_info=self.running_case, + ports=ports, + packed=False, + port_options=port_options, + ) + + def test_perf_virtio_pmd_packed_ring_1c_1q_vfio_pci(self): + """ + Test Case 19: pvp packed ring vhost async test with 1core 1queue using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 1 + self.dmas = "txq0@%s-q0;rxq0@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + self.nb_cores = 1 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.perf_test( + case_info=self.running_case, + ports=ports, + packed=True, + port_options=port_options, + ) + + def 
test_perf_virtio_pmd_packed_ring_1c_2q_vfio_pci(self): + """ + Test Case 20: pvp packed ring vhost async test with 1core 2queue using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 2 + self.dmas = ( + "txq0@%s-q0;" + "rxq0@%s-q1;" + "txq1@%s-q2;" + "rxq1@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + self.nb_cores = 1 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=4"} + self.perf_test( + case_info=self.running_case, + ports=ports, + packed=True, + port_options=port_options, + ) + + def test_perf_virtio_pmd_packed_ring_2c_2q_vfio_pci(self): + """ + Test Case 21: pvp packed ring vhost async test with 2core 2queue using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 2 + self.dmas = ( + "txq0@%s-q0;" + "rxq0@%s-q1;" + "txq1@%s-q2;" + "rxq1@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + self.nb_cores = 2 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=4"} + self.perf_test( + case_info=self.running_case, + ports=ports, + packed=True, + port_options=port_options, + ) + + def test_perf_virtio_pmd_packed_ring_2c_4q_vfio_pci(self): + """ + Test Case 22: pvp packed ring vhost async test with 2core 4queue using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 4 + self.dmas = ( + "txq0@%s-q0;" + "rxq0@%s-q1;" + "txq1@%s-q2;" + "rxq1@%s-q3;" + "txq2@%s-q4;" + "rxq2@%s-q5;" + "txq3@%s-q6;" + "rxq3@%s-q7" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + self.nb_cores = 2 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.perf_test( + case_info=self.running_case, + ports=ports, + packed=True, + port_options=port_options, + ) + + def test_perf_virtio_pmd_packed_ring_4c_4q_vfio_pci(self): + """ + Test Case 23: pvp packed ring vhost async test with 4core 4queue using vfio-pci driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 4 + self.dmas = ( + "txq0@%s-q0;" + "rxq0@%s-q1;" + "txq1@%s-q2;" + "rxq1@%s-q3;" + "txq2@%s-q4;" + "rxq2@%s-q5;" + "txq3@%s-q6;" + "rxq3@%s-q7" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + self.nb_cores = 4 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.perf_test( + case_info=self.running_case, + ports=ports, + packed=True, + port_options=port_options, + ) + + def test_perf_virtio_pmd_packed_ring_4c_8q_vfio_pci(self): + """ + Test Case 24: pvp packed ring vhost async test with 4core 8queue using vfio-pci driver 
+ """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", socket=self.ports_socket + ) + self.queues = 4 + self.dmas = ( + "txq0@%s-q0;" + "rxq0@%s-q1;" + "txq1@%s-q2;" + "rxq1@%s-q3;" + "txq2@%s-q4;" + "rxq2@%s-q5;" + "txq3@%s-q6;" + "rxq3@%s-q7" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + self.nb_cores = 4 + ports = [self.dut.ports_info[self.dut_ports[0]]["pci"]] + ports.append(self.use_dsa_list[0]) + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.perf_test( + case_info=self.running_case, + ports=ports, + packed=True, + port_options=port_options, + ) + + def stop_vm_and_quit_testpmd(self): + self.vm.stop() + out = self.vhost_user_pmd.execute_cmd("stop") + self.logger.info(out) + self.vhost_user_pmd.quit() + + def tear_down(self): + """ + Run after each test case. + """ + self.stop_vm_and_quit_testpmd() + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + + def tear_down_all(self): + """ + Run after each test suite. + """ + self.dut.close_session(self.vhost_user) From patchwork Mon Feb 6 04:49:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 123097 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6A2A741BE7; Mon, 6 Feb 2023 06:01:41 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5D7A041611; Mon, 6 Feb 2023 06:01:41 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id C5A2040A7D for ; Mon, 6 Feb 2023 06:01:38 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675659699; x=1707195699; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=1gbNE5lQIiBusM7MDiXR25p3PLz4cB+emAjKxX6RB98=; b=Ucl44yX3Os3FAkBqIx+3eEe5UV7ohDv08H3spVfkHyEv55bk45TF+x66 S1XFI34YVxXcqbt1YqVacqm8scWndwLHJkPD+w7h9tUqISFyNjTKRgNSS qVuMWu532p4CZzDfUBov0TqkptbjRTv8qdN1WZX88k5guiQOQ5/dhcmeZ WN5KrlHvBTwQSaXEmbHj0WMCqDCTotKuxkkjVJL4tbLbI5ArVIzxLBJYF VR26UbSxy7sF7/BjClL7gO+ERDvIh6FD8YNqhDDonONKAj6VBgry9m7+u Frj6SmRCkXs6o0AzaYC7kav5turGeF/v4St4zew4lfJJyF+Yp3u03H38B g==; X-IronPort-AV: E=McAfee;i="6500,9779,10612"; a="327762002" X-IronPort-AV: E=Sophos;i="5.97,276,1669104000"; d="scan'208";a="327762002" Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Feb 2023 21:01:37 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10612"; a="840240949" X-IronPort-AV: E=Sophos;i="5.97,276,1669104000"; d="scan'208";a="840240949" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Feb 2023 21:01:36 -0800 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 4/4] conf/pvp_vhost_async_virtio_pmd_perf_dsa: add testsuite config file Date: Mon, 6 Feb 2023 12:49:35 +0800 Message-Id: <20230206044935.3644120-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and 
discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Add pvp_vhost_async_virtio_pmd_perf_dsa config file to test pvp vhost/virtio-pmd async data-path performance use DSA device with dpdk and kernel driver. Signed-off-by: Wei Ling --- conf/pvp_vhost_async_virtio_pmd_perf_dsa.cfg | 557 +++++++++++++++++++ 1 file changed, 557 insertions(+) create mode 100644 conf/pvp_vhost_async_virtio_pmd_perf_dsa.cfg diff --git a/conf/pvp_vhost_async_virtio_pmd_perf_dsa.cfg b/conf/pvp_vhost_async_virtio_pmd_perf_dsa.cfg new file mode 100644 index 00000000..a28d0063 --- /dev/null +++ b/conf/pvp_vhost_async_virtio_pmd_perf_dsa.cfg @@ -0,0 +1,557 @@ +[suite] +update_expected = True +test_parameters = {64: [1024], 128: [1024], 256: [1024], 512: [1024], 1024: [1024], 1280: [1024], 1518: [1024]} +test_duration = 60 +accepted_tolerance = 2 +expected_throughput = { + 'test_perf_virtio_pmd_split_ring_1c_1q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_split_ring_1c_2q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_split_ring_2c_2q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_split_ring_2c_4q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_split_ring_4c_4q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_split_ring_4c_4q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_1c_1q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_1c_2q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_2c_2q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_2c_4q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_4c_4q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: 
{ + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_4c_4q_idxd': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_split_ring_1c_1q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_split_ring_1c_2q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_split_ring_2c_2q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_split_ring_2c_4q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_split_ring_4c_4q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_split_ring_4c_4q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_1c_1q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_1c_2q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_2c_2q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_2c_4q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_4c_4q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + } + }, + 'test_perf_virtio_pmd_packed_ring_4c_4q_vfio_pci': { + 64: { + 1024: 0.000 + }, + 128: { + 1024: 0.000 + }, + 256: { + 1024: 0.000 + }, + 512: { + 1024: 0.000 + }, + 1024: { + 1024: 0.000 + }, + 1280: { + 1024: 0.000 + }, + 1518: { + 1024: 0.000 + }}}
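
The expected_throughput table above is keyed by test case name, then frame size, then descriptor count, with values in Mpps; with update_expected = True the 0.000 placeholders are meant to be filled in from a baseline run. A minimal sketch of how a suite-style tolerance check would consume these numbers, for illustration only (helper name is hypothetical, not part of the patch)::

    # Illustration only: the pass/fail rule applied to one measurement.
    # measured/expected are in Mpps; tolerance_pct mirrors accepted_tolerance.
    def passes_tolerance(measured, expected, tolerance_pct=2):
        # fail only when the result drops more than tolerance_pct percent
        # below the expected throughput recorded in this cfg file
        return measured > expected * (1 - tolerance_pct / 100.0)

    # passes_tolerance(9.9, 10.0) -> True; passes_tolerance(9.7, 10.0) -> False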