From patchwork Fri Nov 11 07:52:10 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 119777
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 1/2] test_plans/loopback_virtio_user_server_mode_dsa_test_plan: modify the dmas parameter and add cases
Date: Fri, 11 Nov 2022 15:52:10 +0800
Message-Id: <20221111075210.2425462-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

1. From DPDK-22.11, the dmas parameter has changed from `lcore-dma=[lcore1@0000:00:04.0]` to `dmas=[txq0@0000:00:04.0]` by a DPDK local patch, so modify the dmas parameter accordingly.
2. Add new test cases to cover the two DSA driver types, i.e. the DPDK vfio-pci driver and the kernel idxd driver.

Signed-off-by: Wei Ling
---
 ..._virtio_user_server_mode_dsa_test_plan.rst | 1015 +++++++++++++----
 1 file changed, 783 insertions(+), 232 deletions(-)

diff --git a/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
index a96ce539..e04b6759 100644
--- a/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_dsa_test_plan.rst
@@ -1,17 +1,24 @@
..
SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2022 Intel Corporation

-=======================================================================
+=====================================================================
 Loopback vhost-user/virtio-user server mode with DSA driver test plan
-=======================================================================
+=====================================================================

 Description
 ===========

 Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way.
-In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
-channels and one DMA channel can be shared by multiple vrings at the same time. Vhost enqueue and dequeue operation with CBDMA channels is supported
-in both split and packed ring.
+Asynchronous data path is enabled per tx/rx queue, and users need to specify the DMA device used by the tx/rx queue.
+Each tx/rx queue can only use one DMA device, but one DMA device can be shared among multiple tx/rx queues of different vhost PMD ports.
+
+Two PMD parameters are added:
+- dmas: specify the DMA device used by a tx/rx queue. (Default: no queue enables the asynchronous data path)
+- dma-ring-size: DMA ring size. (Default: 4096)
+
+Here is an example:
+--vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=4096'

 This document provides the test plan for testing the following features when Vhost-user uses the asynchronous data path with
 the DSA driver (kernel IDXD driver and DPDK vfio-pci driver) in loopback virtio-user topology.

@@ -29,7 +36,7 @@ If iommu on, kernel dsa driver only can work with iova=va by program IOMMU, can'

 Note:
 1. When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.
-2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd, and the suite has not yet been automated.
+2. A DPDK local patch for the vhost PMD is needed when testing the Vhost asynchronous data path with testpmd.

 Prerequisites
 =============

 General set up
 --------------
 # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static
 # ninja -C -j 110
 For example:
-   CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
-   ninja -C x86_64-native-linuxapp-gcc -j 110
+   CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+   ninja -C x86_64-native-linuxapp-gcc -j 110

 2. Get the PCI device ID and DSA device ID of DUT, for example, 0000:4f:00.1 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::

@@ -106,9 +113,9 @@ Common steps
     Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"

 Test Case 1: Loopback split ring server mode large chain packets stress test with dsa dpdk driver
----------------------------------------------------------------------------------------------------
+-------------------------------------------------------------------------------------------------
 This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user split ring with server mode
-when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver.
Both iova as VA and PA mode test. +when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. 1. Bind 1 dsa device to vfio-pci like common step 1:: @@ -117,8 +124,8 @@ when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk dr 2. Launch vhost:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:e7:01.0,max_queues=1 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:e7:01.0-q0] + --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q0],client=1' \ + --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=65535 3. launch virtio and start testpmd:: @@ -129,59 +136,43 @@ when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk dr 4. Send large pkts from vhost and check the stats:: - testpmd>set txpkts 45535,45535,45535,45535,45535 + testpmd>set txpkts 65535,65535 testpmd>start tx_first 32 testpmd>show port stats all -5. Stop and quit vhost testpmd and relaunch vhost with pa mode by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost -a 0000:e7:01.0,max_queues=4 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=pa -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@0000:e7:01.0-q0] - -6. Rerun step 4. - Test Case 2: Loopback packed ring server mode large chain packets stress test with dsa dpdk driver ----------------------------------------------------------------------------------------------------- +-------------------------------------------------------------------------------------------------- This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user packed ring with server mode -when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. +when vhost uses the asynchronous operations with dsa dpdk driver. 1. Bind 1 dsa port to vfio-pci as common step 1:: - # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 2. Launch vhost:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:f1:01.0,max_queues=1 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f1:01.0-q0] + --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q0],client=1' \ + --iova=va -- -i --nb-cores=1 --mbuf-size=65535 3. launch virtio and start testpmd:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=testpmd0 --no-pci \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=testpmd0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1 \ -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1 testpmd>start 4. Send large pkts from vhost and check the stats:: - testpmd>set txpkts 45535,45535,45535,45535,45535 + testpmd>set txpkts 65535,65535 testpmd>start tx_first 32 testpmd>show port stats all -5. 
Stop and quit vhost testpmd and relaunch vhost with pa mode by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:f1:01.0,max_queues=1 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@0000:f1:01.0-q0] - -6. Rerun step 4. - -Test Case 3: Loopback split ring all path server mode and multi-queues payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------ +Test Case 3: Loopback split ring inorder mergeable path multi-queues payload check with server mode and dsa dpdk driver +----------------------------------------------------------------------------------------------------------------------- This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring -all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. -Both iova as VA and PA mode test. +inorder mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + 1. bind 3 dsa port to vfio-pci like common step 1:: @@ -191,12 +182,11 @@ Both iova as VA and PA mode test. 2. Launch vhost:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q1,lcore14@0000:e7:01.0-q2] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=8 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -3. Launch virtio-user with split ring mergeable inorder path:: +3. Launch virtio-user with split ring inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \ @@ -221,9 +211,25 @@ Both iova as VA and PA mode test. 6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. -7. Quit and relaunch vhost and rerun step 4-6. +Test Case 4: Loopback split ring mergeable path multi-queues payload check with server mode and dsa dpdk driver +--------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + + +1. 
bind 3 dsa port to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 + +2. Launch vhost:: -8. Quit and relaunch virtio with split ring mergeable path as below:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=8 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with split ring mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \ @@ -231,9 +237,42 @@ Both iova as VA and PA mode test. testpmd>set fwd csum testpmd>start -9. Rerun steps 4-7. +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd csum + testpmd> set txpkts 64,64,64,2000,2000,2000 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. + +Test Case 5: Loopback split ring non-mergeable path multi-queues payload check with server mode and dsa dpdk driver +------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +non-mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + -10. Quit and relaunch virtio with split ring non-mergeable path as below:: +1. bind 3 dsa port to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=8 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. 
Launch virtio-user with split ring non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1 \ @@ -241,9 +280,13 @@ Both iova as VA and PA mode test. testpmd>set fwd csum testpmd>start -11. Rerun step 4. +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -12. Send pkts from vhost, check loopback performance can get expected and each queue can receive packets:: +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: testpmd> set fwd csum testpmd> set txpkts 64,128,256,512 @@ -252,11 +295,27 @@ Both iova as VA and PA mode test. testpmd> show port stats all testpmd> stop -13. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +Test Case 6: Loopback split ring inorder non-mergeable path multi-queues payload check with server mode and dsa dpdk driver +--------------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +inorder non-mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + + +1. bind 3 dsa port to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 -14. Quit and relaunch vhost and rerun step 11-13. +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=8 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -15. Quit and relaunch virtio with split ring inorder non-mergeable path as below:: +3. Launch virtio-user with split ring inorder non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \ @@ -264,9 +323,42 @@ Both iova as VA and PA mode test. testpmd>set fwd csum testpmd>start -16. Rerun step 11-14. +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. 
Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +Test Case 7: Loopback split ring vectorized path multi-queues payload check with server mode and dsa dpdk driver +---------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +vectorized path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + + +1. bind 3 dsa port to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=8 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -17. Quit and relaunch virtio with split ring vectorized path as below:: +3. Launch virtio-user with split ring vectorized path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1 \ @@ -274,30 +366,28 @@ Both iova as VA and PA mode test. testpmd>set fwd csum testpmd>start -18. Rerun step 11-14. - -19. Quit and relaunch vhost with diff channel:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:ec:01.0-q1,lcore13@0000:f1:01.0-q2,lcore14@0000:ec:01.0-q1,lcore14@0000:f1:01.0-q2,lcore15@0000:ec:01.0-q1,lcore15@0000:f1:01.0-q2] +4. Attach pdump secondary process to primary process by same file-prefix:: -20. Rerun steps 11-14. + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -21. Quit and relaunch vhost w/ iova=pa:: +5. 
Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=pa -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q0,lcore13@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q1,lcore14@0000:e7:01.0-q2,lcore15@0000:e7:01.0-q1,lcore15@0000:e7:01.0-q2] + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop -22. Rerun steps 11-14. +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. -Test Case 4: Loopback packed ring all path server mode and multi-queues payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------- +Test Case 8: Loopback packed ring inorder mergeable path multi-queues payload check with server mode and dsa dpdk driver +------------------------------------------------------------------------------------------------------------------------ This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring -all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. Both iova as VA and PA mode test. +inorder mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + 1. bind 2 dsa port to vfio-pci like common step 1:: @@ -308,11 +398,10 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue 2. Launch vhost:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore12@0000:e7:01.0-q1,lcore13@0000:e7:01.0-q2,lcore14@0000:e7:01.0-q3] + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -3. Launch virtio-user with packed ring mergeable inorder path:: +3. Launch virtio-user with packed ring inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \ @@ -337,9 +426,25 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue 6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. -7. Quit and relaunch vhost and rerun step 4-6. 
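Note: the pcap check in step 6 can be scripted instead of inspected by hand. Below is a minimal sketch (an illustration only, not part of the suite), assuming scapy is installed on the DUT and the pdump-virtio-rx-0.pcap/pdump-virtio-rx-3.pcap files written by dpdk-pdump in step 4 are in the current directory::

    #!/usr/bin/env python3
    # Sketch: verify every captured packet has the expected chained length and
    # that all received payloads are identical (expected_len is 6192 here,
    # use 960 for the "set txpkts 64,128,256,512" cases).
    from scapy.all import rdpcap, Raw

    expected_len = 6192
    for name in ("pdump-virtio-rx-0.pcap", "pdump-virtio-rx-3.pcap"):
        pkts = rdpcap(name)
        assert pkts, f"{name}: no packets captured"
        bad = [len(p) for p in pkts if len(p) != expected_len]
        assert not bad, f"{name}: unexpected packet lengths {bad[:5]}"
        payloads = {bytes(p[Raw]) if Raw in p else bytes(p) for p in pkts}
        assert len(payloads) == 1, f"{name}: payloads differ between packets"
        print(f"{name}: {len(pkts)} packets, all {expected_len} bytes, payload identical")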
+Test Case 9: Loopback packed ring mergeable path multi-queues payload check with server mode and dsa dpdk driver +---------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring +mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + + +1. bind 2 dsa port to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -8. Quit and relaunch virtio with packed ring mergeable path as below:: +3. Launch virtio-user with packed ring mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \ @@ -347,9 +452,42 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue testpmd>set fwd csum testpmd>start -9. Rerun steps 4-7. +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd csum + testpmd> set txpkts 64,64,64,2000,2000,2000 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. -10. Quit and relaunch virtio with packed ring non-mergeable path as below:: +Test Case 10: Loopback packed ring non-mergeable path multi-queues payload check with server mode and dsa dpdk driver +--------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring +non-mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + + +1. bind 2 dsa port to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 + +2. 
Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with packed ring non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \ @@ -357,9 +495,13 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue testpmd>set fwd csum testpmd>start -11. Rerun step 4. +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -12. Send pkts from vhost, check loopback performance can get expected and each queue can receive packets:: +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: testpmd> set fwd csum testpmd> set txpkts 64,128,256,512 @@ -368,11 +510,27 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue testpmd> show port stats all testpmd> stop -13. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +Test Case 11: Loopback packed ring inorder non-mergeable path multi-queues payload check with server mode and dsa dpdk driver +----------------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring +inorder non-mergeable path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + + +1. bind 2 dsa port to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 + +2. Launch vhost:: -14. Quit and relaunch vhost and rerun step 11-13. + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -15. Quit and relaunch virtio with packed ring inorder non-mergeable path as below:: +3. 
Launch virtio-user with packed ring inorder non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \ @@ -380,9 +538,42 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue testpmd>set fwd csum testpmd>start -16. Rerun step 11-14. +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +Test Case 12: Loopback packed ring vectorized path multi-queues payload check with server mode and dsa dpdk driver +------------------------------------------------------------------------------------------------------------------ +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring +vectorized path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa dpdk driver. + + +1. bind 2 dsa port to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -17. Quit and relaunch virtio with packed ring vectorized path as below:: +3. Launch virtio-user with packed ring vectorized path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,server=1 \ @@ -390,9 +581,42 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue testpmd>set fwd csum testpmd>start -18. Rerun step 11-14. +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. 
Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +Test Case 13: Loopback packed ring vectorized path and ring size is not power of 2 multi-queues payload check with server mode and dsa dpdk driver +-------------------------------------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring +vectorized path multi-queues with server mode and ring size is not power of 2 when vhost uses the asynchronous operations with dsa dpdk driver. + + +1. bind 2 dsa port to vfio-pci like common step 1:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 + ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -19. Quit and relaunch virtio with packed ring vectorized path and ring size is not power of 2 as below:: +3. Launch virtio-user with packed ring vectorized path and ring size is not power of 2:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \ @@ -400,28 +624,25 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue testpmd>set fwd csum testpmd>start -20. Rerun step 11-14. - -21. Quit and relaunch vhost with diff channel:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:ec:01.0-q1] +4. Attach pdump secondary process to primary process by same file-prefix:: -22. Rerun steps 11-14. + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -23. Quit and relaunch vhost w/ iova=pa:: +5. 
Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:e7:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=pa -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@0000:e7:01.0-q1,lcore11@0000:e7:01.0-q3] + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop -24. Rerun steps 11-14. +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. -Test Case 5: Loopback split ring server mode large chain packets stress test with dsa kernel driver ---------------------------------------------------------------------------------------------------- +Test Case 14: Loopback split ring server mode large chain packets stress test with dsa kernel driver +---------------------------------------------------------------------------------------------------- This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user split ring with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver. @@ -430,14 +651,14 @@ when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 ./usertools/dpdk-devbind.py -b idxd 6a:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0 ls /dev/dsa #check wq configure success 2. Launch vhost:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --no-pci \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=45535 --lcore-dma=[lcore3@wq0.2] + --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.0],client=1' \ + --iova=va -- -i --nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=65535 3. launch virtio and start testpmd:: @@ -448,11 +669,11 @@ when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel 4. Send large pkts from vhost:: - testpmd>set txpkts 45535,45535,45535,45535,45535 + testpmd>set txpkts 65535,65535 testpmd>start tx_first 32 testpmd>show port stats all -Test Case 6: Loopback packed ring server mode large chain packets stress test with dsa kernel driver +Test Case 15: Loopback packed ring server mode large chain packets stress test with dsa kernel driver ----------------------------------------------------------------------------------------------------- This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user packed ring with server mode when vhost uses the asynchronous operations with dsa kernel driver. @@ -468,8 +689,8 @@ when vhost uses the asynchronous operations with dsa kernel driver. 2. Launch vhost:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' \ - --iova=va -- -i --nb-cores=1 --mbuf-size=45535 --lcore-dma=[lcore3@wq0.0] + --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@wq0.0;rxq0@wq0.0],client=1' \ + --iova=va -- -i --nb-cores=1 --mbuf-size=65535 3. 
launch virtio and start testpmd:: @@ -480,14 +701,14 @@ when vhost uses the asynchronous operations with dsa kernel driver. 4. Send large pkts from vhost and check the stats:: - testpmd>set txpkts 45535,45535,45535,45535,45535 + testpmd>set txpkts 65535,65535 testpmd>start tx_first 32 testpmd>show port stats all -Test Case 7: Loopback split ring all path server mode and multi-queues payload check with dsa kernel driver -------------------------------------------------------------------------------------------------------------- +Test Case 16: Loopback split ring inorder mergeable path multi-queues payload check with server mode and dsa kernel driver +-------------------------------------------------------------------------------------------------------------------------- This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring -all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver. +inorder mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. 1. bind 2 dsa port to idxd like common step 2:: @@ -501,11 +722,10 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue 2. Launch vhost:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq0.2,lcore14@wq0.1,lcore14@wq0.2,lcore15@wq0.1,lcore15@wq0.2] + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -3. Launch virtio-user with split ring mergeable inorder path:: +3. Launch virtio-user with split ring inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \ @@ -530,29 +750,102 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue 6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. -7. Quit and relaunch vhost and rerun step 4-6. +7. Quit and relaunch vhost w/ diff channel:: -8. Quit and relaunch virtio with split ring mergeable path as below:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start +8. Rerun step 4-6. -9. Rerun steps 4-7. 
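Note: the expected lengths used in the pcap checks (6192 and 960 Byte) should simply be the sum of the txpkts segment sizes, since the segments are chained into one packet on the wire. A quick sanity sketch in plain Python (illustration only)::

    # "set txpkts 64,64,64,2000,2000,2000" -> mergeable-path payload check
    print(sum([64, 64, 64, 2000, 2000, 2000]))   # 6192
    # "set txpkts 64,128,256,512" -> non-mergeable/vectorized payload check
    print(sum([64, 128, 256, 512]))              # 960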
+Test Case 17: Loopback split ring mergeable path multi-queues payload check with server mode and dsa kernel driver +------------------------------------------------------------------------------------------------------------------ +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. bind 2 dsa port to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -10. Quit and relaunch virtio with split ring non-mergeable path as below:: +3. Launch virtio-user with split ring mergeable path:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \ + --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \ + -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + testpmd> set fwd csum + testpmd> start + +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd csum + testpmd> set txpkts 64,64,64,2000,2000,2000 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. + +7. Quit and relaunch vhost w/ diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8. Rerun step 4-6. + +Test Case 18: Loopback split ring non-mergeable path multi-queues payload check with server mode and dsa kernel driver +---------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +non-mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. 
bind 2 dsa port to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with split ring non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1 \ -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start -11. Rerun step 4. +4. Attach pdump secondary process to primary process by same file-prefix:: -12. Send pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: testpmd> set fwd csum testpmd> set txpkts 64,128,256,512 @@ -561,67 +854,149 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue testpmd> show port stats all testpmd> stop -13. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. -14. Quit and relaunch vhost and rerun step 11-13. +7. Quit and relaunch vhost w/ diff channel:: -15. Quit and relaunch virtio with split ring inorder non-mergeable path as below:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8. Rerun step 4-6. + +Test Case 19: Loopback split ring inorder non-mergeable path multi-queues payload check with server mode and dsa kernel driver +------------------------------------------------------------------------------------------------------------------------------ +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +inorder non-mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. 
bind 2 dsa port to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with split ring inorder non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \ -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start -16. Rerun step 11-14. +4. Attach pdump secondary process to primary process by same file-prefix:: -17. Quit and relaunch virtio with split ring vectorized path as below:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +7. Quit and relaunch vhost w/ diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8. Rerun step 4-6. + +Test Case 20: Loopback split ring vectorized path multi-queues payload check with server mode and dsa kernel driver +------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring +vectorized path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. bind 2 dsa port to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. 
Launch virtio-user with split ring vectorized path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1 \ -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: -18. Rerun step 11-14. + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. -19. Quit and relaunch vhost with diff channel:: +7. Quit and relaunch vhost w/ diff channel:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq1.0,lcore14@wq0.1,lcore14@wq1.0,lcore15@wq0.1,lcore15@wq1.0] + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -20. Rerun steps 11-14. +8. Rerun step 4-6. -Test Case 8: Loopback packed ring all path server mode and multi-queues payload check with dsa kernel driver -------------------------------------------------------------------------------------------------------------- +Test Case 21: Loopback packed ring inorder mergeable path multi-queues payload check with server mode and dsa kernel driver +--------------------------------------------------------------------------------------------------------------------------- This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring -all path multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver. +inorder mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. 1. bind 8 dsa port to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 ls /dev/dsa #check wq configure success 2. 
Launch vhost:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3] + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -3. Launch virtio-user with packed ring mergeable inorder path:: +3. Launch virtio-user with packed ring inorder mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start 4. Attach pdump secondary process to primary process by same file-prefix:: @@ -640,29 +1015,102 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue 6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. -7. Quit and relaunch vhost and rerun step 4-6. +7. Quit and relaunch vhost w/ diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -8. Quit and relaunch virtio with packed ring mergeable path as below:: +8.Rerun step 4-6. + +Test Case 22: Loopback packed ring mergeable path multi-queues payload check with server mode and dsa kernel driver +------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring +mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. bind 8 dsa port to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with packed ring mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. 
Attach pdump secondary process to primary process by same file-prefix:: -9. Rerun steps 4-7. + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -10. Quit and relaunch virtio with packed ring non-mergeable path as below:: +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd csum + testpmd> set txpkts 64,64,64,2000,2000,2000 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 6192 Byte in the pcap file and the payload in receive packets are same. + +7. Quit and relaunch vhost w/ diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8.Rerun step 4-6. + +Test Case 23: Loopback packed ring non-mergeable path multi-queues payload check with server mode and dsa kernel driver +----------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring +non-mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. bind 8 dsa port to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +3. Launch virtio-user with packed ring non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start -11. Rerun step 4. +4. Attach pdump secondary process to primary process by same file-prefix:: -12. Send pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. 
Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: testpmd> set fwd csum testpmd> set txpkts 64,128,256,512 @@ -671,59 +1119,183 @@ all path multi-queues with server mode when vhost uses the asynchronous enqueue testpmd> show port stats all testpmd> stop -13. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +7. Quit and relaunch vhost w/ diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8.Rerun step 4-6. + +Test Case 24: Loopback packed ring inorder non-mergeable path multi-queues payload check with server mode and dsa kernel driver +------------------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring +inorder non-mergeable path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. bind 8 dsa port to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost:: -14. Quit and relaunch vhost and rerun step 11-13. + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -15. Quit and relaunch virtio with packed ring inorder non-mergeable path as below:: +3. Launch virtio-user with packed ring inorder non-mergeable path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +7. 
Quit and relaunch vhost w/ diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8.Rerun step 4-6. + +Test Case 25: Loopback packed ring vectorized path multi-queues payload check with server mode and dsa kernel driver +-------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring +vectorized path multi-queues with server mode when vhost uses the asynchronous operations with dsa kernel driver. + +1. bind 8 dsa port to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success -16. Rerun step 11-14. +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -17. Quit and relaunch virtio with packed ring vectorized path as below:: +3. Launch virtio-user with packed ring vectorized path:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,server=1 \ -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start + testpmd> set fwd csum + testpmd> start + +4. Attach pdump secondary process to primary process by same file-prefix:: -18. Rerun step 11-14. + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. + +7. Quit and relaunch vhost w/ diff channel:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + +8.Rerun step 4-6. 
+ +Test Case 26: Loopback packed ring vectorized path and ring size is not power of 2 multi-queues payload check with server mode and dsa kernel driver +---------------------------------------------------------------------------------------------------------------------------------------------------- +This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user packed ring +vectorized path multi-queues with server mode and ring size is not power of 2 when vhost uses the asynchronous operations with dsa kernel driver. + +1. bind 8 dsa port to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + ls /dev/dsa #check wq configure success + +2. Launch vhost:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -19. Quit and relaunch virtio with packed ring vectorized path and ring size is not power of 2 as below:: +3. Launch virtio-user with packed ring vectorized path and ring size is not power of 2:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \ - -- -i --nb-cores=2 --rxq=8 --txq=8 --txd=1025 --rxd=1025 + -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1025 --rxd=1025 testpmd>set fwd csum testpmd>start -20. Rerun step 11-14. +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \ + --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-0.pcap,mbuf-size=8000' \ + --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' + +5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: + + testpmd> set fwd csum + testpmd> set txpkts 64,128,256,512 + testpmd> set burst 1 + testpmd> start tx_first 1 + testpmd> show port stats all + testpmd> stop + +6. Quit pdump, check all the packets length are 960 Byte in the pcap file and the payload in receive packets are same. -21. Quit and relaunch vhost with diff channel:: +7. Quit and relaunch vhost w/ diff channel:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore11@wq0.0,lcore11@wq1.0,lcore12@wq0.1,lcore12@wq1.1,lcore13@wq0.2,lcore13@wq1.2,lcore14@wq0.3,lcore14@wq1.3] + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;txq4@wq0.4;txq5@wq0.5;rxq2@wq1.2;rxq3@wq1.3;rxq4@wq1.4;rxq5@wq1.5;rxq6@wq1.6;rxq7@wq1.7]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 -22. Rerun steps 3-6. +8.Rerun step 4-6. 
-Test Case 9: Loopback split and packed ring server mode multi-queues and mergeable path payload check with dsa dpdk and kernel driver --------------------------------------------------------------------------------------------------------------------------------------- -This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split and packed ring -multi-queues with server mode when vhost uses the asynchronous enqueue and dequeue operations with dsa kernel driver. +Test Case 27: PV split and packed ring server mode test txonly mode with dsa dpdk and kernel driver +--------------------------------------------------------------------------------------------------- 1. bind 2 dsa device to idxd and 2 dsa device to vfio-pci like common step 1-2:: ls /dev/dsa #check wq configure, reset if exist ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 f1:01.0 f6:01.0 - ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 f1:01.0 f6:01.0 + ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 @@ -731,17 +1303,16 @@ multi-queues with server mode when vhost uses the asynchronous enqueue and deque 2. Launch vhost:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \ - --iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \ - --lcore-dma=[lcore3@wq0.0,lcore3@wq0.1,lcore3@wq1.0,lcore3@wq1.1,lcore3@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q2,lcore3@0000:f6:01.0-q3] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@0000:f1:01.0-q0;txq5@0000:f1:01.0-q0;rxq2@0000:f1:01.0-q1;rxq3@0000:f1:01.0-q1;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \ + --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 3. Launch virtio-user with split ring mergeable inorder path:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4-5 -n 4 --file-prefix=virtio-user --no-pci \ + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 --file-prefix=virtio-user --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum + -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + testpmd>set fwd rxonly testpmd>start 4. Attach pdump secondary process to primary process by same file-prefix:: @@ -750,30 +1321,20 @@ multi-queues with server mode when vhost uses the asynchronous enqueue and deque --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=/tmp/pdump-virtio-rx-0.pcap,mbuf-size=8000' --pdump 'device_id=net_virtio_user0,queue=3,rx-dev=./pdump-virtio-rx-3.pcap,mbuf-size=8000' -5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets:: +5. 
Send large pkts from vhost:: - testpmd>set fwd csum - testpmd>set txpkts 64,64,64,2000,2000,2000 - testpmd>set burst 1 - testpmd>start tx_first 1 - testpmd>show port stats all - testpmd>stop + testpmd> set fwd txonly + testpmd> async_vhost tx poll completed on + testpmd>set txpkts 64,64,64,2000,2000,2000 + testpmd>set burst 1 + testpmd>start tx_first 1 + testpmd>stop -6. Quit pdump and chcek all the packets length is 6192 and the payload of all packets are same in the pcap file. +6. Quit pdump, check all the packets length are 6192 Byte in the pcap file. 7. Quit and relaunch vhost and rerun step 4-6. -8. Quit and relaunch virtio with split ring mergeable path as below:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4-5 -n 4 --file-prefix=virtio-user --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -9. Stop vhost and rerun step 4-7. - -10. Quit and relaunch virtio with packed ring mergeable inorder path as below:: +8. Quit and relaunch virtio with packed ring mergeable inorder path as below:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4-5 -n 4 --file-prefix=virtio-user --no-pci \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \ @@ -781,14 +1342,4 @@ multi-queues with server mode when vhost uses the asynchronous enqueue and deque testpmd>set fwd csum testpmd>start -11. Stop vhost and rerun step 4-7. - -12. Quit and relaunch virtio with packed ring mergeable path as below:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 4-5 -n 4 --file-prefix=virtio-user --no-pci \ - --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \ - -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 - testpmd>set fwd csum - testpmd>start - -13. Stop vhost and rerun step 4-7. +9. Stop vhost and rerun step 4-7. 
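
Note: the length and payload checks described in steps 5-6 of the test cases above can be scripted rather than inspected by hand. Below is a minimal sketch using scapy (not part of the submitted suite); the pcap file names are taken from the pdump command in step 4, and the expected length follows from the txpkts chain, e.g. 64+128+256+512 = 960 bytes or 64+64+64+2000+2000+2000 = 6192 bytes::

    from scapy.all import rdpcap, raw

    def check_capture(pcap_file, expect_len):
        """Verify every captured packet has the expected length and an identical payload."""
        pkts = rdpcap(pcap_file)
        assert len(pkts) > 0, "no packets captured on this queue"
        # use the first packet's payload as the reference for the payload check
        reference = raw(pkts[0].payload)
        for i, pkt in enumerate(pkts):
            assert len(pkt) == expect_len, "packet %d length %d != %d" % (i, len(pkt), expect_len)
            assert raw(pkt.payload) == reference, "payload of packet %d differs" % i

    # e.g. for the 64,128,256,512 chain: expect 960-byte packets on each dumped queue
    check_capture("./pdump-virtio-rx-0.pcap", 960)
    check_capture("./pdump-virtio-rx-3.pcap", 960)
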
From patchwork Fri Nov 11 07:52:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 119778 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2EBC1A0542; Fri, 11 Nov 2022 08:59:07 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 28ACB410F2; Fri, 11 Nov 2022 08:59:07 +0100 (CET) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 14F0840150 for ; Fri, 11 Nov 2022 08:59:04 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1668153545; x=1699689545; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=3Wubu3PbQ90tswI37vPNTnGXMSIKlydZah0jtUWZ6tM=; b=FGfNoJFNebf1c4tvWU7jPCE91vuIvbFIBJ7I/DMegzHKUCSJeweDzuqC snEp5arEjf5khxPadHG13RDqeDb6sH9mXE2Txe6w692vA+IH+rY2+JHgX Lwwcy+omLb0KtvjtPA+QryCc689GDGlnBPwi+/0ao9C9jzNSyKhG0IJjx kMVfUKvENX45a2Wp17VrL86yL8Tsq9/+6+3rCh0qexyaX4dgg+Kr2Ky6X GJUbY7O6Ip4hecSYMehPIkEcAvAjKYT1PNDn1HaM03gOSgN9cKFm/7tHh hWe1lADvbNog3uN+aT5pEz0N930CqVBxTa8B2ZaE60w61KZalhkhYDmDf Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="397846187" X-IronPort-AV: E=Sophos;i="5.96,156,1665471600"; d="scan'208";a="397846187" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 23:59:04 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="631959525" X-IronPort-AV: E=Sophos;i="5.96,156,1665471600"; d="scan'208";a="631959525" Received: from unknown (HELO localhost.localdomain) ([10.239.252.222]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 23:59:01 -0800 From: Wei Ling To: dts@dpdk.org Cc: Wei Ling Subject: [dts][PATCH V1 2/2] tests/loopback_virtio_user_server_mode_dsa: add new testsuite Date: Fri, 11 Nov 2022 15:52:22 +0800 Message-Id: <20221111075222.2425522-1-weix.ling@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 X-BeenThere: dts@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org Upstream the new testsuite TestSuite_loopback_virtio_user_server_mode_dsa.py. Signed-off-by: Wei Ling --- ...te_loopback_virtio_user_server_mode_dsa.py | 1935 +++++++++++++++++ 1 file changed, 1935 insertions(+) create mode 100644 tests/TestSuite_loopback_virtio_user_server_mode_dsa.py diff --git a/tests/TestSuite_loopback_virtio_user_server_mode_dsa.py b/tests/TestSuite_loopback_virtio_user_server_mode_dsa.py new file mode 100644 index 00000000..7d5e9f9f --- /dev/null +++ b/tests/TestSuite_loopback_virtio_user_server_mode_dsa.py @@ -0,0 +1,1935 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2022 Intel Corporation +# + +""" +DPDK Test suite. +Test loopback virtio-user server mode +""" +import re +import time + +from framework.packet import Packet +from framework.pmd_output import PmdOutput +from framework.test_case import TestCase + +from .dsadev_common import DsaDev_common as DC + + +class TestLoopbackVirtioUserServerModeDsa(TestCase): + def set_up_all(self): + """ + Run at the start of each test suite. 
+ """ + self.dut_ports = self.dut.get_ports() + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.core_list = self.dut.get_core_list(config="all", socket=self.ports_socket) + self.vhost_core_list = self.core_list[0:9] + self.virtio0_core_list = self.core_list[10:15] + self.path = self.dut.apps_name["test-pmd"] + self.testpmd_name = self.path.split("/")[-1] + self.app_pdump = self.dut.apps_name["pdump"] + self.dump_pcap_q0 = "/root/pdump-rx-q0.pcap" + self.dump_pcap_q1 = "/root/pdump-rx-q1.pcap" + self.device_str = None + self.cbdma_dev_infos = [] + self.vhost_user = self.dut.new_session(suite="vhost_user") + self.virtio_user = self.dut.new_session(suite="virtio-user") + self.pdump_session = self.dut.new_session(suite="pdump") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.virtio_user_pmd = PmdOutput(self.dut, self.virtio_user) + self.DC = DC(self) + + def set_up(self): + """ + Run before each test case. + """ + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.dut.send_expect("rm -rf ./vhost-net*", "#") + self.table_header = [ + "Mode", + "Pkt_size", + "Throughput(Mpps)", + "Queue Number", + "Cycle", + ] + self.result_table_create(self.table_header) + self.use_dsa_list = [] + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + + @property + def check_2M_env(self): + out = self.dut.send_expect( + "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " + ) + return True if out == "2048" else False + + def send_6192_packets_from_vhost(self, set_csum=True): + """ + start the testpmd of vhost-user, start to send 8k packets + """ + if set_csum: + self.vhost_user_pmd.execute_cmd("set fwd csum") + self.vhost_user_pmd.execute_cmd("set txpkts 64,64,64,2000,2000,2000") + self.vhost_user_pmd.execute_cmd("set burst 1") + self.vhost_user_pmd.execute_cmd("start tx_first 1") + self.vhost_user_pmd.execute_cmd("stop") + + def send_960_packets_from_vhost(self): + """ + start the testpmd of vhost-user, start to send 8k packets + """ + self.vhost_user_pmd.execute_cmd("set fwd csum") + self.vhost_user_pmd.execute_cmd("set txpkts 64,128,256,512") + self.vhost_user_pmd.execute_cmd("set burst 1") + self.vhost_user_pmd.execute_cmd("start tx_first 1") + self.vhost_user_pmd.execute_cmd("stop") + + def send_chain_packets_from_vhost(self): + self.vhost_user_pmd.execute_cmd("set txpkts 65535,65535") + self.vhost_user_pmd.execute_cmd("start tx_first 32", timeout=30) + + def verify_virtio_user_receive_packets(self): + results = 0.0 + time.sleep(3) + for _ in range(10): + out = self.virtio_user_pmd.execute_cmd("show port stats all") + lines = re.search("Rx-pps:\s*(\d*)", out) + result = lines.group(1) + results += float(result) + Mpps = results / (1000000 * 10) + self.logger.info(Mpps) + self.verify(Mpps > 0, "virtio-user can not receive packets") + + def launch_pdump_to_capture_pkt(self): + command = ( + self.app_pdump + + " " + + "-v --file-prefix=virtio-user0 -- " + + "--pdump 'device_id=net_virtio_user0,queue=0,rx-dev=%s,mbuf-size=8000' " + + "--pdump 'device_id=net_virtio_user0,queue=1,rx-dev=%s,mbuf-size=8000'" + ) + self.pdump_session.send_expect( + command % (self.dump_pcap_q0, self.dump_pcap_q1), "Port" + ) + + def check_packet_payload_valid(self, pkt_len, check_payload=True): + self.pdump_session.send_expect("^c", "# ", 60) + dump_file_list = [self.dump_pcap_q0, self.dump_pcap_q1] + for pcap in dump_file_list: + self.dut.session.copy_file_from(src="%s" % pcap, dst="%s" % pcap) + pkt = Packet() + pkts = pkt.read_pcapfile(pcap) + 
expect_data = str(pkts[0]["Raw"]) + for i in range(len(pkts)): + self.verify( + len(pkts[i]) == pkt_len, + "virtio-user0 receive packet's length not equal %s Byte" % pkt_len, + ) + if check_payload: + check_data = str(pkts[i]["Raw"]) + self.verify( + check_data == expect_data, + "the payload in receive packets has been changed from %s" % i, + ) + + def start_vhost_testpmd( + self, + cores, + eal_param="", + param="", + no_pci=False, + ports="", + port_options="", + iova_mode="va", + ): + if iova_mode: + eal_param += " --iova=" + iova_mode + if not no_pci and port_options != "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + port_options=port_options, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + elif not no_pci and port_options == "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + else: + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=no_pci, + prefix="vhost", + fixed_prefix=True, + ) + + def start_virtio_testpmd_with_vhost_net0( + self, cores, eal_param, param, set_fwd_csum=True + ): + """ + launch the testpmd as virtio with vhost_net0 + """ + if self.check_2M_env: + eal_param += " --single-file-segments" + self.virtio_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="virtio-user0", + fixed_prefix=True, + ) + if set_fwd_csum: + self.virtio_user_pmd.execute_cmd("set fwd csum") + self.virtio_user_pmd.execute_cmd("start") + + def test_loopback_split_server_mode_large_chain_packets_stress_test_with_dpdk_driver( + self, + ): + """ + Test Case 1: Loopback split ring server mode large chain packets stress test with dsa dpdk driver + """ + if not self.check_2M_env: + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = "txq0@%s-q0;" "rxq0@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[%s],client=1'" + % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=65535" + port_options = {self.use_dsa_list[0]: "max_queues=1"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048,server=1" + virtio_param = "--nb-cores=1 --rxq=1 --txq=1 --txd=2048 --rxd=2048" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=False, + ) + self.send_chain_packets_from_vhost() + self.verify_virtio_user_receive_packets() + + def test_loopback_packed_server_mode_large_chain_packets_stress_test_with_dpdk_driver( + self, + ): + """ + Test Case 2: Loopback packed ring server mode large chain packets stress test with dsa dpdk driver + """ + if not self.check_2M_env: + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = "txq0@%s-q0;" "rxq0@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[%s],client=1'" + % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=65535" + port_options = 
{self.use_dsa_list[0]: "max_queues=1"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1" + virtio_param = " --nb-cores=1 --rxq=1 --txq=1 --txd=2048 --rxd=2048" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=False, + ) + self.send_chain_packets_from_vhost() + self.verify_virtio_user_receive_packets() + + def test_loopback_split_inorder_mergeable_multi_queues_payload_check_with_server_mode_and_dpdk_driver( + self, + ): + """ + Test Case 3: Loopback split ring inorder mergeable path multi-queues payload check with server mode and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + def test_loopback_split_mergeable_multi_queues_payload_check_with_server_mode_and_dpdk_driver( + self, + ): + """ + Test Case 4: Loopback split ring mergeable path multi-queues payload check with server mode and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + 
self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + def test_loopback_split_non_mergeable_multi_queues_payload_check_with_server_mode_and_dpdk_driver( + self, + ): + """ + Test Case 5: Loopback split ring non-mergeable path multi-queues payload check with server mode and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1" + virtio_param = ( + "--enable-hw-vlan-strip --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + ) + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + def test_loopback_split_inorder_non_mergeable_multi_queues_payload_check_with_server_mode_and_dpdk_driver( + self, + ): + """ + Test Case 6: Loopback split ring inorder non-mergeable path multi-queues payload check with server mode and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + 
self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + def test_loopback_split_vectorized_multi_queues_payload_check_with_server_mode_and_dpdk_driver( + self, + ): + """ + Test Case 7: Loopback split ring vectorized path multi-queues payload check with server mode and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + def test_loopback_packed_inorder_mergeable_multi_queues_payload_check_with_server_mode_and_dpdk_driver( + self, + ): + """ + Test Case 8: Loopback packed ring inorder mergeable path multi-queues payload check with server mode and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.start_vhost_testpmd( + 
cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + def test_loopback_packed_mergeable_multi_queues_payload_check_with_server_mode_and_dpdk_driver( + self, + ): + """ + Test Case 9: Loopback packed ring mergeable path multi-queues payload check with server mode and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + def test_loopback_packed_non_mergeable_multi_queues_payload_check_with_server_mode_and_dpdk_driver( + self, + ): + """ + Test Case 10: Loopback packed ring non-mergeable path multi-queues payload check with server mode and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + 
eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + def test_loopback_packed_inorder_non_mergeable_multi_queues_payload_check_with_server_mode_and_dpdk_driver( + self, + ): + """ + Test Case 11: Loopback packed ring inorder non-mergeable path multi-queues payload check with server mode and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + def test_loopback_packed_vectorized_multi_queues_payload_check_with_server_mode_and_dpdk_driver( + self, + ): + """ + Test Case 12: Loopback packed ring vectorized path multi-queues payload check with server mode and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + 
param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + def test_loopback_packed_vectorized_not_powerof_2_multi_queues_payload_check_with_server_mode_and_dpdk_driver( + self, + ): + """ + Test Case 13: Loopback packed ring vectorized path and ring size is not power of 2 multi-queues payload check with server mode and dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci" + ) + dmas = ( + "txq0@%s-q0;" + "txq1@%s-q0;" + "txq2@%s-q0;" + "txq3@%s-q0;" + "txq4@%s-q1;" + "txq5@%s-q1;" + "rxq2@%s-q2;" + "rxq3@%s-q2;" + "rxq4@%s-q3;" + "rxq5@%s-q3;" + "rxq6@%s-q3;" + "rxq7@%s-q3" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=8"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,queue_size=1025,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1025 --rxd=1025" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + def test_loopback_split_server_mode_large_chain_packets_stress_test_with_kernel_driver( + self, + ): + """ + Test Case 14: Loopback split ring server mode large chain packets stress test with dsa kernel driver + """ + if not self.check_2M_env: + self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[%s],client=1'" + % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=65535" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048,server=1" + virtio_param = "--nb-cores=1 --rxq=1 --txq=1 --txd=2048 --rxd=2048" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=False, + ) + 
self.send_chain_packets_from_vhost() + self.verify_virtio_user_receive_packets() + + def test_loopback_packed_server_mode_large_chain_packets_stress_test_with_kernel_driver( + self, + ): + """ + Test Case 15: Loopback packed ring server mode large chain packets stress test with dsa kernel driver + """ + if not self.check_2M_env: + self.DC.create_work_queue(work_queue_number=1, dsa_index=0) + dmas = "txq0@wq0.0;rxq0@wq0.0" + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[%s],client=1'" + % dmas + ) + vhost_param = "--nb-cores=1 --txd=1024 --rxd=1024 --mbuf-size=65535" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1" + virtio_param = " --nb-cores=1 --rxq=1 --txq=1 --txd=2048 --rxd=2048" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=False, + ) + self.send_chain_packets_from_vhost() + self.verify_virtio_user_receive_packets() + + def test_loopback_split_inorder_mergeable_multi_queues_payload_check_with_server_mode_and_kernel_driver( + self, + ): + """ + Test Case 16: Loopback split ring inorder mergeable path multi-queues payload check with server mode and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "rxq2@wq1.2;" + "rxq3@wq1.3;" + "rxq4@wq1.4;" + "rxq5@wq1.5;" + "rxq6@wq1.6;" + "rxq7@wq1.7" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + def test_loopback_split_mergeable_multi_queues_payload_check_with_server_mode_and_kernel_driver( + self, + ): + """ + Test Case 17: Loopback split ring mergeable path multi-queues payload check with server mode and dsa kernel driver + """ + 
+        self.DC.create_work_queue(work_queue_number=8, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=8, dsa_index=1)
+        dmas = (
+            "txq0@wq0.0;"
+            "txq1@wq0.0;"
+            "txq2@wq0.0;"
+            "txq3@wq0.0;"
+            "txq4@wq0.1;"
+            "txq5@wq0.1;"
+            "rxq2@wq1.0;"
+            "rxq3@wq1.0;"
+            "rxq4@wq1.1;"
+            "rxq5@wq1.1;"
+            "rxq6@wq1.1;"
+            "rxq7@wq1.1"
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas
+        )
+        vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+        virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1"
+        virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio_eal_param,
+            param=virtio_param,
+            set_fwd_csum=True,
+        )
+        self.launch_pdump_to_capture_pkt()
+        self.send_6192_packets_from_vhost()
+        self.check_packet_payload_valid(pkt_len=6192)
+
+        self.vhost_user_pmd.quit()
+        dmas = (
+            "txq0@wq0.0;"
+            "txq1@wq0.1;"
+            "txq2@wq0.2;"
+            "txq3@wq0.3;"
+            "txq4@wq0.4;"
+            "txq5@wq0.5;"
+            "rxq2@wq1.2;"
+            "rxq3@wq1.3;"
+            "rxq4@wq1.4;"
+            "rxq5@wq1.5;"
+            "rxq6@wq1.6;"
+            "rxq7@wq1.7"
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas
+        )
+        vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+        self.launch_pdump_to_capture_pkt()
+        self.send_6192_packets_from_vhost()
+        self.check_packet_payload_valid(pkt_len=6192)
+
+    def test_loopback_split_non_mergeable_multi_queues_payload_check_with_server_mode_and_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 18: Loopback split ring non-mergeable path multi-queues payload check with server mode and dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=8, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=8, dsa_index=1)
+        dmas = (
+            "txq0@wq0.0;"
+            "txq1@wq0.0;"
+            "txq2@wq0.0;"
+            "txq3@wq0.0;"
+            "txq4@wq0.1;"
+            "txq5@wq0.1;"
+            "rxq2@wq1.0;"
+            "rxq3@wq1.0;"
+            "rxq4@wq1.1;"
+            "rxq5@wq1.1;"
+            "rxq6@wq1.1;"
+            "rxq7@wq1.1"
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas
+        )
+        vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+        virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1"
+        virtio_param = (
+            "--enable-hw-vlan-strip --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024"
+        )
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio_eal_param,
+            param=virtio_param,
+            set_fwd_csum=True,
+        )
+        self.launch_pdump_to_capture_pkt()
+        self.send_960_packets_from_vhost()
+        self.check_packet_payload_valid(pkt_len=960)
+
+        self.vhost_user_pmd.quit()
+        dmas = (
+            "txq0@wq0.0;"
+            "txq1@wq0.1;"
+            "txq2@wq0.2;"
+            "txq3@wq0.3;"
+            "txq4@wq0.4;"
+            "txq5@wq0.5;"
+            "rxq2@wq1.2;"
+            "rxq3@wq1.3;"
+            "rxq4@wq1.4;"
+            "rxq5@wq1.5;"
+            "rxq6@wq1.6;"
+            "rxq7@wq1.7"
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas
+        )
+        vhost_param =
"--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + + def test_loopback_split_inorder_non_mergeable_multi_queues_payload_check_with_server_mode_and_kernel_driver( + self, + ): + """ + Test Case 19: Loopback split ring inorder non-mergeable path multi-queues payload check with server mode and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "rxq2@wq1.2;" + "rxq3@wq1.3;" + "rxq4@wq1.4;" + "rxq5@wq1.5;" + "rxq6@wq1.6;" + "rxq7@wq1.7" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + + def test_loopback_split_vectorized_multi_queues_payload_check_with_server_mode_and_kernel_driver( + self, + ): + """ + Test Case 20: Loopback split ring vectorized path multi-queues payload check with server mode and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.vhost_user_pmd.quit() + dmas = 
( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "rxq2@wq1.2;" + "rxq3@wq1.3;" + "rxq4@wq1.4;" + "rxq5@wq1.5;" + "rxq6@wq1.6;" + "rxq7@wq1.7" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + + def test_loopback_packed_inorder_mergeable_multi_queues_payload_check_with_server_mode_and_kernel_driver( + self, + ): + """ + Test Case 21: Loopback packed ring inorder mergeable path multi-queues payload check with server mode and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "rxq2@wq1.2;" + "rxq3@wq1.3;" + "rxq4@wq1.4;" + "rxq5@wq1.5;" + "rxq6@wq1.6;" + "rxq7@wq1.7" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + def test_loopback_packed_mergeable_multi_queues_payload_check_with_server_mode_and_kernel_driver( + self, + ): + """ + Test Case 22: Loopback packed ring mergeable path multi-queues payload check with server mode and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + virtio_eal_param = 
"--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "rxq2@wq1.2;" + "rxq3@wq1.3;" + "rxq4@wq1.4;" + "rxq5@wq1.5;" + "rxq6@wq1.6;" + "rxq7@wq1.7" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_6192_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=6192) + + def test_loopback_packed_non_mergeable_multi_queues_payload_check_with_server_mode_and_kernel_driver( + self, + ): + """ + Test Case 23: Loopback packed ring non-mergeable path multi-queues payload check with server mode and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "rxq2@wq1.2;" + "rxq3@wq1.3;" + "rxq4@wq1.4;" + "rxq5@wq1.5;" + "rxq6@wq1.6;" + "rxq7@wq1.7" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + + def test_loopback_packed_inorder_non_mergeable_multi_queues_payload_check_with_server_mode_and_kernel_driver( + self, + ): + """ + Test Case 24: Loopback packed ring inorder non-mergeable path multi-queues payload check with server mode and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + 
"txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "rxq2@wq1.2;" + "rxq3@wq1.3;" + "rxq4@wq1.4;" + "rxq5@wq1.5;" + "rxq6@wq1.6;" + "rxq7@wq1.7" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + + def test_loopback_packed_vectorized_multi_queues_payload_check_with_server_mode_and_kernel_driver( + self, + ): + """ + Test Case 25: Loopback packed ring vectorized path multi-queues payload check with server mode and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + virtio_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,packed_vq=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "rxq2@wq1.2;" + "rxq3@wq1.3;" + "rxq4@wq1.4;" + "rxq5@wq1.5;" + "rxq6@wq1.6;" + "rxq7@wq1.7" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + + def 
test_loopback_packed_vectorized_not_powerof_2_multi_queues_payload_check_with_server_mode_and_kernel_driver( + self, + ): + """ + Test Case 26: Loopback packed ring vectorized path and ring size is not power of 2 multi-queues payload check with server mode and dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@wq0.1;" + "txq5@wq0.1;" + "rxq2@wq1.0;" + "rxq3@wq1.0;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1025 --rxd=1025" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio_eal_param, + param=virtio_param, + set_fwd_csum=True, + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + self.vhost_user_pmd.quit() + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.1;" + "txq2@wq0.2;" + "txq3@wq0.3;" + "txq4@wq0.4;" + "txq5@wq0.5;" + "rxq2@wq1.2;" + "rxq3@wq1.3;" + "rxq4@wq1.4;" + "rxq5@wq1.5;" + "rxq6@wq1.6;" + "rxq7@wq1.7" + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + self.launch_pdump_to_capture_pkt() + self.send_960_packets_from_vhost() + self.check_packet_payload_valid(pkt_len=960) + + def test_pv_split_and_packed_server_mode_txonly_mode_with_dpdk_and_kernel_driver( + self, + ): + """ + Test Case 27: PV split and packed ring server mode test txonly mode with dsa dpdk and kernel driver + """ + self.DC.create_work_queue(work_queue_number=8, dsa_index=0) + self.DC.create_work_queue(work_queue_number=8, dsa_index=1) + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci", dsa_index_list=[2, 3] + ) + dmas = ( + "txq0@wq0.0;" + "txq1@wq0.0;" + "txq2@wq0.0;" + "txq3@wq0.0;" + "txq4@%s-q0;" + "txq5@%s-q0;" + "rxq2@%s-q1;" + "rxq3@%s-q1;" + "rxq4@wq1.1;" + "rxq5@wq1.1;" + "rxq6@wq1.1;" + "rxq7@wq1.1" + % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[%s]'" % dmas + ) + vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list[0:1], + port_options=port_options, + iova_mode="va", + ) + virtio_eal_param = "--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1" + virtio_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024" + 
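+        # In the run below, the vhost side is switched to txonly forwarding and
+        # "async_vhost tx poll completed on" is issued (a testpmd command assumed to
+        # come from the DPDK local vhost PMD patch) so that completed asynchronous
+        # DMA copies are polled while vhost transmits to virtio-user.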
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio_eal_param,
+            param=virtio_param,
+            set_fwd_csum=True,
+        )
+        self.launch_pdump_to_capture_pkt()
+        self.vhost_user_pmd.execute_cmd("set fwd txonly")
+        self.vhost_user_pmd.execute_cmd("async_vhost tx poll completed on")
+        self.send_6192_packets_from_vhost(set_csum=False)
+        self.check_packet_payload_valid(pkt_len=6192, check_payload=False)
+
+        self.vhost_user_pmd.quit()
+        vhost_eal_param = "--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1'"
+        vhost_param = "--nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+        self.launch_pdump_to_capture_pkt()
+        self.vhost_user_pmd.execute_cmd("set fwd txonly")
+        self.vhost_user_pmd.execute_cmd("async_vhost tx poll completed on")
+        self.send_6192_packets_from_vhost(set_csum=False)
+        self.check_packet_payload_valid(pkt_len=6192, check_payload=False)
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
+        self.dut.kill_all()
+        self.DC.reset_all_work_queue()
+        self.DC.bind_all_dsa_to_kernel()
+
+    def tear_down_all(self):
+        """
+        Run after the whole test suite.
+        """
+        self.dut.close_session(self.vhost_user)
+        self.dut.close_session(self.virtio_user)