From patchwork Fri Nov 11 08:15:01 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 119783
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 1/2] test_plans/vm2vm_virtio_user_dsa_test_plan: modify the dmas parameter
Date: Fri, 11 Nov 2022 16:15:01 +0800
Message-Id: <20221111081501.2426191-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

1. From DPDK-22.11, the dmas parameter has changed from `lcore-dma=[lcore1@0000:00:04.0]` to
`dmas=[txq0@0000:00:04.0]` by the DPDK local patch, so modify the dmas parameter.
2. Add some new test cases to cover the 2 types of DSA driver: the DPDK driver and the kernel driver.

Signed-off-by: Wei Ling
---
 .../vm2vm_virtio_user_dsa_test_plan.rst       | 1045 +++++++++--------
 1 file changed, 523 insertions(+), 522 deletions(-)

diff --git a/test_plans/vm2vm_virtio_user_dsa_test_plan.rst b/test_plans/vm2vm_virtio_user_dsa_test_plan.rst
index 8f2f7133..cd42798d 100644
--- a/test_plans/vm2vm_virtio_user_dsa_test_plan.rst
+++ b/test_plans/vm2vm_virtio_user_dsa_test_plan.rst
@@ -7,10 +7,17 @@ VM2VM vhost-user/virtio-user with DSA driver test plan
 Description
 ===========
+
 Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way.
-In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
-channels and one DMA channel can be shared by multiple vrings at the same time. From DPDK22.07, Vhost enqueue and dequeue operation with
-DSA driver is supported in both split and packed ring.
+Asynchronous data path is enabled per tx/rx queue, and users need to specify the DMA device used by the tx/rx queue. Each tx/rx queue
+can only use one DMA device, but one DMA device can be shared among multiple tx/rx queues of different vhostpmd ports.
+
+Two PMD parameters are added:
+- dmas: specify the used DMA device for a tx/rx queue (default: no queue enables asynchronous data path)
+- dma-ring-size: DMA ring size (default: 4096)
+
+Here is an example:
+--vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=4096'
 
 This document provides the test plan for testing the following features when Vhost-user using asynchronous data path with
 DSA driver (kernel IDXD driver and DPDK vfio-pci driver) in VM2VM virtio-user topology.
@@ -20,8 +27,8 @@ DSA driver (kernel IDXD driver and DPDK vfio-pci driver) in VM2VM virtio-user to
 the split ring mergeable path use indirect descriptor, the 2000,2000,2000,2000 chain packets will only occupy one ring.
 
 IOMMU impact:
-If iommu off, idxd can work with iova=pa
-If iommu on, kernel dsa driver only can work with iova=va by program IOMMU, can't use iova=pa(fwd not work due to pkts payload wrong).
+If iommu off, idxd can work with iova=va
+If iommu on, kernel dsa driver can only work with iova=va by programming the IOMMU, can't use iova=pa (fwd does not work due to wrong pkts payload).
 
 Note:
 1.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
@@ -104,9 +111,9 @@ Common steps
     Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"
 
 Test Case 1: VM2VM split ring non-mergeable path and multi-queues payload check with dsa dpdk driver
-------------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path
-and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test.
+---------------------------------------------------------------------------------------------------- +This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path +and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. 1. bind 2 dsa device to vfio-pci like common step 1:: @@ -114,10 +121,10 @@ and multi-queues when vhost uses the asynchronous operations with dsa dpdk drive 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:e7:01.0-q1] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q1;rxq1@0000:e7:01.0-q1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. Launch virtio-user1 by below command:: @@ -129,12 +136,12 @@ and multi-queues when vhost uses the asynchronous operations with dsa dpdk drive 4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' 5. 
Launch virtio-user0 and send packets:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096 \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 testpmd>set burst 1 testpmd>set txpkts 64,128,256,512 @@ -155,12 +162,12 @@ and multi-queues when vhost uses the asynchronous operations with dsa dpdk drive testpmd>clear port stats all testpmd>start -8. Relaunch vhost with pa mode by below command:: +8. Relaunch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q3,lcore2@0000:ec:01.0-q3] + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q1;rxq0@0000:e7:01.0-q2;rxq1@0000:e7:01.0-q3]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:ec:01.0-q0;txq1@0000:ec:01.0-q1;rxq0@0000:ec:01.0-q2;rxq1@0000:ec:01.0-q3]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 9. Rerun step 4. @@ -180,25 +187,25 @@ and multi-queues when vhost uses the asynchronous operations with dsa dpdk drive 11. Rerun step 6. 
Test Case 2: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver ---------------------------------------------------------------------------------------------------------------- +------------------------------------------------------------------------------------------------------------ This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring inorder -non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. +non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. -1. bind 3 dsa device to vfio-pci like common step 1:: +1. bind 2 dsa device to vfio-pci like common step 1:: - # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=2 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q0,lcore2@0000:f1:01.0-q0] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=4 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q2;rxq1@0000:e7:01.0-q3]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. 
Launch virtio-user1 by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 testpmd>set fwd rxonly testpmd>start @@ -210,7 +217,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 5. Launch virtio-user0 and send packets:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096 \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 testpmd>set burst 1 testpmd>set txpkts 64,128,256,512 @@ -231,16 +238,16 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations testpmd>clear port stats all testpmd>start -8. Relaunch vhost with pa mode by below command:: +8. 
Relaunch vhost by below command::

-    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0,max_queues=4 -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-    --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q2,lcore2@0000:ec:01.0-q2,lcore2@0000:f1:01.0-q2]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q1;rxq1@0000:e7:01.0-q1]' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:ec:01.0-q0;txq1@0000:ec:01.0-q0;rxq0@0000:ec:01.0-q1;rxq1@0000:ec:01.0-q1]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 9. Rerun step 4.
 
-10. Virtio-user0 send packets::
+10. Virtio-user0 send packets again::
 
     testpmd>set burst 1
     testpmd>set txpkts 64,128,256,512
@@ -256,25 +263,25 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations
 11. Rerun step 6.
 
 Test Case 3: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa dpdk driver
--------------------------------------------------------------------------------------------------------------------------------------------
+-----------------------------------------------------------------------------------------------------------------------------------------
 This case uses testpmd to test the payload is valid and non-indirect descriptor after packets forwarding in vhost-user/virtio-user
-split ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver.
Both iova as VA and PA mode test. +split ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. -1. bind 4 dsa device to vfio-pci like common step 1:: +1. bind 2 dsa device to vfio-pci like common step 1:: - # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 f6:01.0 + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=1 -a 0000:ec:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=3 -a 0000:f6:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@0000:f1:01.0-q2,lcore2@0000:f6:01.0-q3] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx 3. Launch virtio-user1 by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 testpmd>set fwd rxonly testpmd>start @@ -286,7 +293,7 @@ split ring inorder mergeable path and multi-queues when vhost uses the asynchron 5. 
Launch virtio-user0 and send packets:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256 \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 testpmd>set burst 1 testpmd>start tx_first 27 @@ -301,35 +308,35 @@ split ring inorder mergeable path and multi-queues when vhost uses the asynchron 6. Start vhost, then quit pdump and three testpmd, about split virtqueue inorder mergeable path, it use the non-indirect descriptors, the 8k length pkt will occupies 5 ring:2000,2000,2000,2000 will need 4 consequent ring, still need one ring put header. So check 504 packets and 48128 bytes received by virtio-user1 and 502 packets with 64 length and 2 packets with 8K length in pdump-virtio-rx.pcap. -7. Relaunch vhost with pa mode by below command:: +7. 
Relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=1 -a 0000:ec:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=3 -a 0000:f6:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@0000:f1:01.0-q2,lcore2@0000:f6:01.0-q3] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q1;rxq0@0000:e7:01.0-q2;rxq1@0000:e7:01.0-q3]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:ec:01.0-q0;txq1@0000:ec:01.0-q1;rxq0@0000:ec:01.0-q2;rxq1@0000:ec:01.0-q3]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx 8. Rerun step 3-6. Test Case 4: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user -split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. +split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. -1. bind 4 dsa device to vfio-pci like common step 1:: +1. 
bind 2 dsa device to vfio-pci like common step 1:: - # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 f6:01.0 + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=1 -a 0000:ec:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=3 -a 0000:f6:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@0000:f1:01.0-q2,lcore2@0000:f6:01.0-q3] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=1 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q0]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx 3. Launch virtio-user1 by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 testpmd>set fwd rxonly testpmd>start @@ -341,7 +348,7 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous oper 5. 
Launch virtio-user0 and send packets:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 testpmd>set burst 1 testpmd>start tx_first 27 @@ -356,79 +363,79 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous oper 6. Start vhost, then quit pdump and three testpmd, about split virtqueue inorder mergeable path, it use the indirect descriptors, the 8k length pkt will just occupies one ring. So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. -7. Relaunch vhost with pa mode by below command:: +7. Relaunch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=1 -a 0000:ec:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=3 -a 0000:f6:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@0000:f1:01.0-q2,lcore2@0000:f6:01.0-q3] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=2 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q1;rxq0@0000:e7:01.0-q2;rxq1@0000:e7:01.0-q3]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:ec:01.0-q0;txq1@0000:ec:01.0-q1;rxq0@0000:ec:01.0-q2;rxq1@0000:ec:01.0-q3]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 
--txd=256 --rxd=256 --no-flush-rx 8. Rerun step 3-6. Test Case 5: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa dpdk driver ------------------------------------------------------------------------------------------------------------------------------- +--------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring vectorized path -and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. +and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. -1. bind 3 dsa ports to vfio-pci:: +1. bind 2 dsa ports to vfio-pci:: - ls /dev/dsa #check wq configure, reset if exist - # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 - # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0 + ls /dev/dsa #check wq configure, reset if exist + # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 2. 
Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@0000:f1:01.0-q2] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q1;txq1@0000:e7:01.0-q1;rxq1@0000:e7:01.0-q1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. Launch virtio-user1 by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - testpmd>set fwd rxonly - testpmd>start + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start 4. 
Attach pdump secondary process to primary process of virtio-user1 by same file-prefix:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' 5. Launch virtio-user0 and send packets:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - testpmd>set burst 1 - testpmd>set txpkts 64,128,256,512 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set txpkts 64 - testpmd>start tx_first 1 - testpmd>stop + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set burst 1 + testpmd>set txpkts 64,128,256,512 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set txpkts 64 + testpmd>start tx_first 1 + testpmd>stop 6. Start vhost testpmd, quit pdump and check virtio-user1 RX-packets is 566 and RX-bytes is 486016 and 502 packets with 960 length and 64 packets with 64 length in pdump-virtio-rx.pcap. 7. Clear virtio-user1 port stats:: - testpmd>stop - testpmd>clear port stats all - testpmd>start + testpmd>stop + testpmd>clear port stats all + testpmd>start -8. Relaunch vhost with pa mode by below command:: +8. 
Relaunch vhost by below command::

-    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-    --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q3,lcore2@0000:ec:01.0-q4,lcore2@0000:f1:01.0-q3]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q3;txq1@0000:e7:01.0-q3;rxq0@0000:ec:01.0-q1;rxq1@0000:ec:01.0-q1]' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:ec:01.0-q1;txq1@0000:ec:01.0-q1;rxq0@0000:e7:01.0-q3;rxq1@0000:e7:01.0-q3]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 9. Rerun step 4.
 
-10. Virtio-user0 send packets::
+10. Virtio-user0 send packets again::
 
     testpmd>set burst 1
     testpmd>set txpkts 64,128,256,512
@@ -444,20 +451,20 @@ and multi-queues when vhost uses the asynchronous operations with dsa dpdk drive
 11. Rerun step 6.
 
 Test Case 6: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa dpdk driver
-------------------------------------------------------------------------------------------------------
-This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring
-non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test.
+-----------------------------------------------------------------------------------------------------
+This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring
+non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver.
 
-1. bind 3 dsa device to vfio-pci like common step 1::
+1. bind 2 dsa device to vfio-pci like common step 1::
 
-   # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 f1:01.0
+   # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
 
 2. Launch vhost by below command::
 
-    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 \
-    --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1,lcore2@0000:f1:01.0-q2]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q1;rxq1@0000:e7:01.0-q1]' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:ec:01.0-q1;rxq1@0000:ec:01.0-q1]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 3. Launch virtio-user1 by below command::
 
@@ -495,16 +502,16 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations
     testpmd>clear port stats all
     testpmd>start
 
-8. Relaunch vhost with iova=pa by below command::
+8.
Relaunch vhost with iova=va by below command::

-    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 \
-    --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q4,lcore2@0000:ec:01.0-q5,lcore2@0000:f1:01.0-q6]
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q1;rxq1@0000:e7:01.0-q1]' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q2;txq1@0000:e7:01.0-q2;rxq0@0000:e7:01.0-q3;rxq1@0000:e7:01.0-q3]' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
 
 9. Rerun step 4.
 
-10. Virtio-user0 send packets::
+10. Virtio-user0 send packets again::
 
     testpmd>set burst 1
     testpmd>set txpkts 64,128,256,512
@@ -520,9 +527,9 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations
 11. Rerun step 6.
 
 Test Case 7: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver
----------------------------------------------------------------------------------------------------------------
+-------------------------------------------------------------------------------------------------------------
 This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder
-non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test.
+non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver.
 
 1.
bind 4 dsa device to vfio-pci like common step 1:: @@ -530,10 +537,10 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=2 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q0,lcore2@0000:f1:01.0-q1,lcore2@0000:f6:01.0-q1] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:ec:01.0-q0;rxq1@0000:ec:01.0-q1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. Launch virtio-user1 by below command:: @@ -571,16 +578,16 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations testpmd>clear port stats all testpmd>start -8. Relaunch vhost with iova=pa by below command:: +8. 
Relaunch vhost with iova=va by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q5,lcore2@0000:ec:01.0-q6,lcore2@0000:f1:01.0-q5,lcore2@0000:f6:01.0-q6] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q5;txq1@0000:e7:01.0-q6;rxq0@0000:e7:01.0-q5;rxq1@0000:e7:01.0-q6]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:ec:01.0-q5;txq1@0000:ec:01.0-q6;rxq0@0000:ec:01.0-q5;rxq1@0000:ec:01.0-q6]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 9. Rerun step 4. -10. virtio-user0 send packets:: +10. Virtio-user0 send packets again:: testpmd>set burst 1 testpmd>set txpkts 64,128,256,512 @@ -596,20 +603,20 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 11. Rerun step 6. Test Case 8: VM2VM packed ring mergeable path and multi-queues payload check with dsa dpdk driver --------------------------------------------------------------------------------------------------- -This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring -mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test.
+------------------------------------------------------------------------------------------------- +This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring +mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. 1. bind 2 dsa device to vfio-pci like common step 1:: - # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:e7:01.0-q1,lcore2@0000:e7:01.0-q2,lcore2@0000:ec:01.0-q0,lcore2@0000:ec:01.0-q1] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=1 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q0]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. Launch virtio-user1 by below command:: @@ -646,16 +653,16 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with testpmd>clear port stats all testpmd>start -8. Relaunch vhost with iova=pa by below command:: +8. 
Relaunch vhost with iova=va by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q6,lcore2@0000:e7:01.0-q7,lcore2@0000:ec:01.0-q2,lcore2@0000:ec:01.0-q3,lcore2@0000:ec:01.0-q4] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q1;rxq0@0000:e7:01.0-q2;rxq1@0000:e7:01.0-q3]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q4;txq1@0000:e7:01.0-q5;rxq0@0000:e7:01.0-q6;rxq1@0000:e7:01.0-q7]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 9. Rerun step 4. -10. Virtio-user0 send packets:: +10. Virtio-user0 send packets again:: testpmd>set burst 1 testpmd>set txpkts 64,256,2000,64,256,2000 @@ -669,9 +676,9 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 11. Rerun step 6. Test Case 9: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------ +--------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder -mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test. +mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. 1.
bind 4 dsa device to vfio-pci like common step 1:: @@ -682,9 +689,9 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 2. Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q4,lcore2@0000:ec:01.0-q5,lcore2@0000:f1:01.0-q6,lcore2@0000:f6:01.0-q7] + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q1;txq1@0000:e7:01.0-q1;rxq1@0000:e7:01.0-q1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. Launch virtio-user1 by below command:: @@ -720,16 +727,16 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with testpmd>clear port stats all testpmd>start -8. Relaunch vhost with iova=pa by below command:: +8. 
Relaunch vhost with iova=va by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 -a 0000:f1:01.0 -a 0000:f6:01.0 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:e7:01.0-q4,lcore2@0000:ec:01.0-q5,lcore2@0000:f1:01.0-q6,lcore2@0000:f6:01.0-q7] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0 -a 0000:ec:01.0 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q1;rxq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:ec:01.0-q0;txq1@0000:ec:01.0-q1;rxq0@0000:ec:01.0-q0;rxq1@0000:ec:01.0-q1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 9. Rerun step 4. -10. virtio-user0 send packets:: +10. Virtio-user0 send packets again:: testpmd>set burst 1 testpmd>set txpkts 64,256,2000,64,256,2000 @@ -743,21 +750,20 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 11. Rerun step 6. Test Case 10: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa dpdk driver ------------------------------------------------------------------------------------------------------------------------------------- +----------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. -Both iova as VA and PA mode test. 1.
bind 2 dsa device to vfio-pci like common step 1:: - # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 2. Launch vhost by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \ - --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q0;rxq1@0000:e7:01.0-q0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q1;txq1@0000:e7:01.0-q1;rxq0@0000:e7:01.0-q1;rxq1@0000:e7:01.0-q1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx 3. Launch virtio-user1 by below command:: @@ -789,18 +795,106 @@ Both iova as VA and PA mode test. 6. Start vhost, then quit pdump and three testpmd, about packed virtqueue vectorized-tx path, it use the indirect descriptors, the 8k length pkt will just occupies one ring. So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. 
-7.Relaunch vhost with iova=pa by below command:: +7. Relaunch vhost with iova=va by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 -a 0000:ec:01.0,max_queues=2 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \ - --lcore-dma=[lcore2@0000:e7:01.0-q0,lcore2@0000:ec:01.0-q1] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=2 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q1;rxq0@0000:e7:01.0-q2;rxq1@0000:e7:01.0-q3]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:ec:01.0-q0;txq1@0000:ec:01.0-q1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx 8. Rerun step 3-6. -Test Case 11: VM2VM split ring non-mergeable path and multi-queues payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------- -This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring +Test Case 11: VM2VM packed ring vectorized path and payload check test with dsa dpdk driver +------------------------------------------------------------------------------------------- +This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user +packed ring vectorized path and multi-queues when vhost uses the asynchronous operations with dsa dpdk driver. + +1. Bind 2 dsa devices to vfio-pci like common step 2:: + + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 + +2.
Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q1;rxq1@0000:e7:01.0-q1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q1;rxq1@0000:e7:01.0-q1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start + +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' + +5. Launch virtio-user0 and send 8k length packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +6. Start vhost, then quit pdump, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap.
+ +Test Case 12: VM2VM packed ring vectorized path payload check test with ring size is not power of 2 with dsa dpdk driver +------------------------------------------------------------------------------------------------------------------------ +This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user +packed ring vectorized path and multi-queues with ring size is not power of 2 when vhost uses the asynchronous operations with dsa dpdk driver. + +1. Bind 2 dsa device to vfio-pci like common step 2:: + + # ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0 + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:e7:01.0,max_queues=2 \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q1;rxq1@0000:e7:01.0-q1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;rxq0@0000:e7:01.0-q1;rxq1@0000:e7:01.0-q1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097 + testpmd>set fwd rxonly + testpmd>start + +4. Attach pdump secondary process to primary process by same file-prefix:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' + +5. 
Launch virtio-user0 and send 8k length packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097 + testpmd>set burst 32 + testpmd>set txpkts 64 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set burst 1 + testpmd>set txpkts 64,256,2000,64,256,2000 + testpmd>start tx_first 27 + testpmd>stop + +6. Start vhost, then quit pdump, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap. + +Test Case 13: VM2VM split ring non-mergeable path and multi-queues payload check with dsa kernel driver +------------------------------------------------------------------------------------------------------- +This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. 1. bind 1 dsa device to idxd like common step 2:: @@ -815,14 +909,14 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 2. Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1] + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;rxq1@wq0.1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3.
Launch virtio-user1 by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 testpmd>set fwd rxonly testpmd>start @@ -834,7 +928,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 5. Launch virtio-user0 and send packets:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096 \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 testpmd>set burst 1 testpmd>set txpkts 64,128,256,512 @@ -849,38 +943,8 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 6. Start vhost testpmd, quit pdump and check virtio-user1 RX-packets is 566 and RX-bytes is 486016 and 502 packets with 960 length and 64 packets with 64 length in pdump-virtio-rx.pcap. -7. Clear virtio-user1 port stats:: - - testpmd>stop - testpmd>clear port stats all - testpmd>start - -8. Quit and relaunch vhost with diff channel by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.2,lcore2@wq0.3] - -9. Rerun step 4. - -10. 
Virtio-user0 send packets:: - - testpmd>set burst 1 - testpmd>set txpkts 64,128,256,512 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set txpkts 64 - testpmd>start tx_first 1 - testpmd>stop - -11. Rerun step 6. - -Test Case 12: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver ----------------------------------------------------------------------------------------------------------------- +Test Case 14: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver +--------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring inorder non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. @@ -890,20 +954,20 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2 ls /dev/dsa #check wq configure success 2. 
Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq1.0,lcore2@wq1.1] + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.2;rxq1@wq0.3]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. Launch virtio-user1 by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 testpmd>set fwd rxonly testpmd>start @@ -915,7 +979,7 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 5. Launch virtio-user0 and send packets:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096 \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 testpmd>set burst 1 testpmd>set txpkts 64,128,256,512 @@ -930,38 +994,8 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 6. 
Start vhost testpmd, quit pdump and check virtio-user1 RX-packets is 566 and RX-bytes is 486016 and 502 packets with 960 length and 64 packets with 64 length in pdump-virtio-rx.pcap. -7. Clear virtio-user1 port stats:: - - testpmd>stop - testpmd>clear port stats all - testpmd>start - -8. Quit and relaunch vhost with diff channel by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.3,lcore2@wq1.4,lcore2@wq1.5] - -9. Rerun step 4. - -10. virtio-user0 send packets:: - - testpmd>set burst 1 - testpmd>set txpkts 64,128,256,512 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set txpkts 64 - testpmd>start tx_first 1 - testpmd>stop - -11. Rerun step 6. - -Test Case 13: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------------------------------------------- +Test Case 15: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa kernel driver +-------------------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and non-indirect descriptor after packets forwarding in vhost-user/virtio-user split ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. 
@@ -970,20 +1004,20 @@ split ring inorder mergeable path and multi-queues when vhost uses the asynchron ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 ls /dev/dsa #check wq configure success 2. Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq0.3] + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx 3. Launch virtio-user1 by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=4096 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=4096 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 testpmd>set fwd rxonly testpmd>start @@ -995,7 +1029,7 @@ split ring inorder mergeable path and multi-queues when vhost uses the asynchron 5. 
Launch virtio-user0 and send packets:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256 \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 testpmd>set burst 1 testpmd>start tx_first 27 @@ -1010,30 +1044,29 @@ split ring inorder mergeable path and multi-queues when vhost uses the asynchron 6. Start vhost, then quit pdump and three testpmd, about split virtqueue inorder mergeable path, it use the direct descriptors, the 8k length pkt will occupies 5 ring:2000,2000,2000,2000 will need 4 consequent ring, still need one ring put header. So check 504 packets and 48128 bytes received by virtio-user1 and 502 packets with 64 length and 2 packets with 8K length in pdump-virtio-rx.pcap. -Test Case 14: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------------------------------- +Test Case 16: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa kernel driver +-------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd like common step 2:: +1. 
bind 1 dsa device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist - # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 - # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + # ./usertools/dpdk-devbind.py -u 6a:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 ls /dev/dsa #check wq configure success 2. Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq1.1,lcore2@wq1.2,lcore2@wq1.3] + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.0;rxq1@wq0.0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;txq1@wq0.1;rxq0@wq0.1;rxq1@wq0.1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx 3. Launch virtio-user1 by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 testpmd>set fwd rxonly testpmd>start @@ -1045,7 +1078,7 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous oper 5. 
Launch virtio-user0 and send packets:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256 \ -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 testpmd>set burst 1 testpmd>start tx_first 27 @@ -1060,90 +1093,59 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous oper 6. Start vhost, then quit pdump and three testpmd, about split virtqueue inorder mergeable path, it use the indirect descriptors, the 8k length pkt will just occupies one ring. So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. -Test Case 15: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa kernel driver -------------------------------------------------------------------------------------------------------------------------------- +Test Case 17: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa kernel driver +------------------------------------------------------------------------------------------------------------------------------ This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user split ring vectorized path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa ports to idxd:: +1. 
bind 1 dsa port to idxd:: - ls /dev/dsa #check wq configure, reset if exist - # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 - # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 - ls /dev/dsa #check wq configure success + ls /dev/dsa #check wq configure, reset if exist + # ./usertools/dpdk-devbind.py -u 6a:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0 + ls /dev/dsa #check wq configure success 2. Launch vhost by below command:: - #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2] + #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.0;rxq1@wq0.0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.0;rxq1@wq0.0]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3.
Launch virtio-user1 by below command:: - #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - testpmd>set fwd rxonly - testpmd>start + #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set fwd rxonly + testpmd>start 4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix:: - #./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' + #./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' 5. 
Launch virtio-user0 and send packets:: - #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - testpmd>set burst 1 - testpmd>set txpkts 64,128,256,512 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set txpkts 64 - testpmd>start tx_first 1 - testpmd>stop + #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 + testpmd>set burst 1 + testpmd>set txpkts 64,128,256,512 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set txpkts 64 + testpmd>start tx_first 1 + testpmd>stop 6. Start vhost testpmd, quit pdump and check virtio-user1 RX-packets is 566 and RX-bytes is 486016 and 502 packets with 960 length and 64 packets with 64 length in pdump-virtio-rx.pcap. -7. Clear virtio-user1 port stats:: - - testpmd>stop - testpmd>clear port stats all - testpmd>start - -8. Quit and relaunch vhost with diff channel by below command:: - - #./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.3,lcore2@wq1.0,lcore2@wq1.1] - -9. Rerun step 4. - -10. 
virtio-user0 send packets:: - - testpmd>set burst 1 - testpmd>set txpkts 64,128,256,512 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set txpkts 64 - testpmd>start tx_first 1 - testpmd>stop - -11. Rerun step 6. - -Test Case 16: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa kernel driver ---------------------------------------------------------------------------------------------------------- -This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring +Test Case 18: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa kernel driver +-------------------------------------------------------------------------------------------------------- +This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. 1. bind 2 dsa device to idxd like common step 2:: @@ -1158,9 +1160,9 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 2. Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2] + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;rxq1@wq0.1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. 
Launch virtio-user1 by below command:: @@ -1193,38 +1195,8 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 6. Start vhost testpmd, quit pdump and check virtio-user1 RX-packets is 566 and RX-bytes is 486016 and 502 packets with 960 length and 64 packets with 64 length in pdump-virtio-rx.pcap. -7. Clear virtio-user1 port stats:: - - testpmd>stop - testpmd>clear port stats all - testpmd>start - -8. Quit and relaunch vhost with diff channel by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.1,lcore2@wq1.0,lcore2@wq1.1] - -9. Rerun step 4. - -10. Virtio-user0 send packets:: - - testpmd>set burst 1 - testpmd>set txpkts 64,128,256,512 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set txpkts 64 - testpmd>start tx_first 1 - testpmd>stop - -11. Rerun step 6. - -Test Case 17: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver ------------------------------------------------------------------------------------------------------------------- +Test Case 19: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver +---------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder non-mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. 
@@ -1233,16 +1205,16 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 2 ls /dev/dsa #check wq configure success 2. Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq0.3] + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;rxq1@wq1.1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. Launch virtio-user1 by below command:: @@ -1274,55 +1246,25 @@ non-mergeable path and multi-queues when vhost uses the asynchronous operations 6. Start vhost testpmd, quit pdump and check virtio-user1 RX-packets is 566 and RX-bytes is 486016 and 502 packets with 960 length and 64 packets with 64 length in pdump-virtio-rx.pcap. -7. Clear virtio-user1 port stats:: - - testpmd>stop - testpmd>clear port stats all - testpmd>start - -8. 
Quit and relaunch vhost with diff channel by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq1.0,lcore2@wq1.1,lcore2@wq1.2] - -9. Rerun step 4. - -10. virtio-user0 send packets:: - - testpmd>set burst 1 - testpmd>set txpkts 64,128,256,512 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set txpkts 64 - testpmd>start tx_first 1 - -11. Rerun step 6. - -Test Case 18: VM2VM packed ring mergeable path and multi-queues payload check with dsa kernel driver ------------------------------------------------------------------------------------------------------ -This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring +Test Case 20: VM2VM packed ring mergeable path and multi-queues payload check with dsa kernel driver +---------------------------------------------------------------------------------------------------- +This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa device to idxd:: +1. bind 1 dsa device to idxd:: ls /dev/dsa #check wq configure, reset if exist - # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 - # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2 + # ./usertools/dpdk-devbind.py -u 6a:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 ls /dev/dsa #check wq configure success 2. 
Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq1.3,lcore2@wq1.4] + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;txq1@wq0.1;rxq0@wq0.0;rxq1@wq0.0]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. Launch virtio-user1 by below command:: @@ -1352,36 +1294,8 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 6. Start vhost testpmd, quit pdump and check virtio-user1 check 502 packets and 279232 bytes and 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap. -7. Clear virtio-user1 port stats:: - - testpmd>stop - testpmd>clear port stats all - testpmd>start - -8. Quit and relaunch vhost with diff channel by below command:: - - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq1.3,lcore2@wq1.4,lcore2@wq1.5] - -9. Rerun step 4. - -10. Virtio-user0 send packets:: - - testpmd>set burst 1 - testpmd>set txpkts 64,256,2000,64,256,2000 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>set txpkts 64 - testpmd>start tx_first 7 - testpmd>stop - -11. Rerun step 6. 
- -Test Case 19: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa kernel driver -------------------------------------------------------------------------------------------------------------- +Test Case 21: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa kernel driver +------------------------------------------------------------------------------------------------------------ This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. @@ -1390,16 +1304,16 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 2 ls /dev/dsa #check wq configure success 2. Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq1.0] + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq0@wq0.0;rxq1@wq0.0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq1@wq1.0]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. Launch virtio-user1 by below command:: @@ -1429,36 +1343,60 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with 6. 
Start vhost testpmd, quit pdump and check virtio-user1 RX-packets is 502 packets and 279232 bytes and 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap. -7. Clear virtio-user1 port stats:: +Test Case 22: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa kernel driver +------------------------------------------------------------------------------------------------------------------------------------- +This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user +packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. - testpmd>stop - testpmd>clear port stats all +1. Bind 2 dsa device to idxd like common step 2:: + + ls /dev/dsa #check wq configure, reset if exist + # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 2 + ls /dev/dsa #check wq configure success + +2. Launch vhost by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq0@wq1.1;rxq1@wq1.1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx + +3. Launch virtio-user1 by below command:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + testpmd>set fwd rxonly testpmd>start -8. 
Quit and relaunch vhost with diff channel by below command:: - # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore2@wq0.2,lcore2@wq0.3] + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' -9. Rerun step 4. +5. Launch virtio-user0 and send 8k length packets:: -10. virtio-user0 send packets:: + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 testpmd>set burst 1 - testpmd>set txpkts 64,256,2000,64,256,2000 testpmd>start tx_first 27 testpmd>stop testpmd>set burst 32 - testpmd>set txpkts 64 testpmd>start tx_first 7 testpmd>stop + testpmd>set txpkts 2000,2000,2000,2000 + testpmd>start tx_first 1 + testpmd>stop -11. Rerun step 6. +6. Start vhost, then quit pdump and three testpmd, about packed virtqueue vectorized-tx path, it uses the indirect descriptors, so the 8k length pkt will just occupy one ring. +So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap.
-Test Case 20: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa kernel driver --------------------------------------------------------------------------------------------------------------------------------------- +Test Case 23: VM2VM packed ring vectorized path and multi-queues test indirect descriptor and payload check with dsa kernel driver +---------------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous operations with dsa kernel driver. @@ -1467,23 +1405,22 @@ packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous ls /dev/dsa #check wq configure, reset if exist # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 2 ls /dev/dsa #check wq configure success 2. 
Launch vhost by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \ - --lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore11@wq1.0,lcore11@wq1.1] + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq0@wq1.1;rxq1@wq1.1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3. Launch virtio-user1 by below command:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 testpmd>set fwd rxonly testpmd>start @@ -1494,8 +1431,8 @@ packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous 5. 
Launch virtio-user0 and send 8k length packets:: # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 testpmd>set burst 1 testpmd>start tx_first 27 @@ -1507,48 +1444,47 @@ packed ring vectorized-tx path and multi-queues when vhost uses the asynchronous testpmd>start tx_first 1 testpmd>stop -6. Start vhost, then quit pdump and three testpmd, about packed virtqueue vectorized-tx path, it use the indirect descriptors, the 8k length pkt will just occupies one ring. -So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. +6. Start vhost, then quit pdump, check 502 packets and 32128 bytes received by virtio-user1 and 502 packets with 64 length in pdump-virtio-rx.pcap. -Test Case 21: VM2VM split ring mergeable path and multi-queues test indirect descriptor with dsa dpdk and kernel driver ------------------------------------------------------------------------------------------------------------------------- +Test Case 24: VM2VM packed ring vectorized path payload check test with ring size is not power of 2 with dsa kernel driver +-------------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user -split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk and kernel driver.
+packed ring vectorized path and multi-queues with ring size is not power of 2 when vhost uses the asynchronous operations with dsa kernel driver. -1. bind 2 dsa ports to idxd and 2 dsa ports to vfio-pci:: +1. Bind 2 dsa device to idxd like common step 2:: ls /dev/dsa #check wq configure, reset if exist - # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 f6:01.0 - # ./usertools/dpdk-devbind.py -b idxd e7:01.0 ec:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 1 + # ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 + # ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 2 ls /dev/dsa #check wq configure success - # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 2. Launch vhost by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:f1:01.0,max_queues=1 -a 0000:f6:01.0,max_queues=1 \ - --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq1]' \ - --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1;rxq0;rxq1]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx --lcore-dma=[lcore2@wq0.0,lcore2@wq1.0,lcore2@0000:f1:01.0-q0,lcore2@0000:f6:01.0-q0] + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 --no-pci \ + --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq0@wq1.1;rxq1@wq1.1]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx 3.
Launch virtio-user1 by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097 testpmd>set fwd rxonly testpmd>start -4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix:: +4. Attach pdump secondary process to primary process by same file-prefix:: - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' + # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' -5. Launch virtio-user0 and send packets:: +5. 
Launch virtio-user0 and send 8k length packets:: + + # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097 - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 testpmd>set burst 1 testpmd>start tx_first 27 testpmd>stop @@ -1559,30 +1495,79 @@ split ring mergeable path and multi-queues when vhost uses the asynchronous oper testpmd>start tx_first 1 testpmd>stop +6. Start vhost, then quit pdump, check 502 packets and 32128 bytes received by virtio-user1 and 502 packets with 64 length in pdump-virtio-rx.pcap. + +Test Case 25: VM2VM split ring mergeable path and multi-queues test indirect descriptor with dsa dpdk and kernel driver +----------------------------------------------------------------------------------------------------------------------- +This case uses testpmd to test the payload is valid and indirect descriptor after packets forwarding in vhost-user/virtio-user +split ring mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk and kernel driver. + +1. bind 1 dsa port to idxd and 1 dsa port to vfio-pci:: + + ls /dev/dsa #check wq configure, reset if exist + # ./usertools/dpdk-devbind.py -u e7:01.0 f1:01.0 + # ./usertools/dpdk-devbind.py -b idxd e7:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 1 0 + ls /dev/dsa #check wq configure success + # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 + +2.
Launch vhost by below command:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:f1:01.0,max_queues=1 \ + --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq0@wq0.0;rxq1@wq0.0]' \ + --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q0]' \ + --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx + +3. Launch virtio-user1 by below command:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \ + --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + testpmd>set fwd rxonly + testpmd>start + +4. Attach pdump secondary process to primary process of virtio-user1 by same file-prefix:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/tmp/pdump-virtio-rx.pcap,mbuf-size=8000' + +5. Launch virtio-user0 and send packets:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256 \ + -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 + testpmd>set burst 1 + testpmd>start tx_first 27 + testpmd>stop + testpmd>set burst 32 + testpmd>start tx_first 7 + testpmd>stop + testpmd>set txpkts 2000,2000,2000,2000 + testpmd>start tx_first 1 + testpmd>stop + 6. Start vhost, then quit pdump and three testpmd, about split virtqueue inorder mergeable path, it use the indirect descriptors, the 8k length pkt will just occupies one ring. So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap.
-Test Case 22: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk and kernel driver ------------------------------------------------------------------------------------------------------------------------ +Test Case 26: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk and kernel driver +--------------------------------------------------------------------------------------------------------------------- This case uses testpmd to test the payload is valid after packets forwarding in vhost-user/virtio-user packed ring inorder mergeable path and multi-queues when vhost uses the asynchronous operations with dsa dpdk and kernel driver. -1. bind 2 dsa device to vfio-pci and 2 dsa port to idxd like common step 1-2:: +1. bind 1 dsa device to vfio-pci and 1 dsa port to idxd like common step 1-2:: ls /dev/dsa #check wq configure, reset if exist - # ./usertools/dpdk-devbind.py -u e7:01.0 ec:01.0 f1:01.0 f6:01.0 - # ./usertools/dpdk-devbind.py -b idxd e7:01.0 ec:01.0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0 - # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1 + # ./usertools/dpdk-devbind.py -u e7:01.0 f1:01.0 + # ./usertools/dpdk-devbind.py -b idxd e7:01.0 + # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0 ls /dev/dsa #check wq configure success - # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0 + # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 2. 
Launch vhost by below command::

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-   --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-   --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f6:01.0-q1,lcore2@wq0.0,lcore2@wq1.0]
+   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:f1:01.0,max_queues=1 \
+   --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:f1:01.0-q0;txq1@wq0.0;rxq0@0000:f1:01.0-q0;rxq1@wq0.0]' \
+   --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.0;txq1@0000:f1:01.0-q0;rxq0@wq0.0;rxq1@0000:f1:01.0-q0]' \
+   --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx

3. Launch virtio-user1 by below command::

@@ -1612,30 +1597,46 @@ mergeable path and multi-queues when vhost uses the asynchronous operations with

6. Start vhost testpmd, quit pdump and check that virtio-user1 RX-packets is 502 packets and 279232 bytes, with 54 packets of 4640 length and 448 packets of 64 length in pdump-virtio-rx.pcap.

-7. Clear virtio-user1 port stats::
+Test Case 27: VM2VM packed ring vectorized-tx path test batch processing with dsa dpdk and kernel driver
+--------------------------------------------------------------------------------------------------------
+This case uses testpmd to test that one packet can be forwarded in the vhost-user/virtio-user packed ring vectorized-tx path
+when vhost uses asynchronous operations with the dsa dpdk and kernel driver.

-   testpmd>stop
-   testpmd>clear port stats all
-   testpmd>start

+1. bind 1 dsa device to vfio-pci and 1 dsa port to idxd like common step 1-2::

-8.
Quit and relaunch vhost with diff dsa channel by below command::

+   ls /dev/dsa #check wq configure, reset if exist
+   # ./usertools/dpdk-devbind.py -u e7:01.0 f1:01.0
+   # ./usertools/dpdk-devbind.py -b idxd e7:01.0
+   # ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
+   ls /dev/dsa #check wq configure success
+   # ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0

-   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:f1:01.0 -a 0000:f6:01.0 \
-   --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-   --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1;rxq0;rxq1]' \
-   --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx --lcore-dma=[lcore2@0000:f1:01.0-q2,lcore2@0000:f1:01.0-q5,lcore2@0000:f6:01.0-q4,lcore2@wq0.1,lcore2@wq0.3]

+2. Launch vhost by below command::

-9. Rerun step 4.

+   # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -a 0000:f1:01.0,max_queues=1 \
+   --vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@0000:f1:01.0-q0;txq1@wq0.0;rxq0@0000:f1:01.0-q0;rxq1@wq0.0]' \
+   --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.0;txq1@0000:f1:01.0-q0;rxq0@wq0.0;rxq1@0000:f1:01.0-q0]' \
+   --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx

-10. Virtio-user0 send packets::

+3. Launch virtio-user1 by below command::

-   testpmd>set burst 1
-   testpmd>set txpkts 64,256,2000,64,256,2000
-   testpmd>start tx_first 27
-   testpmd>stop
-   testpmd>set burst 32
-   testpmd>set txpkts 64
-   testpmd>start tx_first 7
-   testpmd>stop

+   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \
+   --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
+   -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+   testpmd>set fwd rxonly
+   testpmd>start

-11. Rerun step 6.

+4.
Attach pdump secondary process to primary process by same file-prefix::
+
+   ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
+
+5. Launch virtio-user0 and send 1 packet::
+
+   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
+   --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
+   -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+   testpmd>set burst 1
+   testpmd>start tx_first 1
+   testpmd>stop
+
+6. Start vhost, then quit pdump and all three testpmd instances; check that virtio-user1 received 2 packets and 128 bytes, and that pdump-virtio-rx.pcap contains 2 packets with 64-byte length.

From patchwork Fri Nov 11 08:15:11 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 119784
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 2/2] tests/vm2vm_virtio_user_dsa: add new testsuite
Date: Fri, 11 Nov 2022 16:15:11 +0800
Message-Id: <20221111081511.2426252-1-weix.ling@intel.com>

Upstream the new testsuite TestSuite_vm2vm_virtio_user_dsa.py.
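The dmas parameter exercised throughout this suite maps each tx/rx queue to exactly one DMA channel: either a DPDK-driven DSA queue such as 0000:f1:01.0-q0 or a kernel idxd work queue such as wq0.0. A minimal sketch of composing such a value (hypothetical helper, not part of the testsuite; the device IDs are examples):

```python
def build_dmas(pairs):
    """Join (queue, dma_channel) pairs into the vdev dmas=[...] value."""
    return ";".join("%s@%s" % (queue, channel) for queue, channel in pairs)

# Example mapping: txq0 uses a DPDK DSA queue, rxq1 a kernel idxd work queue.
dmas = build_dmas([("txq0", "0000:f1:01.0-q0"), ("rxq1", "wq0.0")])
print("--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]'" % dmas)
```

One channel may appear for several queues (shared), but each queue names only one channel, matching the per-queue binding described in the test plan.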
Signed-off-by: Wei Ling --- tests/TestSuite_vm2vm_virtio_user_dsa.py | 2070 ++++++++++++++++++++++ 1 file changed, 2070 insertions(+) create mode 100644 tests/TestSuite_vm2vm_virtio_user_dsa.py diff --git a/tests/TestSuite_vm2vm_virtio_user_dsa.py b/tests/TestSuite_vm2vm_virtio_user_dsa.py new file mode 100644 index 00000000..36d8087c --- /dev/null +++ b/tests/TestSuite_vm2vm_virtio_user_dsa.py @@ -0,0 +1,2070 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2022 Intel Corporation +# + +import re + +from framework.packet import Packet +from framework.pmd_output import PmdOutput +from framework.test_case import TestCase + +from .dsadev_common import DsaDev_common as DC + + +class TestVM2VMVirtioUserDsa(TestCase): + def set_up_all(self): + self.dut_ports = self.dut.get_ports() + self.port_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.cores_list = self.dut.get_core_list(config="all", socket=self.port_socket) + self.vhost_core_list = self.cores_list[0:2] + self.virtio0_core_list = self.cores_list[2:4] + self.virtio1_core_list = self.cores_list[4:6] + self.vhost_user = self.dut.new_session(suite="vhost-user") + self.virtio_user0 = self.dut.new_session(suite="virtio-user0") + self.virtio_user1 = self.dut.new_session(suite="virtio-user1") + self.pdump_user = self.dut.new_session(suite="pdump-user") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0) + self.virtio_user1_pmd = PmdOutput(self.dut, self.virtio_user1) + self.path = self.dut.apps_name["test-pmd"] + self.testpmd_name = self.path.split("/")[-1] + self.app_pdump = self.dut.apps_name["pdump"] + self.pdump_name = self.app_pdump.split("/")[-1] + self.dump_virtio_pcap = "/tmp/pdump-virtio-rx.pcap" + self.dump_vhost_pcap = "/tmp/pdump-vhost-rx.pcap" + + self.DC = DC(self) + + def set_up(self): + self.path = self.dut.apps_name["test-pmd"] + self.testpmd_name = self.path.split("/")[-1] + self.app_pdump = 
self.dut.apps_name["pdump"] + self.pdump_name = self.app_pdump.split("/")[-1] + + self.dut.send_expect("rm -rf ./vhost-net*", "#") + self.dut.send_expect("rm -rf %s" % self.dump_virtio_pcap, "#") + self.dut.send_expect("rm -rf %s" % self.dump_vhost_pcap, "#") + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.dut.send_expect("killall -s INT %s" % self.pdump_name, "#") + self.use_dsa_list = [] + + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + + @property + def check_2M_env(self): + out = self.dut.send_expect( + "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " + ) + return True if out == "2048" else False + + def start_vhost_testpmd( + self, + cores, + eal_param="", + param="", + no_pci=False, + ports="", + port_options="", + iova_mode="va", + ): + if iova_mode: + eal_param += " --iova=" + iova_mode + if not no_pci and port_options != "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + port_options=port_options, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + elif not no_pci and port_options == "": + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + else: + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=no_pci, + prefix="vhost", + fixed_prefix=True, + ) + + def start_virtio_testpmd_with_vhost_net1(self, cores, eal_param="", param=""): + """ + launch the testpmd as virtio with vhost_net1 + """ + if self.check_2M_env: + eal_param += " --single-file-segments" + self.virtio_user1_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="virtio-user1", + fixed_prefix=True, + ) + self.virtio_user1_pmd.execute_cmd("set fwd rxonly") + self.virtio_user1_pmd.execute_cmd("start") + + def start_virtio_testpmd_with_vhost_net0(self, cores, eal_param="", param=""): + """ + launch the 
testpmd as virtio with vhost_net0
+        """
+        if self.check_2M_env:
+            eal_param += " --single-file-segments"
+        self.virtio_user0_pmd.start_testpmd(
+            cores=cores,
+            eal_param=eal_param,
+            param=param,
+            no_pci=True,
+            prefix="virtio-user0",
+            fixed_prefix=True,
+        )
+
+    def start_pdump_to_capture_pkt(self):
+        """
+        launch pdump app with dump_port and file_prefix
+        the pdump app should start after testpmd is started:
+        if dumping the vhost-testpmd, the vhost-testpmd should be started before launching pdump;
+        if dumping the virtio-testpmd, the virtio-testpmd should be started before launching pdump
+        """
+        command_line = (
+            self.app_pdump
+            + " -l 1-2 -n 4 --file-prefix=virtio-user1 -v -- "
+            + "--pdump 'device_id=net_virtio_user1,queue=*,rx-dev=%s,mbuf-size=8000'"
+        )
+        self.pdump_user.send_expect(command_line % (self.dump_virtio_pcap), "Port")
+
+    def check_virtio_user1_stats(self, check_dict):
+        """
+        check the virtio-user1 show port stats
+        """
+        out = self.virtio_user1_pmd.execute_cmd("show port stats all")
+        self.logger.info(out)
+        rx_packets = re.search("RX-packets:\s*(\d*)", out)
+        rx_bytes = re.search("RX-bytes:\s*(\d*)", out)
+        rx_num = int(rx_packets.group(1))
+        byte_num = int(rx_bytes.group(1))
+        packet_count = 0
+        byte_count = 0
+        for key, value in check_dict.items():
+            packet_count += value
+            byte_count += key * value
+        self.verify(
+            rx_num == packet_count,
+            "received packet number: {} is not equal to sent: {}".format(
+                rx_num, packet_count
+            ),
+        )
+        self.verify(
+            byte_num == byte_count,
+            "received packet bytes: {} is not equal to sent: {}".format(
+                byte_num, byte_count
+            ),
+        )
+
+    def check_packet_payload_valid(self, check_dict):
+        """
+        check the payload is valid
+        """
+        self.pdump_user.send_expect("^c", "# ", 60)
+        self.dut.session.copy_file_from(
+            src=self.dump_virtio_pcap, dst=self.dump_virtio_pcap
+        )
+        pkt = Packet()
+        pkts = pkt.read_pcapfile(self.dump_virtio_pcap)
+        for key, value in check_dict.items():
+            count = 0
+            for i in range(len(pkts)):
+                if len(pkts[i]) ==
key:
+                    count += 1
+            self.verify(
+                value == count,
+                "pdump file: captured {} packets, not the expected count of length-{} packets".format(count, key),
+            )
+
+    def clear_virtio_user1_stats(self):
+        self.virtio_user1_pmd.execute_cmd("stop")
+        self.virtio_user1_pmd.execute_cmd("clear port stats all")
+        self.virtio_user1_pmd.execute_cmd("start")
+        out = self.virtio_user1_pmd.execute_cmd("show port stats all")
+        self.logger.info(out)
+
+    def send_502_960byte_and_64_64byte_pkts(self):
+        """
+        send 502 960byte and 64 64byte length packets from virtio_user0 testpmd
+        """
+        self.virtio_user0_pmd.execute_cmd("set burst 1")
+        self.virtio_user0_pmd.execute_cmd("set txpkts 64,128,256,512")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 27")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.execute_cmd("set burst 32")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 7")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.execute_cmd("set txpkts 64")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 1")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.vhost_user_pmd.execute_cmd("start")
+        self.vhost_user_pmd.execute_cmd("show port stats all")
+
+    def send_502_64byte_and_64_8000byte_pkts(self):
+        """
+        send 502 64byte and 64 8000byte length packets from virtio_user0 testpmd
+        """
+        self.virtio_user0_pmd.execute_cmd("set burst 1")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 27")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.execute_cmd("set burst 32")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 7")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.execute_cmd("set txpkts 2000,2000,2000,2000")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 1")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.vhost_user_pmd.execute_cmd("start")
+        self.vhost_user_pmd.execute_cmd("show port stats all")
+
+    def send_54_4640byte_and_448_64byte_pkts(self):
+        """
+        send 54 4640byte and 448 64byte length packets from
virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("set txpkts 64,256,2000,64,256,2000") + self.virtio_user0_pmd.execute_cmd("start tx_first 27") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set burst 32") + self.virtio_user0_pmd.execute_cmd("set txpkts 64") + self.virtio_user0_pmd.execute_cmd("start tx_first 7") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("show port stats all") + + def send_448_64byte_and_54_4640byte_pkts(self): + """ + send 448 64byte and 54 4640byte length packets from virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 32") + self.virtio_user0_pmd.execute_cmd("set txpkts 64") + self.virtio_user0_pmd.execute_cmd("start tx_first 7") + self.virtio_user0_pmd.execute_cmd("stop") + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("set txpkts 64,256,2000,64,256,2000") + self.virtio_user0_pmd.execute_cmd("start tx_first 27") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("show port stats all") + + def send_1_64byte_pkts(self): + """ + send 1 64byte length packets from virtio_user0 testpmd + """ + self.virtio_user0_pmd.execute_cmd("set burst 1") + self.virtio_user0_pmd.execute_cmd("start tx_first 1") + self.virtio_user0_pmd.execute_cmd("stop") + self.vhost_user_pmd.execute_cmd("start") + self.vhost_user_pmd.execute_cmd("show port stats all") + + def test_split_non_mergeable_multi_queues_payload_check_with_dpdk_driver( + self, + ): + """ + Test Case 1: VM2VM vhost-user/virtio-user split ring non-mergeable path and multi-queues payload check with dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci" + ) + dmas1 = "txq0@%s-q0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 
= "txq0@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list[0:1], + iova_mode="va", + ) + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + self.clear_virtio_user1_stats() + dmas1 = "txq0@%s-q1;txq1@%s-q1;rxq0@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q1;txq1@%s-q1;rxq1@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + vhost_eal_param = ( + "--vdev 
'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list, + iova_mode="va", + ) + self.start_pdump_to_capture_pkt() + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_inorder_non_mergeable_multi_queues_payload_check_with_dpdk_driver( + self, + ): + """ + Test Case 2: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci" + ) + dmas1 = "txq0@%s-q0;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q2;rxq1@%s-q3" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=4"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list[0:1], + iova_mode="va", + ) + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + 
eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + self.clear_virtio_user1_stats() + dmas1 = "txq0@%s-q1;txq1@%s-q1;rxq0@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q1;txq1@%s-q1;rxq1@%s-q1;rxq1@%s-q1" % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + port_options = { + self.use_dsa_list[0]: "max_queues=2", + self.use_dsa_list[1]: "max_queues=2", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list, + iova_mode="va", + ) + self.start_pdump_to_capture_pkt() + self.send_502_960byte_and_64_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_inorder_mergeable_multi_queues_non_indirect_descriptor_payload_check_with_dpdk_driver( + self, + ): + """ + Test Case 3: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa dpdk driver + """ + 
self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci" + ) + dmas1 = "txq0@%s-q0;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;rxq1@%s-q1" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=2"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list[0:1], + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + self.send_502_64byte_and_64_8000byte_pkts() + check_dict = {64: 502, 8000: 2} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + self.virtio_user0_pmd.execute_cmd("quit", "#", 60) + self.virtio_user1_pmd.execute_cmd("quit", "#", 60) + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q2;rxq1@%s-q3" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + 
self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;txq1@%s-q1;rxq1@%s-q2;rxq1@%s-q3" % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list, + iova_mode="va", + ) + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + self.send_502_64byte_and_64_8000byte_pkts() + check_dict = {64: 502, 8000: 2} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_mergeable_multi_queues_indirect_descriptor_payload_check_with_dpdk_driver( + self, + ): + """ + Test Case 4: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa dpdk driver + """ + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=2, driver_name="vfio-pci" + ) + dmas1 = "txq0@%s-q0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=1"} + 
self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + port_options=port_options, + ports=self.use_dsa_list[0:1], + iova_mode="va", + ) + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + self.send_502_64byte_and_64_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + self.virtio_user0_pmd.execute_cmd("quit", "#", 60) + self.virtio_user1_pmd.execute_cmd("quit", "#", 60) + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q2;rxq1@%s-q3" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@%s-q0;txq1@%s-q1;rxq1@%s-q2;rxq1@%s-q3" % ( + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + self.use_dsa_list[1], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + port_options = { + self.use_dsa_list[0]: "max_queues=4", + self.use_dsa_list[1]: "max_queues=4", + } + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + 
+            port_options=port_options,
+            ports=self.use_dsa_list,
+            iova_mode="va",
+        )
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+        self.start_pdump_to_capture_pkt()
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 10}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_vectorized_multi_queues_payload_check_with_vhost_async_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 5: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci"
+        )
+        dmas1 = "txq0@%s-q0;rxq0@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q1;rxq0@%s-q1;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+        self.vhost_user_pmd.execute_cmd("quit", "#", 60)
+        self.clear_virtio_user1_stats()
+        dmas1 = "txq0@%s-q3;txq1@%s-q3;rxq0@%s-q1;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+        )
+        dmas2 = "txq0@%s-q1;txq1@%s-q1;rxq0@%s-q3;rxq1@%s-q3" % (
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+        port_options = {
+            self.use_dsa_list[0]: "max_queues=4",
+            self.use_dsa_list[1]: "max_queues=4",
+        }
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            port_options=port_options,
+            ports=self.use_dsa_list,
+            iova_mode="va",
+        )
+        self.start_pdump_to_capture_pkt()
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_non_mergeable_multi_queues_payload_check_with_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 6: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci"
+        )
+        dmas1 = "txq0@%s-q1;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q1;rxq1@%s-q1" % (
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+        self.vhost_user_pmd.execute_cmd("quit", "#", 60)
+        self.clear_virtio_user1_stats()
+        dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q0;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q0;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+        port_options = {
+            self.use_dsa_list[0]: "max_queues=4",
+            self.use_dsa_list[1]: "max_queues=4",
+        }
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            port_options=port_options,
+            ports=self.use_dsa_list[0:1],
+            iova_mode="va",
+        )
+        self.start_pdump_to_capture_pkt()
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_inorder_non_mergeable_multi_queues_payload_check_with_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 7: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci"
+        )
+        dmas1 = "txq0@%s-q0;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q0;rxq1@%s-q1" % (
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        port_options = {
+            self.use_dsa_list[0]: "max_queues=2",
+            self.use_dsa_list[1]: "max_queues=2",
+        }
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            port_options=port_options,
+            ports=self.use_dsa_list,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+        self.vhost_user_pmd.execute_cmd("quit", "#", 60)
+        self.clear_virtio_user1_stats()
+        dmas1 = "txq0@%s-q5;txq1@%s-q6;rxq0@%s-q5;rxq1@%s-q6" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q5;txq1@%s-q6;rxq0@%s-q5;rxq1@%s-q6" % (
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list,
+            iova_mode="va",
+        )
+        self.start_pdump_to_capture_pkt()
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_mergeable_multi_queues_payload_check_with_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 8: VM2VM packed ring mergeable path and multi-queues payload check with dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci"
+        )
+        dmas1 = "txq0@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        port_options = {self.use_dsa_list[0]: "max_queues=1"}
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            port_options=port_options,
+            ports=self.use_dsa_list[0:1],
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_54_4640byte_and_448_64byte_pkts()
+        check_dict = {4640: 54, 64: 448}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+        self.vhost_user_pmd.execute_cmd("quit", "#", 60)
+        self.clear_virtio_user1_stats()
+        dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q2;rxq1@%s-q3" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q4;txq1@%s-q5;rxq0@%s-q6;rxq1@%s-q7" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list[0:1],
+            iova_mode="va",
+        )
+        self.start_pdump_to_capture_pkt()
+        self.send_54_4640byte_and_448_64byte_pkts()
+        check_dict = {4640: 54, 64: 448}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_inorder_mergeable_multi_queues_payload_check_with_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 9: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci"
+        )
+        dmas1 = "txq0@%s-q0;rxq0@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q0;txq1@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_54_4640byte_and_448_64byte_pkts()
+        check_dict = {4640: 54, 64: 448}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+        self.vhost_user_pmd.execute_cmd("quit", "#", 60)
+        self.clear_virtio_user1_stats()
+        dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q0;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q0;rxq1@%s-q1" % (
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list,
+            iova_mode="va",
+        )
+        self.start_pdump_to_capture_pkt()
+        self.send_54_4640byte_and_448_64byte_pkts()
+        check_dict = {4640: 54, 64: 448}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_vectorized_tx_multi_queues_indirect_descriptor_payload_check_with_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 10: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci"
+        )
+        dmas1 = "txq0@%s-q0;txq1@%s-q0;rxq0@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q1;txq1@%s-q1;rxq0@%s-q1;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx"
+        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list[0:1],
+            port_options=port_options,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 10}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+        self.virtio_user0_pmd.execute_cmd("quit", "#", 60)
+        self.virtio_user1_pmd.execute_cmd("quit", "#", 60)
+        self.vhost_user_pmd.execute_cmd("quit", "#", 60)
+        dmas1 = "txq0@%s-q0;txq1@%s-q1;rxq0@%s-q2;rxq1@%s-q3" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q0;txq1@%s-q1" % (
+            self.use_dsa_list[1],
+            self.use_dsa_list[1],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+        port_options = {
+            self.use_dsa_list[0]: "max_queues=4",
+            self.use_dsa_list[1]: "max_queues=2",
+        }
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list,
+            port_options=port_options,
+            iova_mode="va",
+        )
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+        self.start_pdump_to_capture_pkt()
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 10}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_vectorized_multi_queues_payload_check_with_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 11: VM2VM packed ring vectorized path and payload check test with dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci"
+        )
+        dmas1 = "txq0@%s-q0;txq1@%s-q0;rxq0@%s-q0;rxq1@%s-q0" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q1;txq1@%s-q1;rxq0@%s-q1;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list[0:1],
+            port_options=port_options,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_448_64byte_and_54_4640byte_pkts()
+        check_dict = {64: 448, 4640: 0}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_vectorized_ringsize_not_powerof_2_multi_queues_payload_check_with_dpdk_driver(
+        self,
+    ):
+        """
+        Test Case 12: VM2VM packed ring vectorized path payload check test with ring size is not power of 2 with dsa dpdk driver
+        """
+        self.use_dsa_list = self.DC.bind_dsa_to_dpdk(
+            dsa_number=2, driver_name="vfio-pci"
+        )
+        dmas1 = "txq0@%s-q0;txq1@%s-q0;rxq0@%s-q1;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        dmas2 = "txq0@%s-q0;txq1@%s-q0;rxq0@%s-q1;rxq1@%s-q1" % (
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+            self.use_dsa_list[0],
+        )
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'"
+            % (dmas1, dmas2)
+        )
+
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        port_options = {self.use_dsa_list[0]: "max_queues=2"}
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=False,
+            ports=self.use_dsa_list[0:1],
+            port_options=port_options,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_448_64byte_and_54_4640byte_pkts()
+        check_dict = {64: 448, 4640: 0}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_non_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 13: VM2VM split ring non-mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.0]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;rxq1@wq0.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_inorder_non_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 14: VM2VM split ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.2;rxq1@wq0.3]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_inorder_mergeable_multi_queues_non_indirect_descriptor_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 15: VM2VM split ring inorder mergeable path and multi-queues test non-indirect descriptor and payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.2;rxq1@wq0.3]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=1,queue_size=256"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 2}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_mergeable_multi_queues_indirect_descriptor_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 16: VM2VM split ring mergeable path and multi-queues test indirect descriptor and payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.0;rxq1@wq0.0]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;txq1@wq0.1;rxq0@wq0.1;rxq1@wq0.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 10}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_split_vectorized_multi_queues_payload_check_with_vhost_async_operation_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 17: VM2VM split ring vectorized path and multi-queues payload check with vhost async operation and dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=4, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.0;rxq1@wq0.0]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.0;rxq1@wq0.0]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_non_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 18: VM2VM packed ring non-mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.0]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;rxq1@wq0.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_inorder_non_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 19: VM2VM packed ring inorder non-mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=1)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;rxq1@wq1.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 20: VM2VM packed ring mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq0.1;txq1@wq0.1;rxq0@wq0.0;rxq1@wq0.0]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_inorder_mergeable_multi_queues_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 21: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=1, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=1, dsa_index=1)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq0@wq0.0;rxq1@wq0.0]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq1@wq1.0]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_960byte_and_64_64byte_pkts()
+        check_dict = {960: 502, 64: 64}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_vectorized_tx_multi_queues_indirect_descriptor_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 22: VM2VM packed ring vectorized-tx path and multi-queues test indirect descriptor and payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=1)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq0@wq1.1;rxq1@wq1.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256"
+        virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_502_64byte_and_64_8000byte_pkts()
+        check_dict = {64: 502, 8000: 10}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def test_packed_vectorized_multi_queues_indirect_descriptor_payload_check_with_kernel_driver(
+        self,
+    ):
+        """
+        Test Case 23: VM2VM packed ring vectorized path and multi-queues test indirect descriptor and payload check with dsa kernel driver
+        """
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=0)
+        self.DC.create_work_queue(work_queue_number=2, dsa_index=1)
+        vhost_eal_param = (
+            "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' "
+            "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq0@wq1.1;rxq1@wq1.1]'"
+        )
+        vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx"
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            no_pci=True,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096"
+        virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+
self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_502_64byte_and_64_8000byte_pkts() + check_dict = {64: 502, 8000: 0} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_vectorized_ringsize_not_powerof_2_multi_queues_payload_check_with_kernel_driver( + self, + ): + """ + Test Case 24: VM2VM packed ring vectorized path payload check test with ring size is not power of 2 with dsa kernel driver + """ + self.DC.create_work_queue(work_queue_number=2, dsa_index=0) + self.DC.create_work_queue(work_queue_number=2, dsa_index=1) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;rxq0@wq0.1;rxq1@wq0.1]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@wq1.0;txq1@wq1.0;rxq0@wq1.1;rxq1@wq1.1]'" + ) + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=True, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = 
"--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_502_64byte_and_64_8000byte_pkts() + check_dict = {64: 502, 8000: 0} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_mergeable_multi_queues_indirect_descriptor_payload_check_with_dpdk_and_kernel_driver( + self, + ): + """ + Test Case 25: VM2VM split ring mergeable path and multi-queues test indirect descriptor with dsa dpdk and kernel driver + """ + self.DC.create_work_queue(work_queue_number=2, dsa_index=0) + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", dsa_index_list=[1] + ) + dmas1 = "txq0@wq0.0;rxq0@wq0.0;rxq1@wq0.0" + dmas2 = "txq0@%s-q0;txq1@%s-q0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=1"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + + 
self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=1,in_order=0,queue_size=256" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_502_64byte_and_64_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_inorder_mergeable_multi_queues_payload_check_with_dpdk_and_kernel_driver( + self, + ): + """ + Test Case 26: VM2VM packed ring inorder mergeable path and multi-queues payload check with dsa dpdk and kernel driver + """ + self.DC.create_work_queue(work_queue_number=2, dsa_index=0) + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", dsa_index_list=[1] + ) + dmas1 = "txq0@%s-q0;txq1@wq0.0;rxq0@%s-q0;rxq1@wq0.0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@wq0.0;txq1@%s-q0;rxq0@wq0.0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=1"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096" + virtio1_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + 
cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_54_4640byte_and_448_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_vectorized_tx_batch_processing_with_dpdk_and_kernel_driver( + self, + ): + """ + Test Case 27: VM2VM packed ring vectorized-tx path test batch processing with dsa dpdk and kernel driver + """ + self.DC.create_work_queue(work_queue_number=2, dsa_index=0) + self.use_dsa_list = self.DC.bind_dsa_to_dpdk( + dsa_number=1, driver_name="vfio-pci", dsa_index_list=[1] + ) + dmas1 = "txq0@%s-q0;txq1@wq0.0;rxq0@%s-q0;rxq1@wq0.0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + dmas2 = "txq0@wq0.0;txq1@%s-q0;rxq0@wq0.0;rxq1@%s-q0" % ( + self.use_dsa_list[0], + self.use_dsa_list[0], + ) + vhost_eal_param = ( + "--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[%s]' " + "--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[%s]'" + % (dmas1, dmas2) + ) + vhost_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx" + port_options = {self.use_dsa_list[0]: "max_queues=1"} + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + no_pci=False, + ports=self.use_dsa_list, + port_options=port_options, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256" + virtio1_param = 
"--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256" + virtio0_param = "--nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_1_64byte_pkts() + check_dict = {64: 2} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def quit_all_testpmd(self): + self.vhost_user_pmd.execute_cmd("quit", "#", 60) + self.virtio_user0_pmd.execute_cmd("quit", "#", 60) + self.virtio_user1_pmd.execute_cmd("quit", "#", 60) + self.pdump_user.send_expect("^c", "# ", 60) + + def tear_down(self): + self.quit_all_testpmd() + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + self.dut.send_expect("killall -s INT %s" % self.pdump_name, "#") + self.DC.reset_all_work_queue() + self.DC.bind_all_dsa_to_kernel() + + def tear_down_all(self): + pass
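A note on the new parameter format this series adopts: every test case above builds its vhost vdev string by hand with a per-queue `dmas=[txqN@device;rxqN@device]` mapping, replacing the old `lcore-dma=[...]` form. The helper below is a standalone sketch (not part of the suite) showing how such a vdev string is composed; the function name `build_vhost_vdev` and the example device names (`wq0.0` for an idxd kernel work queue) are illustrative only.

```python
# Illustrative sketch: compose an eth_vhost vdev argument with the
# per-queue dmas=[...] mapping used throughout this patch.
# build_vhost_vdev and the "wq0.0"/"wq0.1" device names are examples,
# not helpers defined by the test suite itself.

def build_vhost_vdev(index, dmas_map, queues=2):
    """Return an eth_vhost vdev string mapping each tx/rx queue to one DMA device."""
    # Each entry pairs one tx/rx queue with exactly one DMA device;
    # one device may appear in several entries (shared across queues).
    dmas = ";".join("%s@%s" % (queue, device) for queue, device in dmas_map)
    return "eth_vhost%d,iface=vhost-net%d,queues=%d,client=1,dmas=[%s]" % (
        index, index, queues, dmas,
    )

# Reproduces the first vdev of Test Case 19 above.
print(build_vhost_vdev(0, [("txq0", "wq0.0"), ("rxq1", "wq0.1")]))
# -> eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@wq0.0;rxq1@wq0.1]
```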