From patchwork Thu Apr 21 07:02:46 2022
X-Patchwork-Submitter: "Ling, WeiX" <weix.ling@intel.com>
X-Patchwork-Id: 109948
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 1/5] test_plans/index: add new testsuite
Date: Thu, 21 Apr 2022 15:02:46 +0800
Message-Id: <20220421070246.1554401-1-weix.ling@intel.com>

As commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path)
integrated dmadev into the vhost asynchronous data path, add
vm2vm_virtio_user_cbdma_test_plan into test_plans/index.rst.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index f8118d14..05794f98 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -297,6 +297,7 @@ The following are the test plans for the DPDK DTS automated test system.
     port_control_test_plan
     port_representor_test_plan
     vm2vm_virtio_user_test_plan
+    vm2vm_virtio_user_cbdma_test_plan
     vmdq_dcb_test_plan
     acl_test_plan
     power_negative_test_plan

From patchwork Thu Apr 21 07:02:55 2022
X-Patchwork-Submitter: "Ling, WeiX" <weix.ling@intel.com>
X-Patchwork-Id: 109949
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 2/5] test_plans/vm2vm_virtio_user_test_plan: delete cbdma testcases
Date: Thu, 21 Apr 2022 15:02:55 +0800
Message-Id: <20220421070255.1554459-1-weix.ling@intel.com>

As commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path)
integrated dmadev into the vhost asynchronous data path, delete the
CBDMA-related cases from test_plans/vm2vm_virtio_user_test_plan.rst.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/vm2vm_virtio_user_test_plan.rst | 1157 ++++----------------
 1 file changed, 205 insertions(+), 952 deletions(-)

diff --git a/test_plans/vm2vm_virtio_user_test_plan.rst b/test_plans/vm2vm_virtio_user_test_plan.rst
index c3aa328e..061594c4 100644
--- a/test_plans/vm2vm_virtio_user_test_plan.rst
+++ b/test_plans/vm2vm_virtio_user_test_plan.rst
@@ -41,39 +41,81 @@ This test plan tests several features in VM2VM topo:
 1. Split virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vector_rx path test.
 2. Packed virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vectorized path (ring size not power of 2) test.
 3. Split ring and packed ring vm2vm test when vhost enqueue operation with multi-CBDMA channels.
-4. Test indirect descriptor feature. For example, the split ring mergeable inorder path use non-indirect descriptor, the 2000,2000,2000,2000 chain packets will need 4 consequent ring, still need one ring put header.
+4. Test indirect descriptor feature. For example, the split ring mergeable inorder path uses non-indirect descriptors:
+the 2000,2000,2000,2000 chained packets will need 4 consecutive ring entries, and still need one more entry for the header.
 The split ring mergeable path uses indirect descriptors: the 2000,2000,2000,2000 chained packets will occupy only one ring entry.
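As a rough illustration of that ring accounting (assuming a 2048-byte mbuf data room, so each 2000-byte segment fills exactly one descriptor; the numbers below are plain arithmetic, not testpmd output)::

    # non-indirect path: one descriptor per 2000B segment plus one for the virtio-net header
    echo $((4 + 1))    # 5 ring entries consumed per chained packet
    # indirect path: the whole chain sits in an indirect table referenced by a single entry
    echo 1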
-Test flow
+For more about the dpdk-testpmd sample application, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+
+For virtio-user vdev parameters, you can refer to the DPDK documents:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage
+
+Prerequisites
+=============
+
+Topology
+--------
+Test flow: Virtio-user-->Vhost-user-->Testpmd-->Vhost-user-->Virtio-user
+
+Hardware
+--------
+Supported NICs: ALL
+
+Software
+--------
+Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    # ninja -C x86_64-native-linuxapp-gcc -j 110
+
+2. Get the PCI device ID and DMA device ID of the DUT, for example, 0000:18:00.0 is a PCI device ID::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
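If the device to be used by DPDK is still attached to its kernel driver, one possible way to bind it to vfio-pci first (0000:18:00.0 is just the example address from the output above)::

    # modprobe vfio-pci
    # ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:18:00.0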
+
+Test case
 =========
-Virtio-user <-> Vhost-user <-> Testpmd <-> Vhost-user <-> Virtio-user
+
+Common steps
+------------
+1. Attach the pdump secondary process to the primary process using the same file-prefix::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 \
+    -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
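The last step of each case below compares the virtio-user and vhost captures. One possible way to do that comparison, assuming tcpdump is installed on the DUT::

    # tcpdump -nn -e -xx -r ./pdump-virtio-rx.pcap > virtio-rx.txt
    # tcpdump -nn -e -xx -r /root/pdump-vhost-rx.pcap > vhost-rx.txt
    # diff virtio-rx.txt vhost-rx.txt && echo "headers and payload match"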
 Test Case 1: packed virtqueue vm2vm mergeable path test
-=======================================================
+-------------------------------------------------------
+This case tests packet forwarding between 2 virtio-users via vhost testpmd using the packed ring mergeable path,
+and uses pdump to capture the packets received on the virtio-user and vhost sides.

 1. Launch vhost by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set fwd rxonly
     testpmd>start

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set burst 1

@@ -95,19 +137,19 @@ Test Case 1: packed virtqueue vm2vm mergeable path test

 7. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
-    -i --nb-cores=1 --no-flush-rx
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
     testpmd>start

 8. Attach pdump secondary process to primary process by same file-prefix::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost \
+    -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'

 9. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set burst 1

@@ -124,33 +166,33 @@ Test Case 1: packed virtqueue vm2vm mergeable path test
     testpmd>set txpkts 2000
     testpmd>start tx_first 1

-10. Quit pdump,vhost received packets in pdump-vhost-rx.pcap, check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
+10. Quit pdump; the packets received by vhost are in pdump-vhost-rx.pcap. Check the headers and payload of all packets in
+    pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
 Test Case 2: packed virtqueue vm2vm inorder mergeable path test
-===============================================================
+---------------------------------------------------------------
+This case tests packet forwarding between 2 virtio-users via vhost testpmd using the packed ring inorder mergeable path,
+and uses pdump to capture the packets received on the virtio-user and vhost sides.

 1. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set fwd rxonly
     testpmd>start

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -168,19 +210,19 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test

 6. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
-    -i --nb-cores=1 --no-flush-rx
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
     testpmd>start

 7. Attach pdump secondary process to primary process by same file-prefix::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost \
+    -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'

 8. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -194,31 +236,32 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test
     testpmd>set txpkts 2000,2000,2000,2000
     testpmd>start tx_first 1

-9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap, check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
+9. Quit pdump; the packets received by vhost are in pdump-vhost-rx.pcap. Check the headers and payload of all packets in
+   pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
 Test Case 3: packed virtqueue vm2vm non-mergeable path test
-===========================================================
+-----------------------------------------------------------
+This case tests packet forwarding between 2 virtio-users via vhost testpmd using the packed ring non-mergeable path,
+and uses pdump to capture the packets received on the virtio-user and vhost sides.

 1. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -235,7 +278,7 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test

 6. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -243,11 +286,12 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test

 7. Attach pdump secondary process to primary process by same file-prefix::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost \
+    -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'

 8. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set burst 32
     testpmd>start tx_first 7

-9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap, check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
+9. Quit pdump; the packets received by vhost are in pdump-vhost-rx.pcap. Check the headers and payload of all packets in
+   pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
 Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test
-===================================================================
+-------------------------------------------------------------------
+This case tests packet forwarding between 2 virtio-users via vhost testpmd using the packed ring inorder non-mergeable path,
+and uses pdump to capture the packets received on the virtio-user and vhost sides.

 1. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256
     testpmd>set fwd rxonly
     testpmd>start

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
     -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256

@@ -300,7 +345,7 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test

 6. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -308,11 +353,12 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test

 7. Attach pdump secondary process to primary process by same file-prefix::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost \
+    -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'

 8. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
     -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256
     testpmd>set burst 32
     testpmd>start tx_first 7

-9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap,check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
+9. Quit pdump; the packets received by vhost are in pdump-vhost-rx.pcap. Check the headers and payload of all packets in
+   pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
 Test Case 5: split virtqueue vm2vm mergeable path test
-======================================================
+------------------------------------------------------
+This case tests packet forwarding between 2 virtio-users via vhost testpmd using the split ring mergeable path,
+and uses pdump to capture the packets received on the virtio-user and vhost sides.

 1. Launch vhost by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set fwd rxonly
     testpmd>start

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -371,7 +418,7 @@ Test Case 5: split virtqueue vm2vm mergeable path test

 7. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -379,11 +426,12 @@ Test Case 5: split virtqueue vm2vm mergeable path test

 8. Attach pdump secondary process to primary process by same file-prefix::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost \
+    -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'

 9. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -401,33 +449,34 @@ Test Case 5: split virtqueue vm2vm mergeable path test
     testpmd>set txpkts 2000
     testpmd>start tx_first 1

-9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap, check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
+9. Quit pdump; the packets received by vhost are in pdump-vhost-rx.pcap. Check the headers and payload of all packets in
+   pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
 Test Case 6: split virtqueue vm2vm inorder mergeable path test
-==============================================================
+--------------------------------------------------------------
+This case tests packet forwarding between 2 virtio-users via vhost testpmd using the split ring inorder mergeable path,
+and uses pdump to capture the packets received on the virtio-user and vhost sides.

 1. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set fwd rxonly
     testpmd>start

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -444,7 +493,7 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test

 6. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -452,11 +501,12 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test

 7. Attach pdump secondary process to primary process by same file-prefix::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost \
+    -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'

 8. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -470,31 +520,32 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test
     testpmd>set txpkts 2000,2000,2000,2000
     testpmd>start tx_first 1

-9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap,check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
+9. Quit pdump; the packets received by vhost are in pdump-vhost-rx.pcap. Check the headers and payload of all packets in
+   pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
 Test Case 7: split virtqueue vm2vm non-mergeable path test
-==========================================================
+----------------------------------------------------------
+This case tests packet forwarding between 2 virtio-users via vhost testpmd using the split ring non-mergeable path,
+and uses pdump to capture the packets received on the virtio-user and vhost sides.

 1. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip

@@ -511,19 +562,19 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test

 6. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
-    -i --nb-cores=1 --no-flush-rx
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
     testpmd>start

 7. Attach pdump secondary process to primary process by same file-prefix::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost \
+    -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'

 8. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
     testpmd>set burst 32
     testpmd>start tx_first 7

-9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap,check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
+9. Quit pdump; the packets received by vhost are in pdump-vhost-rx.pcap. Check the headers and payload of all packets in
+   pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
 Test Case 8: split virtqueue vm2vm inorder non-mergeable path test
-==================================================================
+------------------------------------------------------------------
+This case tests packet forwarding between 2 virtio-users via vhost testpmd using the split ring inorder non-mergeable path,
+and uses pdump to capture the packets received on the virtio-user and vhost sides.

 1. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set fwd rxonly
     testpmd>start

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -576,7 +628,7 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test

 6. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -584,11 +636,12 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test

 7. Attach pdump secondary process to primary process by same file-prefix::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost \
+    -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'

 8. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set burst 32
     testpmd>start tx_first 7

-9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap,check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
+9. Quit pdump; the packets received by vhost are in pdump-vhost-rx.pcap. Check the headers and payload of all packets in
+   pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
 Test Case 9: split virtqueue vm2vm vector_rx path test
-======================================================
+------------------------------------------------------
+This case tests packet forwarding between 2 virtio-users via vhost testpmd using the split ring vector_rx path,
+and uses pdump to capture the packets received on the virtio-user and vhost sides.

 1. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set burst 1

@@ -639,20 +691,19 @@ Test Case 9: split virtqueue vm2vm vector_rx path test

 6. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
-    -i --nb-cores=1 --no-flush-rx
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
     testpmd>start

 7. Attach pdump secondary process to primary process by same file-prefix::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost \
+    -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'

 8. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set burst 1

@@ -661,33 +712,34 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
     testpmd>set burst 32
     testpmd>start tx_first 7

-9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap,check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
+9. Quit pdump; the packets received by vhost are in pdump-vhost-rx.pcap. Check the headers and payload of all packets in
+   pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
 Test Case 10: packed virtqueue vm2vm vectorized path test
-=========================================================
+---------------------------------------------------------
+This case tests packet forwarding between 2 virtio-users via vhost testpmd using the packed ring vectorized path,
+and uses pdump to capture the packets received on the virtio-user and vhost sides.

 1. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set fwd rxonly
     testpmd>start

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -704,19 +756,19 @@ Test Case 10: packed virtqueue vm2vm vectorized path test

 6. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
-    -i --nb-cores=1 --no-flush-rx
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
     testpmd>start

 7. Attach pdump secondary process to primary process by same file-prefix::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost \
+    -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'

 8. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set burst 32
     testpmd>start tx_first 7

-9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap,check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
+9. Quit pdump; the packets received by vhost are in pdump-vhost-rx.pcap. Check the headers and payload of all packets in
+   pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.

 Test Case 11: packed virtqueue vm2vm vectorized path test with ring size not power of 2
-========================================================================================
+----------------------------------------------------------------------------------------
+This case tests packet forwarding between 2 virtio-users via vhost testpmd using the packed ring vectorized path
+with a ring size that is not a power of 2, and uses pdump to capture the packets received on the virtio-user and vhost sides.

 1. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
     -- -i --nb-cores=1 --txd=255 --rxd=255
     testpmd>set fwd rxonly
     testpmd>start

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
     -- -i --nb-cores=1 --txd=255 --rxd=255

@@ -769,20 +822,19 @@ Test Case 11: packed virtqueue vm2vm vectorized path test with ring size not power of 2

 6. Launch testpmd by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
-    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
-    -i --nb-cores=1 --no-flush-rx
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
     testpmd>start

 7. Attach pdump secondary process to primary process by same file-prefix::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost \
+    -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'

 8. Launch virtio-user1 by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --force-max-simd-bitwidth=512 \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
     -- -i --nb-cores=1 --txd=255 --rxd=255
     testpmd>set burst 1

@@ -791,811 +843,13 @@ Test Case 11: packed virtqueue vm2vm vectorized path test with ring size not power of 2
     testpmd>set burst 32
     testpmd>start tx_first 7

-9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap,check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
+9. Quit pdump; the packets received by vhost are in pdump-vhost-rx.pcap. Check the headers and payload of all packets in
+   pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.
-Test Case 12: split virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enabled
-====================================================================================================
-
-1. Bind 4 CBDMA ports to vfio-pci and launch vhost by the below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:80:04.2;txq1@0000:80:04.3]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
-    testpmd>vhost enable tx all
-
-2. Launch virtio-user1 by the below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096 \
-    -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
-
-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
-
-4. Launch virtio-user0 and send packets::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096 \
-    -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set burst 1
-    testpmd>set txpkts 64,128,256,512
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set txpkts 64
-    testpmd>start tx_first 1
-    testpmd>stop
-
-5. Start vhost testpmd, check that virtio-user1 RX-packets is 566 and RX-bytes is 486016, with 502 packets of 960 length and 64 packets of 64 length in pdump-virtio-rx.pcap.
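The expected RX-bytes value in step 5 follows from the stated packet mix; a quick sanity check of the arithmetic::

    # 502 packets of 960 bytes plus 64 packets of 64 bytes
    echo $((502*960 + 64*64))    # 486016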
-
-6. Clear virtio-user1 port stats::
-
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
-
-7. Quit and relaunch vhost with iova=pa by the below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:80:04.2;txq1@0000:80:04.3]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
-    testpmd>vhost enable tx all
-
-8. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
-
-9. Virtio-user0 sends packets::
-
-    testpmd>set burst 1
-    testpmd>set txpkts 64,128,256,512
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set txpkts 64
-    testpmd>start tx_first 1
-    testpmd>stop
-
-10. Rerun step 5.
-
-Test Case 13: split virtqueue vm2vm mergeable path multi-queues payload check with cbdma enabled
-================================================================================================
-
-1. Launch vhost by the below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
-    testpmd>vhost enable tx all
-
-2. Launch virtio-user1 by the below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
-    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
-
-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
-
-4. Launch virtio-user0 and send packets::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
-    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-
-5. Start vhost testpmd, check that 502 packets and 279232 bytes were received by virtio-user1, with 54 packets of 4640 length and 448 packets of 64 length in pdump-virtio-rx.pcap.
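As in the previous case, the byte count in step 5 can be cross-checked from the packet mix::

    # 54 packets of 4640 bytes plus 448 packets of 64 bytes
    echo $((54*4640 + 448*64))    # 279232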
-
-6. Clear virtio-user1 port stats::
-
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
-
-7. Quit and relaunch vhost with iova=pa by the below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
-    testpmd>vhost enable tx all
-
-8. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
-
-9. Virtio-user0 sends packets::
-
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-
-10. Rerun step 5.
-
-Test Case 14: split virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enabled
-============================================================================================================
-
-1. Launch vhost by the below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
-    testpmd>vhost enable tx all
-
-2. Launch virtio-user1 by the below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096 \
-    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
-
-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
-
-4. Launch virtio-user0 and send packets::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096 \
-    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-
-5. Start vhost testpmd, check that 448 packets and 28672 bytes were received by virtio-user1, with 448 packets of 64 length in pdump-virtio-rx.pcap.
-
-6. Clear virtio-user1 port stats::
-
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
-
-7. Quit and relaunch vhost with iova=pa by the below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
-    testpmd>vhost enable tx all
-
-8. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
-
-9. Virtio-user0 sends packets::
-
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-
-10. Rerun step 5.
-
-Test Case 15: split virtqueue vm2vm vectorized path multi-queues payload check with cbdma enabled
-=================================================================================================
-
-1. Launch vhost by the below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
-    testpmd>vhost enable tx all
-
-2. Launch virtio-user1 by the below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \
-    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
-
-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
-
-4. Launch virtio-user0 and send packets::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096 \
-    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-
-5. Start vhost testpmd, check that 448 packets and 28672 bytes were received by virtio-user1, with 448 packets of 64 length in pdump-virtio-rx.pcap.
-
-6. Clear virtio-user1 port stats::
-
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
Quit and relaunch vhost with iova=pa by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx - testpmd>vhost enable tx all - -8. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -9. Virtio-user0 send packets:: - - testpmd>set burst 32 - testpmd>set txpkts 64 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set burst 1 - testpmd>set txpkts 64,256,2000,64,256,2000 - testpmd>start tx_first 27 - testpmd>stop - -10. Rerun step 5. - -Test Case 16: Split virtqueue vm2vm inorder mergeable path test non-indirect descriptor with cbdma enable -========================================================================================================= - -1. Launch testpmd by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx - testpmd>vhost enable tx all - -2. Launch virtio-user1 by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ - --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 - testpmd>set fwd rxonly - testpmd>start - -3. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -4. Launch virtio-user0 and send packets(include 251 small packets and 32 8K packets):: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ - --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 - testpmd>set burst 1 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set txpkts 2000,2000,2000,2000 - testpmd>start tx_first 1 - testpmd>stop - -5. Start vhost, then quit pdump and three testpmd, about split virtqueue inorder mergeable path, it use the non-indirect descriptors, the 8k length pkt will occupies 5 ring:2000,2000,2000,2000 will need 4 consequent ring, -still need one ring put header. So check 504 packets and 48128 bytes received by virtio-user1 and 502 packets with 64 length and 2 packets with 8K length in pdump-virtio-rx.pcap. - -6. 
Relaunch vhost with iova=pa by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx - testpmd>vhost enable tx all - -7. Rerun step 2-5. - -Test Case 17: Split virtqueue vm2vm mergeable path test indirect descriptor with cbdma enable -============================================================================================= - -1. Launch testpmd by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx - testpmd>vhost enable tx all - -2. Launch virtio-user1 by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ - --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 - testpmd>set fwd rxonly - testpmd>start - -3. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -4. Launch virtio-user0 and send packets(include 251 small packets and 32 8K packets):: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ - --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 - testpmd>set burst 1 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set txpkts 2000,2000,2000,2000 - testpmd>start tx_first 1 - testpmd>stop - -5. Start vhost, then quit pdump and three testpmd, about split virtqueue mergeable path, it use the indirect descriptors, the 8k length pkt will just occupies one ring. -So check 512 packets and 112128 bytes received by virtio-user1 and 502 packets with 64 length and 10 packets with 8K length in pdump-virtio-rx.pcap. - -6. Relaunch vhost with iova=pa by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx - testpmd>vhost enable tx all - -7. Rerun step 2-5. - -Test Case 18: packed virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enabled -===================================================================================================== - -1. 
Launch vhost by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:80:04.2;txq1@0000:80:04.3]' \ - --iova=va -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx - testpmd>vhost enable tx all - -2. Launch virtio-user1 by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ - --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - testpmd>set fwd rxonly - testpmd>start - -3. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -4. Launch virtio-user0 and send packets:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ - --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - testpmd>set burst 32 - testpmd>set txpkts 64 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set burst 1 - testpmd>set txpkts 64,256,2000,64,256,2000 - testpmd>start tx_first 27 - testpmd>stop - -5. Start vhost testpmd, check virtio-user1 RX-packets is 448 and RX-bytes is 28672, 448 packets with 64 length in pdump-virtio-rx.pcap. - -6. Clear virtio-user1 port stats:: - - testpmd>stop - testpmd>clear port stats all - testpmd>start - -7. Quit and relaunch vhost with iova=pa by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:80:04.2;txq1@0000:80:04.3]' \ - --iova=pa -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx - testpmd>vhost enable tx all - -8. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -9. Virtio-user0 send packets:: - - testpmd>set burst 32 - testpmd>set txpkts 64 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set burst 1 - testpmd>set txpkts 64,256,2000,64,256,2000 - testpmd>start tx_first 27 - testpmd>stop - -10. Rerun step 5. - -Test Case 19: packed virtqueue vm2vm mergeable path multi-queues payload check with cbdma enabled -================================================================================================= - -1. Launch vhost by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx - testpmd>vhost enable tx all - -2. 
Launch virtio-user1 by below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
-    --no-pci --file-prefix=virtio1 \
-    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \
-    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set fwd rxonly
-    testpmd>start
-
-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
-
-4. Launch virtio-user0 and send packets::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \
-    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-
-5. Start vhost testpmd, then quit pdump, check 502 packets and 279232 bytes received by virtio-user1 and 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap.
-
-6. Clear virtio-user1 port stats::
-
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
-
-7. Quit and relaunch vhost with iova=pa by below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \
-    --iova=pa -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
-    testpmd>vhost enable tx all
-
-8. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
-
-9. Virtio-user0 send packets::
-
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-
-10. Rerun step 5.
+9. Quit pdump. Vhost receives packets into pdump-vhost-rx.pcap; check the headers and payload of all packets in
+   pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the contents are the same.

-Test Case 20: packed virtqueue vm2vm inorder mergeable path multi-queues payload check with cbdma enabled
-=========================================================================================================
-
-1. Launch vhost by below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \
-    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
-    testpmd>vhost enable tx all
-
-2. 
Launch virtio-user1 by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ - --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - testpmd>set fwd rxonly - testpmd>start - -3. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -4. Launch virtio-user0 and send packets:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ - --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - testpmd>set burst 1 - testpmd>set txpkts 64,256,2000,64,256,2000 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>set txpkts 64 - testpmd>start tx_first 7 - testpmd>stop - -5. Start vhost testpmd, then quit pdump, check 502 packets and 279232 bytes received by virtio-user1 and 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap. - -6. Clear virtio-user1 port stats:: - - testpmd>stop - testpmd>clear port stats all - testpmd>start - -7. Quit and relaunch vhost with iova=pa by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx - testpmd>vhost enable tx all - -8. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -9. Virtio-user0 send packets:: - - testpmd>set burst 1 - testpmd>set txpkts 64,256,2000,64,256,2000 - testpmd>start tx_first 27 - testpmd>stop - testpmd>set burst 32 - testpmd>set txpkts 64 - testpmd>start tx_first 7 - testpmd>stop - -10. Rerun step 5. - -Test Case 21: packed virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enabled -============================================================================================================= - -1. Launch vhost by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx - testpmd>vhost enable tx all - -2. Launch virtio-user1 by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ - --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - testpmd>set fwd rxonly - testpmd>start - -3. 
Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -4. Launch virtio-user0 and send 8k length packets:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ - --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - - testpmd>set burst 32 - testpmd>set txpkts 64 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set burst 1 - testpmd>set txpkts 64,256,2000,64,256,2000 - testpmd>start tx_first 27 - testpmd>stop - -5. Start vhost testpmd, then quit pdump, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap. - -6. Clear virtio-user1 port stats:: - - testpmd>stop - testpmd>clear port stats all - testpmd>start - -7. Quit and relaunch vhost with iova=pa by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx - testpmd>vhost enable tx all - -8. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -9. Virtio-user0 send packets:: - - testpmd>set burst 32 - testpmd>set txpkts 64 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set burst 1 - testpmd>set txpkts 64,256,2000,64,256,2000 - testpmd>start tx_first 27 - testpmd>stop - -10. Rerun step 5. - -Test Case 22: packed virtqueue vm2vm vectorized path multi-queues payload check with cbdma enabled -=================================================================================================== - -1. Launch vhost by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx - testpmd>vhost enable tx all - -2. Launch virtio-user1 by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - testpmd>set fwd rxonly - testpmd>start - -3. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -4. 
Launch virtio-user0 and send 8k length packets:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \ - --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 - testpmd>set burst 32 - testpmd>set txpkts 64 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set burst 1 - testpmd>set txpkts 64,256,2000,64,256,2000 - testpmd>start tx_first 27 - testpmd>stop - -5. Start vhost testpmd, then quit pdump, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap. - -6. Clear virtio-user1 port stats:: - - testpmd>stop - testpmd>clear port stats all - testpmd>start - -7. Quit and relaunch vhost with iova=pa by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx - testpmd>vhost enable tx all - -8. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -9. Virtio-user0 send packets:: - - testpmd>set burst 32 - testpmd>set txpkts 64 - testpmd>start tx_first 7 - testpmd>stop - testpmd>set burst 1 - testpmd>set txpkts 64,256,2000,64,256,2000 - testpmd>start tx_first 27 - testpmd>stop - -10. Rerun step 5. - -Test Case 23: packed virtqueue vm2vm vectorized path multi-queues payload check with ring size is not power of 2 and cbdma enabled -================================================================================================================================== - -1. Launch vhost by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \ - --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \ - --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx - testpmd>vhost enable tx all - -2. Launch virtio-user1 by below command:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio1 \ - --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \ - -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097 - testpmd>set fwd rxonly - testpmd>start - -3. Attach pdump secondary process to primary process by same file-prefix:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000' - -4. 
Launch virtio-user0 and send 8k length packets::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \
-    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-
-5. Start vhost testpmd, then quit pdump, check 448 packets and 28672 bytes received by virtio-user1 and 448 packets with 64 length in pdump-virtio-rx.pcap.
-
-6. Clear virtio-user1 port stats::
-
-    testpmd>stop
-    testpmd>clear port stats all
-    testpmd>start
-
-7. Quit and relaunch vhost with iova=pa by below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
-    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1]' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.3]' \
-    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx
-    testpmd>vhost enable tx all
-
-8. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
-
-9. Virtio-user0 send packets::
-
-    testpmd>set burst 32
-    testpmd>set txpkts 64
-    testpmd>start tx_first 7
-    testpmd>stop
-    testpmd>set burst 1
-    testpmd>set txpkts 64,256,2000,64,256,2000
-    testpmd>start tx_first 27
-    testpmd>stop
-
-10. Rerun step 5.
-
-Test Case 24: packed virtqueue vm2vm vectorized-tx path multi-queues test indirect descriptor
-=============================================================================================
+Test Case 12: packed virtqueue vm2vm vectorized-tx path multi-queues test indirect descriptor
+---------------------------------------------------------------------------------------------
+This case tests two virtio-user instances over the packed ring vectorized-tx path, with vhost testpmd forwarding packets,
+and uses pdump to capture the packets received on the virtio-user side.

 1. Launch vhost by below command::

@@ -1611,13 +865,11 @@ Test Case 24: packed virtqueue vm2vm vectorized-tx path multi-queues test indire
     testpmd>set fwd rxonly
     testpmd>start

-3. Attach pdump secondary process to primary process by same file-prefix::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
+3. Start pdump to capture virtio-user1 packets, as described in common step 1.

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -1631,5 +883,6 @@ Test Case 24: packed virtqueue vm2vm vectorized-tx path multi-queues test indire
     testpmd>start tx_first 1
     testpmd>stop

-5. Start vhost, then quit pdump and three testpmd, about packed virtqueue vectorized-tx path, it use the indirect descriptors
-So check 256 packets and 56064 bytes received by virtio-user1 and 251 packets with 64 length and 5 packets with 8K length in pdump-virtio-rx.pcap.
+5. Start vhost, then quit pdump and the three testpmd sessions. The packed virtqueue vectorized-tx path uses indirect
+   descriptors, so an 8k length pkt occupies just one ring entry. Check 256 packets and 56064 bytes received by virtio-user1,
+   with 251 packets of 64 length and 5 packets of 8K length in pdump-virtio-rx.pcap.

From patchwork Thu Apr 21 07:03:07 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 109950
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 3/5] tests/vm2vm_virtio_user: delete cbdma testcases
Date: Thu, 21 Apr 2022 15:03:07 +0800
Message-Id: <20220421070307.1554517-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

Following commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path),
delete the cbdma-related testcases from tests/TestSuite_vm2vm_virtio_user.py.

Signed-off-by: Wei Ling --- tests/TestSuite_vm2vm_virtio_user.py | 867 +-------------------------- 1 file changed, 13 insertions(+), 854 deletions(-) diff --git a/tests/TestSuite_vm2vm_virtio_user.py b/tests/TestSuite_vm2vm_virtio_user.py index 1e4969cb..7379df03 100644 --- a/tests/TestSuite_vm2vm_virtio_user.py +++ b/tests/TestSuite_vm2vm_virtio_user.py @@ -57,9 +57,6 @@ class TestVM2VMVirtioUser(TestCase): self.virtio_prefix_1 = "virtio1" socket_num = len(set([int(core["socket"]) for core in self.dut.cores])) self.socket_mem = ",".join(["1024"] * socket_num) - self.get_core_list() - self.rebuild_flag = False - self.app_pdump = self.dut.apps_name["pdump"] self.dut_ports = self.dut.get_ports() self.cbdma_dev_infos = [] self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) @@ -71,8 +68,13 @@ class TestVM2VMVirtioUser(TestCase): self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0) self.virtio_user1_pmd = PmdOutput(self.dut, self.virtio_user1) - self.dut.restore_interfaces() self.dump_port = "device_id=net_virtio_user1" + self.app_pdump = self.dut.apps_name["pdump"] + self.testpmd_name = self.dut.apps_name['test-pmd'].split("/")[-1] + self.cores_list = self.dut.get_core_list("all", socket=self.ports_socket) + self.core_list_vhost = self.cores_list[0:2] + self.core_list_virtio0 = self.cores_list[2:4] + self.core_list_virtio1 = self.cores_list[4:6] def set_up(self): """ @@ -83,19 +85,7 @@ class TestVM2VMVirtioUser(TestCase): self.dut.send_expect("rm -rf ./vhost-net*", "#") self.dut.send_expect("rm -rf %s" % self.dump_virtio_pcap, "#") self.dut.send_expect("rm -rf %s" % self.dump_vhost_pcap, "#") - - def get_core_list(self): - """ - create core mask - """ - self.core_config = "1S/6C/1T" - self.cores_list = self.dut.get_core_list(self.core_config) - self.verify( - len(self.cores_list) >= 6, "There no enough cores to run this suite" - ) - self.core_list_vhost = self.cores_list[0:2] - self.core_list_virtio0 = self.cores_list[2:4] - self.core_list_virtio1 = self.cores_list[4:6] + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") def launch_vhost_testpmd( self, vdev_num, fixed_prefix=False, fwd_mode="io", vdevs=None, no_pci=True @@ -123,25 +113,6 @@ class TestVM2VMVirtioUser(TestCase): ) self.vhost_user_pmd.execute_cmd("set fwd %s" % fwd_mode) - def lanuch_vhost_testpmd_with_cbdma(self, vdevs=None, iova="va"): - """ - start testpmd with cbdma - """ - eal_params = vdevs + " --iova={}".format(iova) - param = "--nb-cores=1 --rxq={} --txq={} --txd={} --rxd={} --no-flush-rx".format( - self.queue_num, self.queue_num, self.txd_num, self.txd_num - ) - self.vhost_user_pmd.start_testpmd( - cores=self.core_list_vhost, - param=param, - no_pci=False, - ports=[], - eal_param=eal_params, - prefix="vhost", - fixed_prefix=True, - ) - self.vhost_user_pmd.execute_cmd("vhost enable tx all") - @property def check_2M_env(self): out = self.dut.send_expect( @@ -211,35 +182,6 @@ class TestVM2VMVirtioUser(TestCase): self.virtio_user0_pmd.execute_cmd("set burst 32") self.virtio_user0_pmd.execute_cmd("start tx_first 7") - def start_virtio_testpmd_with_vhost_net0_cbdma( - self, path_mode, extern_params, ringsize - ): - """ - launch the testpmd as virtio with vhost_net0 - """ - eal_params = ( - " --socket-mem {} --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues={}," - "{},queue_size={} ".format( - self.socket_mem, self.queue_num, path_mode, ringsize - ) - ) - if self.check_2M_env: - eal_params += " 
--single-file-segments " - if "vectorized_path" in self.running_case: - eal_params += " --force-max-simd-bitwidth=512" - params = "--nb-cores=1 --txd={} --rxd={} {}".format( - ringsize, ringsize, extern_params - ) - self.virtio_user0_pmd.start_testpmd( - cores=self.core_list_virtio0, - param=params, - eal_param=eal_params, - no_pci=True, - ports=[], - prefix=self.virtio_prefix_0, - fixed_prefix=True, - ) - def check_packet_payload_valid_with_cbdma( self, filename, @@ -302,18 +244,7 @@ class TestVM2VMVirtioUser(TestCase): f", actual value : {actual_8k_pkt_num}", ) - def get_dump_file_of_virtio_user_cbdma(self, path_mode, extern_param, ringsize): - - self.lanuch_vhost_testpmd_with_cbdma(vdevs=self.vdevs) - self.start_virtio_testpmd_with_vhost_net1(path_mode, extern_param, ringsize) - self.launch_pdump_to_capture_pkt( - self.dump_port, self.virtio_prefix_1, self.dump_virtio_pcap - ) - self.start_virtio_testpmd_with_vhost_net0_cbdma( - path_mode, extern_param, ringsize - ) - - def send_32_2k_pkts_from_virtio0(self): + def send_32_2k_pkts(self): """ send 32 2k length packets from virtio_user0 testpmd """ @@ -563,7 +494,7 @@ class TestVM2VMVirtioUser(TestCase): # then resend 32 large pkts, all will received self.logger.info("check pcap file info about virtio") self.get_dump_file_of_virtio_user(path_mode, extern_params, ringsize) - self.send_32_2k_pkts_from_virtio0() + self.send_32_2k_pkts() self.check_packet_payload_valid( self.dump_virtio_pcap, small_pkts_num, large_8k_pkts_num, large_2k_pkts_num ) @@ -571,7 +502,7 @@ class TestVM2VMVirtioUser(TestCase): # get dump pcap file of vhost self.logger.info("check pcap file info about vhost") self.get_dump_file_of_vhost_user(path_mode, extern_params, ringsize) - self.send_32_2k_pkts_from_virtio0() + self.send_32_2k_pkts() self.check_packet_payload_valid( self.dump_vhost_pcap, small_pkts_num, large_8k_pkts_num, large_2k_pkts_num ) @@ -747,7 +678,7 @@ class TestVM2VMVirtioUser(TestCase): # then virtio send 32 large pkts, the virtio will all received self.logger.info("check pcap file info about virtio") self.get_dump_file_of_virtio_user(path_mode, extern_params, ringsize) - self.send_32_2k_pkts_from_virtio0() + self.send_32_2k_pkts() self.check_packet_payload_valid( self.dump_virtio_pcap, small_pkts_num, large_8k_pkts_num, large_2k_pkts_num ) @@ -755,7 +686,7 @@ class TestVM2VMVirtioUser(TestCase): # get dump pcap file of vhost self.logger.info("check pcap file info about vhost") self.get_dump_file_of_vhost_user(path_mode, extern_params, ringsize) - self.send_32_2k_pkts_from_virtio0() + self.send_32_2k_pkts() self.check_packet_payload_valid( self.dump_vhost_pcap, small_pkts_num, large_8k_pkts_num, large_2k_pkts_num ) @@ -881,781 +812,11 @@ class TestVM2VMVirtioUser(TestCase): self.logger.info("diff the pcap file of vhost and virtio") self.check_vhost_and_virtio_pkts_content() - def get_cbdma_ports_info_and_bind_to_dpdk(self): - """ - get all cbdma ports - """ - out = self.dut.send_expect( - "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30 - ) - device_info = out.split("\n") - for device in device_info: - pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device) - if pci_info is not None: - dev_info = pci_info.group(1) - # the numa id of ioat dev, only add the device which - # on same socket with nic dev - bus = int(dev_info[5:7], base=16) - if bus >= 128: - cur_socket = 1 - else: - cur_socket = 0 - if self.ports_socket == cur_socket: - self.cbdma_dev_infos.append(pci_info.group(1)) - self.verify( - len(self.cbdma_dev_infos) >= 8, - "There no enough 
cbdma device to run this suite", - ) - self.device_str = " ".join(self.cbdma_dev_infos[0 : self.cbdma_nic_dev_num]) - self.dut.send_expect( - "./usertools/dpdk-devbind.py --force --bind=%s %s" - % (self.drivername, self.device_str), - "# ", - 60, - ) - - def bind_cbdma_device_to_kernel(self): - if self.device_str is not None: - self.dut.send_expect("modprobe ioatdma", "# ") - self.dut.send_expect( - "./usertools/dpdk-devbind.py -u %s" % self.device_str, "# ", 30 - ) - self.dut.send_expect( - "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" - % self.device_str, - "# ", - 60, - ) - - def relanuch_vhost_testpmd_iova_pa(self): - - self.vhost_user_pmd.execute_cmd("quit", "#", 60) - self.virtio_user1_pmd.execute_cmd("stop") - self.virtio_user1_pmd.execute_cmd("clear port stats all") - out = self.virtio_user1_pmd.execute_cmd("show port stats 0") - self.verify("RX-packets: 0" in out, "expect: virtio-user1 RX-packets is 0 ") - self.virtio_user1_pmd.execute_cmd("start") - self.lanuch_vhost_testpmd_with_cbdma(vdevs=self.vdevs, iova="pa") - self.launch_pdump_to_capture_pkt( - self.dump_port, self.virtio_prefix_1, self.dump_virtio_pcap - ) - - def relanuch_vhost_iova_pa_and_virtio(self, path_mode, extern_param, ringsize): - - self.quit_all_testpmd() - self.lanuch_vhost_testpmd_with_cbdma(vdevs=self.vdevs, iova="pa") - self.start_virtio_testpmd_with_vhost_net1(path_mode, extern_param, ringsize) - self.launch_pdump_to_capture_pkt( - self.dump_port, self.virtio_prefix_1, self.dump_virtio_pcap - ) - self.start_virtio_testpmd_with_vhost_net0_cbdma( - path_mode, extern_param, ringsize - ) - - def test_vm2vm_virtio_user_split_virtqueue_non_mergeable_path_multi_queues_check_chain_pkts_with_cbdma( - self, - ): - """ - Test Case 12: split virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enabled - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - pkts_64_num = 64 - pkts_960_num = 502 - total_pkts_num = pkts_64_num + pkts_960_num - self.queue_num = 2 - self.txd_num = 4096 - path_mode = "server=1,packed_vq=0,mrg_rxbuf=0,in_order=0" - ringsize = 4096 - extern_params = "--rxq=2 --txq=2 --enable-hw-vlan-strip " - # get dump pcap file of virtio - # the virtio0 will send imix small pkts - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_251_960byte_and_32_64byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
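-        # "stop" halts forwarding so tail packets are flushed before the
-        # stats check; the expected totals below are
-        # 502 x 960 B + 64 x 64 B = 486016 B in 566 packets.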
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 566" in out and "RX-bytes: 486016" in out, - "expect: virtio-user1 RX-packets is 566 and RX-bytes is 486016", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, total_pkts_num, pkts_64_num, pkts_960_num - ) - - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa") - self.relanuch_vhost_testpmd_iova_pa() - self.send_251_960byte_and_32_64byte_pkts() - - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 566" in out and "RX-bytes: 486016" in out, - "expect: virtio-user1 RX-packets is 566 and RX-bytes is 486016", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, total_pkts_num, pkts_64_num, pkts_960_num - ) - - def test_vm2vm_virtio_user_split_virtqueue_mergeable_path_multi_queues_check_chain_pkts_with_cbdma( - self, - ): - """ - Test Case 13: split virtqueue vm2vm mergeable path multi-queues payload check with cbdma enabled - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - large_4640_pkts_num = 54 - large_64_pkts_num = 448 - total_pkts_num = large_4640_pkts_num + large_64_pkts_num - self.queue_num = 2 - self.txd_num = 4096 - path_mode = "server=1,packed_vq=0,mrg_rxbuf=1,in_order=0" - ringsize = 4096 - extern_params = "--rxq=2 --txq=2" - - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_27_4640byte_and_224_64byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
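-        # The mergeable path keeps the chained packets, so the expected RX
-        # total below is 54 x 4640 B + 448 x 64 B = 279232 B in 502 packets.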
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 502" in out and "RX-bytes: 279232" in out, - "expect: virtio-user1 RX-packets is 502 and RX-bytes is 279232", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa") - self.relanuch_vhost_testpmd_iova_pa() - self.send_27_4640byte_and_224_64byte_pkts() - - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 502" in out and "RX-bytes: 279232" in out, - "expect: virtio-user1 RX-packets is 502 and RX-bytes is 279232", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - - def test_vm2vm_virtio_user_split_virtqueue_inorder_non_mergeable_path_multi_queues_check_chain_pkts_with_cbdma( - self, - ): - """ - Test Case 14: split virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enabled - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - large_4640_pkts_num = 0 - large_64_pkts_num = 448 - total_pkts_num = large_4640_pkts_num + large_64_pkts_num - self.queue_num = 2 - self.txd_num = 4096 - path_mode = "server=1,packed_vq=0,mrg_rxbuf=0,in_order=1" - ringsize = 4096 - extern_params = "--rxq=2 --txq=2" - - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_27_4640byte_and_224_64byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa") - self.relanuch_vhost_testpmd_iova_pa() - self.send_27_4640byte_and_224_64byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - - def test_vm2vm_virtio_user_split_virtqueue_vectorized_multi_queues_check_chain_pkts_with_cbdma( - self, - ): - """ - Test Case 15: split virtqueue vm2vm vectorized path multi-queues payload check with cbdma enabled - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - large_4640_pkts_num = 0 - large_64_pkts_num = 448 - total_pkts_num = large_4640_pkts_num + large_64_pkts_num - self.queue_num = 2 - self.txd_num = 4096 - path_mode = "server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1" - ringsize = 4096 - extern_params = "--rxq=2 --txq=2" - - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_224_64byte_and_27_4640byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
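-        # With mrg_rxbuf=0 the 4640 B chained pkts are not expected to be
-        # received; only the 448 x 64 B = 28672 B bursts should be counted.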
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa") - self.relanuch_vhost_testpmd_iova_pa() - self.send_224_64byte_and_27_4640byte_pkts() - - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - - def test_vm2vm_virtio_user_split_virtqueue_inorder_mergeable_path_test_non_indirect_desc_with_cbdma( - self, - ): - """ - Test Case 16: split virtqueue vm2vm inorder mergeable path test non-indirect descriptor with cbdma enable - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - large_8k_pkts_num = 2 - large_64_pkts_num = 502 - total_pkts_num = large_8k_pkts_num + large_64_pkts_num - self.queue_num = 2 - self.txd_num = 256 - path_mode = "server=1,packed_vq=0,mrg_rxbuf=1,in_order=1" - ringsize = 256 - extern_params = "--rxq=2 --txq=2" - - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_251_64_and_32_8k_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
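-        # Non-indirect descriptors: an 8 KB pkt needs 4 data entries plus
-        # one for the header, so only 2 of the 32 8 KB pkts fit in the
-        # 256-entry ring; expected 502 x 64 B + 2 x 8000 B = 48128 B.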
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 504" in out and "RX-bytes: 48128" in out, - "expect: virtio-user1 RX-packets is 504 and RX-bytes is 48128", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_8k_num=large_8k_pkts_num, - ) - - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa and virtio") - self.relanuch_vhost_iova_pa_and_virtio(path_mode, extern_params, ringsize) - self.send_251_64_and_32_8k_pkts() - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 504" in out and "RX-bytes: 48128" in out, - "expect: virtio-user1 RX-packets is 504 and RX-bytes is 48128", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_8k_num=large_8k_pkts_num, - ) - - def test_vm2vm_virtio_user_split_virtqueue_mergeable_path_test_indirect_desc_with_cbdma( - self, - ): - """ - Test Case 17: split virtqueue vm2vm mergeable path test indirect descriptor with cbdma enable - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - large_8k_pkts_num = 10 - large_64_pkts_num = 502 - total_pkts_num = large_8k_pkts_num + large_64_pkts_num - self.queue_num = 2 - self.txd_num = 256 - path_mode = "server=1,packed_vq=0,mrg_rxbuf=1,in_order=0" - ringsize = 256 - extern_params = "--rxq=2 --txq=2" - - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_251_64_and_32_8k_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
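-        # Indirect descriptors: each 8 KB pkt takes a single ring entry, so
-        # 10 of the 32 are received; expected 502 x 64 B + 10 x 8000 B = 112128 B.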
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 512" in out and "RX-bytes: 112128" in out, - "expect: virtio-user1 RX-packets is 512 and RX-bytes is 112128", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_8k_num=large_8k_pkts_num, - ) - - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa and virtio") - self.relanuch_vhost_iova_pa_and_virtio(path_mode, extern_params, ringsize) - self.send_251_64_and_32_8k_pkts() - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 512" in out and "RX-bytes: 112128" in out, - "expect: virtio-user1 RX-packets is 512 and RX-bytes is 112128", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_8k_num=large_8k_pkts_num, - ) - - def test_vm2vm_virtio_user_packed_virtqueue_non_mergeable_path_multi_queues_check_chain_pkts_with_cbdma( - self, - ): - """ - Test Case 18: packed virtqueue vm2vm non mergeable path multi-queues payload check with cbdma enabled - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - pkts_64_num = 448 - pkts_4640_num = 0 - total_pkts_num = pkts_64_num + pkts_4640_num - self.queue_num = 2 - self.txd_num = 4096 - path_mode = "server=1,packed_vq=1,mrg_rxbuf=0,in_order=0" - ringsize = 4096 - extern_params = "--rxq=2 --txq=2" - - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_224_64byte_and_27_4640byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, total_pkts_num, pkts_64_num, pkts_4640_num - ) - - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa") - self.relanuch_vhost_testpmd_iova_pa() - self.send_224_64byte_and_27_4640byte_pkts() - - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, total_pkts_num, pkts_64_num, pkts_4640_num - ) - - def test_vm2vm_virtio_user_packed_virtqueue_mergeable_path_multi_queues_check_chain_pkts_with_cbdma( - self, - ): - """ - Test Case 19: packed virtqueue vm2vm mergeable path multi-queues payload check with cbdma enabled - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - large_4640_pkts_num = 54 - large_64_pkts_num = 448 - total_pkts_num = large_4640_pkts_num + large_64_pkts_num - self.queue_num = 2 - self.txd_num = 4096 - path_mode = "server=1,packed_vq=1,mrg_rxbuf=1,in_order=0" - ringsize = 4096 - extern_params = "--rxq=2 --txq=2" - - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_27_4640byte_and_224_64byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 502" in out and "RX-bytes: 279232" in out, - "expect: virtio-user1 RX-packets is 502 and RX-bytes is 279232", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa") - self.relanuch_vhost_testpmd_iova_pa() - self.send_27_4640byte_and_224_64byte_pkts() - - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 502" in out and "RX-bytes: 279232" in out, - "expect: virtio-user1 RX-packets is 502 and RX-bytes is 279232", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - - def test_vm2vm_virtio_user_packed_virtqueue_inorder_mergeable_path_multi_queues_check_chain_pkts_with_cbdma( - self, - ): - """ - Test Case 20: packed virtqueue vm2vm inorder mergeable path multi-queues payload check with cbdma enabled - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - large_4640_pkts_num = 54 - large_64_pkts_num = 448 - total_pkts_num = large_4640_pkts_num + large_64_pkts_num - self.queue_num = 2 - self.txd_num = 4096 - path_mode = "server=1,packed_vq=1,mrg_rxbuf=1,in_order=1" - ringsize = 4096 - extern_params = "--rxq=2 --txq=2" - - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_27_4640byte_and_224_64byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 502" in out and "RX-bytes: 279232" in out, - "expect: virtio-user1 RX-packets is 502 and RX-bytes is 279232", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa") - self.relanuch_vhost_testpmd_iova_pa() - self.send_27_4640byte_and_224_64byte_pkts() - - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 502" in out and "RX-bytes: 279232" in out, - "expect: virtio-user1 RX-packets is 502 and RX-bytes is 279232", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - - def test_vm2vm_virtio_user_packed_virtqueue_inorder_non_mergeable_path_multi_queues_check_chain_pkts_with_cbdma( - self, - ): - """ - Test Case 21: packed virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enabled - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - large_4640_pkts_num = 0 - large_64_pkts_num = 448 - total_pkts_num = large_4640_pkts_num + large_64_pkts_num - self.queue_num = 2 - self.txd_num = 4096 - path_mode = "server=1,packed_vq=1,mrg_rxbuf=0,in_order=1" - ringsize = 4096 - extern_params = "--rxq=2 --txq=2" - - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_224_64byte_and_27_4640byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa") - self.relanuch_vhost_testpmd_iova_pa() - self.send_224_64byte_and_27_4640byte_pkts() - - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - - def test_vm2vm_virtio_user_packed_virtqueue_vectorized_path_multi_queues_check_chain_pkts_with_cbdma( - self, - ): - """ - Test Case 22: packed virtqueue vm2vm vectorized path multi-queues payload check with cbdma enabled - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - large_4640_pkts_num = 0 - large_64_pkts_num = 448 - total_pkts_num = large_4640_pkts_num + large_64_pkts_num - self.queue_num = 2 - self.txd_num = 4096 - path_mode = "server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1" - ringsize = 4096 - extern_params = "--rxq=2 --txq=2" - - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_224_64byte_and_27_4640byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa") - self.relanuch_vhost_testpmd_iova_pa() - self.send_224_64byte_and_27_4640byte_pkts() - - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - - def test_vm2vm_virtio_user_packed_virtqueue_vectorized_path_ringsize_not_power_of_2_multi_queues_check_chain_pkts_with_cbdma( - self, - ): - """ - Test Case 23: packed virtqueue vm2vm vectorized path ringsize_not_power_of_2 multi-queues payload check with cbdma enabled - """ - self.cbdma_nic_dev_num = 4 - self.get_cbdma_ports_info_and_bind_to_dpdk() - large_4640_pkts_num = 0 - large_64_pkts_num = 448 - total_pkts_num = large_4640_pkts_num + large_64_pkts_num - self.queue_num = 2 - self.txd_num = 4096 - path_mode = "server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1" - ringsize = 4097 - extern_params = "--rxq=2 --txq=2" - - self.logger.info("check pcap file info about virtio") - self.vdevs = ( - f"--vdev 'eth_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[0]};txq1@{self.cbdma_dev_infos[1]}]' " - f"--vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@{self.cbdma_dev_infos[2]};txq1@{self.cbdma_dev_infos[3]}]'" - ) - - self.get_dump_file_of_virtio_user_cbdma(path_mode, extern_params, ringsize) - self.send_224_64byte_and_27_4640byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - # execute stop and port stop all to avoid testpmd tail_pkts issue. 
- self.vhost_user_pmd.execute_cmd("stop") - self.vhost_user_pmd.execute_cmd("port stop all") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - if not self.check_2M_env: - self.logger.info("relanuch vhost testpmd iova pa") - self.relanuch_vhost_testpmd_iova_pa() - self.send_224_64byte_and_27_4640byte_pkts() - self.vhost_user_pmd.execute_cmd("start") - self.vhost_user_pmd.execute_cmd("stop") - out = self.virtio_user1_pmd.execute_cmd("show port stats all") - self.verify( - "RX-packets: 448" in out and "RX-bytes: 28672" in out, - "expect: virtio-user1 RX-packets is 448 and RX-bytes is 28672", - ) - self.check_packet_payload_valid_with_cbdma( - self.dump_virtio_pcap, - total_pkts_num, - pkts_64_num=large_64_pkts_num, - pkts_4640_num=large_4640_pkts_num, - ) - def test_vm2vm_virtio_user_packed_virtqueue_vectorized_path_test_indirect_desc( self, ): """ - Test Case 24: packed virtqueue vm2vm vectorized-tx path multi-queues test indirect descriptor + Test Case 12: packed virtqueue vm2vm vectorized-tx path multi-queues test indirect descriptor """ large_8k_pkts_num = 5 large_64_pkts_num = 251 @@ -1698,7 +859,6 @@ class TestVM2VMVirtioUser(TestCase): Run after each test case. """ self.quit_all_testpmd() - self.bind_cbdma_device_to_kernel() self.dut.kill_all() time.sleep(2) @@ -1706,5 +866,4 @@ class TestVM2VMVirtioUser(TestCase): """ Run after each test suite. """ - self.bind_nic_driver(self.dut_ports, self.drivername) self.close_all_session() From patchwork Thu Apr 21 07:03:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Ling, WeiX" X-Patchwork-Id: 109951 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BC87DA00C3; Thu, 21 Apr 2022 09:03:28 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B744241101; Thu, 21 Apr 2022 09:03:28 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 2157040040 for ; Thu, 21 Apr 2022 09:03:25 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1650524606; x=1682060606; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=Y3UrkTCa9ehJevuabo30CBwuUB4ydXQsUx4IEyB8J+w=; b=WL5YtIDkMZmglMQ2QTR6jPKbt2nrfQ+qSbbdR8bpgBrYKqXKeYQegzdD Idh8ebWkcVv1HY6j/s2tN3A6R7PFkAeveSotC/kvk7QgHzxcZI0kb/6Rx RkrY8XeKvwT1L0omBYiFptGjhqmjFvmxE6RHHAbLRFag+Ga/mUBPJNJ79 X8zJgNeW+WspIs0B8iEEntxj7qEj8UK4ICtAjRmIM3t/0FS2zvnbsEc/R SwEOcgfVWqLw8S9jFxk5rtQS+Xv9y7RDKmQXxy/51Xnjt4A9vN7+VnYQ5 EFzTDFFE7wuClTBpacPVUoe6oxG9KDNTF/i+R92km6Tfzb9aoC21JsfIw w==; X-IronPort-AV: E=McAfee;i="6400,9594,10323"; a="263113302" X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="263113302" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 00:03:25 -0700 X-IronPort-AV: E=Sophos;i="5.90,278,1643702400"; d="scan'208";a="577060238" Received: from unknown (HELO localhost.localdomain) 
([10.239.251.222]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2022 00:03:22 -0700
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 4/5] test_plans/vm2vm_virtio_user_cbdma_test_plan: add new testsuite of DPDK-22.03
Date: Thu, 21 Apr 2022 15:03:18 +0800
Message-Id: <20220421070318.1554575-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
X-BeenThere: dts@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: test suite reviews and discussions
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dts-bounces@dpdk.org

As commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path)
integrated dmadev into the vhost asynchronous data path, add the new
test_plans/vm2vm_virtio_user_cbdma_test_plan.rst.

Signed-off-by: Wei Ling
---
 .../vm2vm_virtio_user_cbdma_test_plan.rst | 1078 +++++++++++++++++
 1 file changed, 1078 insertions(+)
 create mode 100644 test_plans/vm2vm_virtio_user_cbdma_test_plan.rst

diff --git a/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst b/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst
new file mode 100644
index 00000000..b7afbdec
--- /dev/null
+++ b/test_plans/vm2vm_virtio_user_cbdma_test_plan.rst
@@ -0,0 +1,1078 @@
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+=================================================
+vm2vm vhost-user/virtio-user test plan
+=================================================
+
+Description
+===========
+This plan tests the indirect descriptor feature.
+For example, the split ring mergeable in-order path uses non-indirect descriptors:
+a 2000,2000,2000,2000 chained packet needs 4 consecutive descriptors, plus one more descriptor for the header.
+The split ring mergeable path uses indirect descriptors,
+so the same 2000,2000,2000,2000 chained packet occupies only one descriptor.
+
+This test plan covers several features in the VM2VM topology:
+1. Split ring and packed ring vm2vm tests where the vhost enqueue operation uses multiple CBDMA channels.
+2. Payload check.
+3. iova=pa mode (when DMA devices are bound to the vfio driver, VA mode is the default and recommended;
+for PA mode, page-by-page mapping may exceed the IOMMU's maximum capability, so it is better to use 1G guest hugepages.)
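+
+The descriptor arithmetic above can be cross-checked with a few lines of Python.
+The sketch below is purely illustrative (the helper name is hypothetical; it is
+not part of the DTS suite)::
+
+    def descriptors_needed(seg_sizes, indirect):
+        # Indirect: the whole chain is described by an indirect table that is
+        # referenced from a single ring descriptor.
+        if indirect:
+            return 1
+        # Non-indirect: one descriptor per segment, plus one more descriptor
+        # for the virtio-net header.
+        return len(seg_sizes) + 1
+
+    chain = [2000, 2000, 2000, 2000]
+    assert descriptors_needed(chain, indirect=False) == 5
+    assert descriptors_needed(chain, indirect=True) == 1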
+
+For more details about the dpdk-testpmd application, please refer to the DPDK documentation:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+
+For the virtio-user vdev parameters, please refer to the DPDK documentation:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+
+Prerequisites
+=============
+
+Topology
+--------
+Test flow: Virtio-user-->Vhost-user-->Testpmd-->Vhost-user-->Virtio-user
+
+Hardware
+--------
+Supported NICs: ALL
+
+Software
+--------
+Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    # ninja -C x86_64-native-linuxapp-gcc -j 110
+
+2. Get the PCI device ID and DMA device ID of the DUT; for example, 0000:18:00.0 is a PCI device ID and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and the CBDMA channels to vfio-pci::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+
+    For example, bind 2 CBDMA channels::
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 0000:00:04.1
+
+2. Attach the pdump secondary process to the primary process with the same --file-prefix::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 \
+    -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=./pdump-virtio-rx.pcap,mbuf-size=8000'
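+
+The pcap file written by pdump can be inspected with a few lines of Python when
+debugging a payload check. This sketch is illustrative only (it assumes scapy is
+installed and uses the rx-dev path from common step 2); it is not a test step::
+
+    from scapy.utils import rdpcap
+
+    pkts = rdpcap("./pdump-virtio-rx.pcap")
+    sizes = [len(p) for p in pkts]
+    # e.g. for a mergeable-path case: expect 54 packets of 4640B and 448 of 64B
+    print(len(pkts), sizes.count(64), sizes.count(4640))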
+
+Test Case 1: split virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enable
+--------------------------------------------------------------------------------------------------
+This case tests the payload of 2 virtio-user instances in the split ring non-mergeable path with multi-queues;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 2 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096 \
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set burst 1
+    testpmd>set txpkts 64,128,256,512
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 64
+    testpmd>start tx_first 1
+    testpmd>stop
+
+6. Start vhost testpmd, check that virtio-user1 RX-packets is 566 and RX-bytes is 486016, and that there are 54 packets with 960 length and 512 packets with 64 length in pdump-virtio-rx.pcap.
+
+7. Quit vhost testpmd::
+
+    testpmd>quit
+
+8. Clear virtio-user1 port stats, execute below command::
+
+    testpmd>stop
+    testpmd>clear port stats all
+    testpmd>start
+
+9. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1]
+
+10. Rerun step 3.
+
+11. Virtio-user0 send packets::
+
+    testpmd>set burst 1
+    testpmd>set txpkts 64,128,256,512
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 64
+    testpmd>start tx_first 1
+    testpmd>stop
+
+12. Rerun step 6.
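+
+The RX counter checks in step 6 can be automated by parsing the output of the
+testpmd "show port stats all" command. The helper below is hypothetical (the
+automated suite simply matches the "RX-packets: ..." substring); it returns
+the counters of the first port found::
+
+    import re
+
+    def rx_stats(show_port_stats_output):
+        # Parse "RX-packets: N" and "RX-bytes: N" from testpmd output.
+        pkts = int(re.search(r"RX-packets:\s*(\d+)", show_port_stats_output).group(1))
+        byts = int(re.search(r"RX-bytes:\s*(\d+)", show_port_stats_output).group(1))
+        return pkts, byts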
+
+Test Case 2: split virtqueue vm2vm mergeable path multi-queues payload check with cbdma enable
+----------------------------------------------------------------------------------------------
+This case tests the payload of 2 virtio-user instances in the split ring mergeable path with multi-queues;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 1 CBDMA channel to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+
+6. Start vhost testpmd, check that virtio-user1 received 502 packets and 279232 bytes, and that there are 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap.
+
+7. Quit vhost testpmd::
+
+    testpmd>quit
+
+8. Clear virtio-user1 port stats, execute below command::
+
+    testpmd>stop
+    testpmd>clear port stats all
+    testpmd>start
+
+9. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0]
+
+10. Rerun step 3.
+
+11. Virtio-user0 send packets::
+
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+
+12. Rerun step 6.
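+
+The expected counters in step 6 follow directly from the traffic pattern of
+step 5; an illustrative Python cross-check (assuming 2 queues, as configured
+above)::
+
+    queues = 2
+    chained = 27 * 1 * queues                      # burst 1, tx_first 27 -> 54 packets
+    small = 7 * 32 * queues                        # burst 32, tx_first 7 -> 448 packets
+    chain_len = 64 + 256 + 2000 + 64 + 256 + 2000  # mergeable RX merges the chain: 4640B
+    assert chained + small == 502
+    assert chained * chain_len + small * 64 == 279232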
+
+Test Case 3: split virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enable
+----------------------------------------------------------------------------------------------------------
+This case tests the payload of 2 virtio-user instances in the split ring inorder non-mergeable path with multi-queues;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 5 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+
+6. Start vhost testpmd, check that virtio-user1 received 448 packets and 28672 bytes, and that there are 448 packets with 64 length in pdump-virtio-rx.pcap.
+
+7. Quit vhost testpmd::
+
+    testpmd>quit
+
+8. Clear virtio-user1 port stats, execute below command::
+
+    testpmd>stop
+    testpmd>clear port stats all
+    testpmd>start
+
+9. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4]
+
+10. Rerun step 3.
+
+11. Virtio-user0 send packets::
+
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+
+12. Rerun step 6.
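+
+The dmas and --lcore-dma arguments used throughout this plan can be generated
+rather than hand-written; the helpers below are hypothetical (not part of DTS)
+and only illustrate the string format::
+
+    def dmas_arg(txqs):
+        # dmas_arg([0, 1]) -> "[txq0;txq1]"
+        return "[" + ";".join(f"txq{q}" for q in txqs) + "]"
+
+    def lcore_dma_arg(lcore, channels):
+        # lcore_dma_arg(11, ["0000:00:04.0", "0000:00:04.1"])
+        # -> "[lcore11@0000:00:04.0,lcore11@0000:00:04.1]"
+        return "[" + ",".join(f"lcore{lcore}@{ch}" for ch in channels) + "]"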
+
+Test Case 4: split virtqueue vm2vm vectorized path multi-queues payload check with cbdma enable
+-----------------------------------------------------------------------------------------------
+This case tests the payload of 2 virtio-user instances in the split ring vectorized path with multi-queues;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+
+6. Start vhost testpmd, check that virtio-user1 received 448 packets and 28672 bytes, and that there are 448 packets with 64 length in pdump-virtio-rx.pcap.
+
+7. Quit vhost testpmd::
+
+    testpmd>quit
+
+8. Clear virtio-user1 port stats, execute below command::
+
+    testpmd>stop
+    testpmd>clear port stats all
+    testpmd>start
+
+9. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+
+10. Rerun step 3.
+
+11. Virtio-user0 send packets::
+
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+
+12. Rerun step 6.
+
+Test Case 5: Split virtqueue vm2vm inorder mergeable path test non-indirect descriptor with cbdma enable
+--------------------------------------------------------------------------------------------------------
+This case tests 2 virtio-user instances in the split ring inorder mergeable path with non-indirect descriptors;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 4 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch testpmd by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets (includes 251 small packets and 32 8K packets)::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+    testpmd>stop
+
+6. Start vhost testpmd, then quit pdump and all three testpmd instances. The split virtqueue inorder mergeable path uses non-indirect descriptors, so an 8K packet occupies 5 ring entries: the 2000,2000,2000,2000 chain needs 4 consecutive descriptors, plus one more descriptor for the header. So check that virtio-user1 received 504 packets and 48128 bytes, with 502 packets of 64 length and 2 packets of 8K length in pdump-virtio-rx.pcap.
+
+7. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3]
+
+8. Rerun steps 3-6.
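+
+The numbers expected in step 6 can be reproduced from the ring arithmetic
+described there; an illustrative Python cross-check (assumptions: 256-entry
+rings, one descriptor per 64B packet, 5 descriptors per 8K packet)::
+
+    ring = 256
+    small_per_queue = 27 * 1 + 7 * 32         # 251 small packets, one descriptor each
+    free = ring - small_per_queue             # 5 descriptors left per queue
+    big_per_queue = free // (4 + 1)           # 4 x 2000B segments + 1 header -> 1 packet
+    queues = 2
+    assert (small_per_queue + big_per_queue) * queues == 504
+    assert (small_per_queue * 64 + big_per_queue * 8000) * queues == 48128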
+
+Test Case 6: Split virtqueue vm2vm mergeable path test indirect descriptor with cbdma enable
+--------------------------------------------------------------------------------------------
+This case tests 2 virtio-user instances in the split ring mergeable path with indirect descriptors;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 4 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch testpmd by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets (includes 251 small packets and 32 8K packets)::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+    testpmd>stop
+
+6. Start vhost testpmd, then quit pdump and all three testpmd instances. The split virtqueue mergeable path uses indirect descriptors, so an 8K packet occupies just one ring entry. So check that virtio-user1 received 512 packets and 112128 bytes, with 502 packets of 64 length and 10 packets of 8K length in pdump-virtio-rx.pcap.
+
+7. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3]
+
+8. Rerun steps 3-6.
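+
+With indirect descriptors each 8K chain consumes only one ring entry, which is
+why step 6 expects more 8K packets than the non-indirect case; an illustrative
+cross-check under the same assumptions as before::
+
+    ring = 256
+    small_per_queue = 27 * 1 + 7 * 32          # 251 small packets
+    big_per_queue = ring - small_per_queue     # one descriptor per 8K chain -> 5 packets
+    queues = 2
+    assert (small_per_queue + big_per_queue) * queues == 512
+    assert (small_per_queue * 64 + big_per_queue * 8000) * queues == 112128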
+
+Test Case 7: packed virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enable
+---------------------------------------------------------------------------------------------------
+This case tests the payload of 2 virtio-user instances in the packed ring non-mergeable path with multi-queues;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 2 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+
+6. Start vhost testpmd, check that virtio-user1 RX-packets is 448 and RX-bytes is 28672, and that there are 448 packets with 64 length in pdump-virtio-rx.pcap.
+
+7. Quit vhost testpmd::
+
+    testpmd>quit
+
+8. Clear virtio-user1 port stats, execute below command::
+
+    testpmd>stop
+    testpmd>clear port stats all
+    testpmd>start
+
+9. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1]
+
+10. Rerun step 3.
+
+11. Virtio-user0 send packets::
+
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+
+12. Rerun step 6.
+
+Test Case 8: packed virtqueue vm2vm mergeable path multi-queues payload check with cbdma enable
+-----------------------------------------------------------------------------------------------
+This case tests the payload of 2 virtio-user instances in the packed ring mergeable path with multi-queues;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 1 CBDMA channel to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+
+6. Start vhost testpmd, then quit pdump, check that virtio-user1 received 502 packets and 279232 bytes, and that there are 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap.
+
+7. Quit vhost testpmd::
+
+    testpmd>quit
+
+8. Clear virtio-user1 port stats, execute below command::
+
+    testpmd>stop
+    testpmd>clear port stats all
+    testpmd>start
+
+9. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0]
+
+10. Rerun step 3.
+
+11. Virtio-user0 send packets::
+
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+
+12. Rerun step 6.
+
+Test Case 9: packed virtqueue vm2vm inorder mergeable path multi-queues payload check with cbdma enable
+-------------------------------------------------------------------------------------------------------
+This case tests the payload of 2 virtio-user instances in the packed ring inorder mergeable path with multi-queues;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 5 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+
+6. Start vhost testpmd, then quit pdump, check that virtio-user1 received 502 packets and 279232 bytes, and that there are 54 packets with 4640 length and 448 packets with 64 length in pdump-virtio-rx.pcap.
+
+7. Quit vhost testpmd::
+
+    testpmd>quit
+
+8. Clear virtio-user1 port stats, execute below command::
+
+    testpmd>stop
+    testpmd>clear port stats all
+    testpmd>start
+
+9. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4]
+
+10. Rerun step 3.
+
+11. Virtio-user0 send packets::
+
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+
+12. Rerun step 6.
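+
+Note that the automated suite only performs the iova=pa relaunch (steps 9-12)
+on hosts that are not set up with 2MB hugepages. A sketch of such a guard,
+modeled on the suite's check_2M_env flag (the exact probe below is an
+assumption)::
+
+    def uses_2m_hugepages():
+        # Assumes a Linux host; checks the default hugepage size in /proc/meminfo.
+        with open("/proc/meminfo") as f:
+            for line in f:
+                if line.startswith("Hugepagesize:"):
+                    return "2048 kB" in line
+        return False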
+
+Test Case 10: packed virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enable
+------------------------------------------------------------------------------------------------------------
+This case tests the payload of 2 virtio-user instances in the packed ring inorder non-mergeable path with multi-queues;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+
+6. Start vhost testpmd, then quit pdump, check that virtio-user1 received 448 packets and 28672 bytes, and that there are 448 packets with 64 length in pdump-virtio-rx.pcap.
+
+7. Quit vhost testpmd::
+
+    testpmd>quit
+
+8. Clear virtio-user1 port stats, execute below command::
+
+    testpmd>stop
+    testpmd>clear port stats all
+    testpmd>start
+
+9. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+
+10. Rerun step 3.
+
+11. Virtio-user0 send packets::
+
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+
+12. Rerun step 6.
+
+Test Case 11: packed virtqueue vm2vm vectorized-rx path multi-queues payload check with cbdma enable
+----------------------------------------------------------------------------------------------------
+This case tests the payload of 2 virtio-user instances in the packed ring vectorized-rx path with multi-queues;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096
+
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+
+6. Start vhost testpmd, then quit pdump, check that virtio-user1 received 448 packets and 28672 bytes, and that there are 448 packets with 64 length in pdump-virtio-rx.pcap.
+
+7. Quit vhost testpmd::
+
+    testpmd>quit
+
+8. Clear virtio-user1 port stats, execute below command::
+
+    testpmd>stop
+    testpmd>clear port stats all
+    testpmd>start
+
+9. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore12@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7]
+
+10. Rerun step 3.
+
+11. Virtio-user0 send packets::
+
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+
+12. Rerun step 6.
+
+Test Case 12: packed virtqueue vm2vm vectorized path multi-queues payload check with ring size is not power of 2 and cbdma enable
+---------------------------------------------------------------------------------------------------------------------------------
+This case tests the payload of 2 virtio-user instances in the packed ring vectorized path with a ring size that is not a power of 2 and with multi-queues;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send packets::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097
+
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+
+6. Start vhost testpmd, then quit pdump, check that virtio-user1 received 448 packets and 28672 bytes, and that there are 448 packets with 64 length in pdump-virtio-rx.pcap.
+
+7. Quit vhost testpmd::
+
+    testpmd>quit
+
+8. Clear virtio-user1 port stats, execute below command::
+
+    testpmd>stop
+    testpmd>clear port stats all
+    testpmd>start
+
+9. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore12@0000:00:04.3,lcore12@0000:00:04.4,lcore12@0000:00:04.5,lcore12@0000:00:04.6,lcore12@0000:00:04.7]
+
+10. Rerun step 3.
+
+11. Virtio-user0 send packets::
+
+    testpmd>set burst 32
+    testpmd>set txpkts 64
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set burst 1
+    testpmd>set txpkts 64,256,2000,64,256,2000
+    testpmd>start tx_first 27
+    testpmd>stop
+
+12. Rerun step 6.
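+
+Test Case 12 deliberately uses a ring size that is not a power of 2 (4097);
+a quick illustrative check of that property::
+
+    def is_power_of_2(n):
+        return n > 0 and (n & (n - 1)) == 0
+
+    assert is_power_of_2(4096)
+    assert not is_power_of_2(4097)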
+
+Test Case 13: packed virtqueue vm2vm vectorized-tx path multi-queues test indirect descriptor with cbdma enable
+---------------------------------------------------------------------------------------------------------------
+This case tests 2 virtio-user instances in the packed ring vectorized-tx path with multi-queues and indirect descriptors;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send 8k length packets::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
+
+    testpmd>set burst 1
+    testpmd>start tx_first 27
+    testpmd>stop
+    testpmd>set burst 32
+    testpmd>start tx_first 7
+    testpmd>stop
+    testpmd>set txpkts 2000,2000,2000,2000
+    testpmd>start tx_first 1
+    testpmd>stop
+
+6. Start vhost testpmd, then quit pdump and all three testpmd instances. The packed virtqueue vectorized-tx path uses indirect descriptors, so an 8K packet occupies just one ring entry. Check that virtio-user1 received 512 packets and 112128 bytes, with 502 packets of 64 length and 10 packets of 8K length in pdump-virtio-rx.pcap.
+
+7. Relaunch vhost with iova=pa by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0;txq1],dma_ring_size=2048' \
+    --iova=pa -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+
+8. Rerun steps 3-6.
+
+Test Case 14: packed virtqueue vm2vm vectorized-tx path test batch processing with cbdma enable
+-----------------------------------------------------------------------------------------------
+This case tests batch processing with 2 virtio-user instances in the packed ring vectorized-tx path with 1 queue;
+vhost-user runs testpmd to forward packets, and pdump captures the packets received on the virtio-user side.
+
+1. Bind 8 CBDMA channels to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1,client=1,dmas=[txq0],dma_ring_size=2048' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=1,client=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=1 --rxq=1 --txq=1 --txd=256 --rxd=256 --no-flush-rx \
+    --lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore11@0000:00:04.2,lcore11@0000:00:04.3,lcore11@0000:00:04.4,lcore11@0000:00:04.5,lcore11@0000:00:04.6,lcore11@0000:00:04.7]
+
+3. Launch virtio-user1 by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
+    -- -i --nb-cores=1 --rxq=1 --txq=1 --txd=256 --rxd=256
+    testpmd>set fwd rxonly
+    testpmd>start
+
+4. Start pdump to capture virtio-user1 packets, as common step 2.
+
+5. Launch virtio-user0 and send 1 packet::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --force-max-simd-bitwidth=512 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
+    -- -i --nb-cores=1 --rxq=1 --txq=1 --txd=256 --rxd=256
+    testpmd>set burst 1
+    testpmd>start tx_first 1
+    testpmd>stop
+
+6. Start vhost testpmd, then quit pdump and all three testpmd instances, check that virtio-user1 received 1 packet and 64 bytes, and that there is 1 packet with 64 length in pdump-virtio-rx.pcap.
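+
+Across all cases the injected packet counts follow the same rule; a hypothetical
+helper summarizing how "start tx_first N" was counted in this plan::
+
+    def tx_first_count(tx_first, burst, queues):
+        # testpmd "start tx_first N" injects N bursts of `burst` packets per queue.
+        return tx_first * burst * queues
+
+    assert tx_first_count(1, 1, 1) == 1        # Test Case 14: a single 64B packet
+    assert tx_first_count(7, 32, 2) == 448     # the 64B floods used in earlier cases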
From patchwork Thu Apr 21 07:03:31 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 109952
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 5/5] tests/vm2vm_virtio_user_cbdma: add new testsuite of DPDK-22.03
Date: Thu, 21 Apr 2022 15:03:31 +0800
Message-Id: <20220421070331.1554633-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

Following commit 53d3f4778c (vhost: integrate dmadev in asynchronous
data-path), add the new tests/TestSuite_vm2vm_virtio_user_cbdma.py.

Signed-off-by: Wei Ling
Tested-by: Chenyu Huang
---
 tests/TestSuite_vm2vm_virtio_user_cbdma.py | 1344 ++++++++++++++++++++
 1 file changed, 1344 insertions(+)
 create mode 100644 tests/TestSuite_vm2vm_virtio_user_cbdma.py

diff --git a/tests/TestSuite_vm2vm_virtio_user_cbdma.py b/tests/TestSuite_vm2vm_virtio_user_cbdma.py
new file mode 100644
index 00000000..b3f78778
--- /dev/null
+++ b/tests/TestSuite_vm2vm_virtio_user_cbdma.py
@@ -0,0 +1,1344 @@
+# BSD LICENSE
+#
+# Copyright(c) <2022> Intel Corporation.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+DPDK Test suite.
+
+Test cases for vm2vm virtio-user with CBDMA.
+This suite includes split virtqueue vm2vm in-order mergeable,
+in-order non-mergeable, mergeable, non-mergeable and vector_rx
+path tests, and packed virtqueue vm2vm in-order mergeable,
+in-order non-mergeable, mergeable and non-mergeable path tests.
+"""
+import re
+
+from framework.packet import Packet
+from framework.pmd_output import PmdOutput
+from framework.test_case import TestCase
+
+
+class TestVM2VMVirtioUserCbdma(TestCase):
+    def set_up_all(self):
+        self.memory_channel = self.dut.get_memory_channels()
+        self.dump_virtio_pcap = "/tmp/pdump-virtio-rx.pcap"
+        self.dump_vhost_pcap = "/tmp/pdump-vhost-rx.pcap"
+        self.app_pdump = self.dut.apps_name["pdump"]
+        self.dut_ports = self.dut.get_ports()
+        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+        self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket)
+        self.vhost_core_list = self.cores_list[0:9]
+        self.virtio0_core_list = self.cores_list[10:12]
+        self.virtio1_core_list = self.cores_list[12:14]
+        self.vhost_user = self.dut.new_session(suite="vhost-user")
+        self.virtio_user0 = self.dut.new_session(suite="virtio-user0")
+        self.virtio_user1 = self.dut.new_session(suite="virtio-user1")
+        self.pdump_user = self.dut.new_session(suite="pdump-user")
+        self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user)
+        self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0)
+        self.virtio_user1_pmd = PmdOutput(self.dut, self.virtio_user1)
+        self.testpmd_name = self.dut.apps_name["test-pmd"].split("/")[-1]
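+
+    # Note on the core layout above (illustrative, assuming the DUT exposes at
+    # least 14 cores on the NIC's NUMA node): cores 0-8 run the vhost testpmd,
+    # cores 10-11 run virtio-user0 and cores 12-13 run virtio-user1, so the
+    # three testpmd instances never contend for the same lcores.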
+ """ + self.nopci = True + self.queue_num = 1 + self.dut.send_expect("rm -rf ./vhost-net*", "#") + self.dut.send_expect("rm -rf %s" % self.dump_virtio_pcap, "#") + self.dut.send_expect("rm -rf %s" % self.dump_vhost_pcap, "#") + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#") + + def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num, allow_diff_socket=False): + """ + get and bind cbdma ports into DPDK driver + """ + self.all_cbdma_list = [] + self.cbdma_list = [] + self.cbdma_str = "" + out = self.dut.send_expect( + "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30 + ) + device_info = out.split("\n") + for device in device_info: + pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device) + if pci_info is not None: + dev_info = pci_info.group(1) + # the numa id of ioat dev, only add the device which on same socket with nic dev + bus = int(dev_info[5:7], base=16) + if bus >= 128: + cur_socket = 1 + else: + cur_socket = 0 + if allow_diff_socket: + self.all_cbdma_list.append(pci_info.group(1)) + else: + if self.ports_socket == cur_socket: + self.all_cbdma_list.append(pci_info.group(1)) + self.verify( + len(self.all_cbdma_list) >= cbdma_num, "There no enough cbdma device" + ) + self.cbdma_list = self.all_cbdma_list[0:cbdma_num] + self.cbdma_str = " ".join(self.cbdma_list) + self.dut.send_expect( + "./usertools/dpdk-devbind.py --force --bind=%s %s" + % (self.drivername, self.cbdma_str), + "# ", + 60, + ) + + @staticmethod + def generate_dms_param(queues): + das_list = [] + for i in range(queues): + das_list.append("txq{}".format(i)) + das_param = "[{}]".format(";".join(das_list)) + return das_param + + @staticmethod + def generate_lcore_dma_param(cbdma_list, core_list): + group_num = int(len(cbdma_list) / len(core_list)) + lcore_dma_list = [] + if len(cbdma_list) == 1: + for core in core_list: + lcore_dma_list.append("lcore{}@{}".format(core, cbdma_list[0])) + elif len(core_list) == 1: + for cbdma in cbdma_list: + lcore_dma_list.append("lcore{}@{}".format(core_list[0], cbdma)) + else: + for cbdma in cbdma_list: + core_list_index = int(cbdma_list.index(cbdma) / group_num) + lcore_dma_list.append( + "lcore{}@{}".format(core_list[core_list_index], cbdma) + ) + lcore_dma_param = "[{}]".format(",".join(lcore_dma_list)) + return lcore_dma_param + + def bind_cbdma_device_to_kernel(self): + self.dut.send_expect("modprobe ioatdma", "# ") + self.dut.send_expect( + "./usertools/dpdk-devbind.py -u %s" % self.cbdma_str, "# ", 30 + ) + self.dut.send_expect( + "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" % self.cbdma_str, + "# ", + 60, + ) + + @property + def check_2M_env(self): + out = self.dut.send_expect( + "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " + ) + return True if out == "2048" else False + + def start_vhost_testpmd( + self, cores, eal_param="", param="", ports="", iova_mode="" + ): + if iova_mode: + eal_param += " --iova=" + iova_mode + self.vhost_user_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + + def start_virtio_testpmd_with_vhost_net1(self, cores, eal_param="", param=""): + """ + launch the testpmd as virtio with vhost_net1 + """ + if self.check_2M_env: + eal_param += " --single-file-segments" + self.virtio_user1_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="virtio-user1", + fixed_prefix=True, + ) + self.virtio_user1_pmd.execute_cmd("set fwd rxonly") + self.virtio_user1_pmd.execute_cmd("start") + + def 
+
+    def start_virtio_testpmd_with_vhost_net0(self, cores, eal_param="", param=""):
+        """
+        Launch testpmd as virtio-user0 connected to vhost-net0.
+        """
+        if self.check_2M_env:
+            eal_param += " --single-file-segments"
+        self.virtio_user0_pmd.start_testpmd(
+            cores=cores,
+            eal_param=eal_param,
+            param=param,
+            no_pci=True,
+            prefix="virtio-user0",
+            fixed_prefix=True,
+        )
+
+    def send_251_960byte_and_32_64byte_pkts(self):
+        """
+        Send 251 960-byte and 32 64-byte packets per queue from the
+        virtio-user0 testpmd.
+        """
+        self.virtio_user0_pmd.execute_cmd("set burst 1")
+        self.virtio_user0_pmd.execute_cmd("set txpkts 64,128,256,512")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 27")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.execute_cmd("set burst 32")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 7")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.execute_cmd("set txpkts 64")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 1")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.vhost_user_pmd.execute_cmd("start")
+        out = self.vhost_user_pmd.execute_cmd("show port stats all")
+        self.logger.info(out)
+
+    def send_27_4640byte_and_224_64byte_pkts(self):
+        """
+        Send 27 4640-byte and 224 64-byte packets per queue from the
+        virtio-user0 testpmd.
+        """
+        self.virtio_user0_pmd.execute_cmd("set burst 1")
+        self.virtio_user0_pmd.execute_cmd("set txpkts 64,256,2000,64,256,2000")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 27")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.execute_cmd("set burst 32")
+        self.virtio_user0_pmd.execute_cmd("set txpkts 64")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 7")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.vhost_user_pmd.execute_cmd("start")
+        out = self.vhost_user_pmd.execute_cmd("show port stats all")
+        self.logger.info(out)
+
+    def send_224_64byte_and_27_4640byte_pkts(self):
+        """
+        Send 224 64-byte and 27 4640-byte packets per queue from the
+        virtio-user0 testpmd (the same counts as above, in reverse order).
+        """
+        self.virtio_user0_pmd.execute_cmd("set burst 32")
+        self.virtio_user0_pmd.execute_cmd("set txpkts 64")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 7")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.execute_cmd("set burst 1")
+        self.virtio_user0_pmd.execute_cmd("set txpkts 64,256,2000,64,256,2000")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 27")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.vhost_user_pmd.execute_cmd("start")
+        out = self.vhost_user_pmd.execute_cmd("show port stats all")
+        self.logger.info(out)
+
+    def send_251_64byte_and_32_8000byte_pkts(self):
+        """
+        Send 251 64-byte and 32 8000-byte packets per queue from the
+        virtio-user0 testpmd.
+        """
+        self.virtio_user0_pmd.execute_cmd("set burst 1")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 27")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.execute_cmd("set burst 32")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 7")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.virtio_user0_pmd.execute_cmd("set txpkts 2000,2000,2000,2000")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 1")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.vhost_user_pmd.execute_cmd("start")
+        out = self.vhost_user_pmd.execute_cmd("show port stats all")
+        self.logger.info(out)
+
+    def send_1_64byte_pkts(self):
+        """
+        Send a single 64-byte packet from the virtio-user0 testpmd.
+        """
+        self.virtio_user0_pmd.execute_cmd("set burst 1")
+        self.virtio_user0_pmd.execute_cmd("start tx_first 1")
+        self.virtio_user0_pmd.execute_cmd("stop")
+        self.vhost_user_pmd.execute_cmd("start")
+        out = self.vhost_user_pmd.execute_cmd("show port stats all")
+        self.logger.info(out)
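+
+    # Note on the helper names above (illustrative): the counts are per TX
+    # queue, e.g. "251" = 27 bursts of 1 packet + 7 bursts of 32 packets.
+    # With --txq=2 the receiving side therefore sees twice that many packets
+    # (502), which is exactly what the check_dict values in the test cases
+    # below assert.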
+
+    def clear_virtio_user1_stats(self):
+        self.virtio_user1_pmd.execute_cmd("stop")
+        self.virtio_user1_pmd.execute_cmd("clear port stats all")
+        self.virtio_user1_pmd.execute_cmd("start")
+        out = self.virtio_user1_pmd.execute_cmd("show port stats all")
+        self.logger.info(out)
+
+    def start_pdump_to_capture_pkt(self):
+        """
+        Launch the pdump app with the dump port and file prefix.
+        The pdump app must be started after the testpmd it attaches to:
+        to dump the vhost testpmd, start the vhost testpmd first; to dump
+        a virtio testpmd, start that virtio testpmd first.
+        """
+        eal_params = self.dut.create_eal_parameters(
+            cores="Default", prefix="virtio-user1", fixed_prefix=True
+        )
+        command_line = (
+            self.app_pdump
+            + " %s -v -- "
+            + "--pdump 'device_id=net_virtio_user1,queue=*,rx-dev=%s,mbuf-size=8000'"
+        )
+        self.pdump_user.send_expect(
+            command_line % (eal_params, self.dump_virtio_pcap), "Port"
+        )
+
+    def check_virtio_user1_stats(self, check_dict):
+        """
+        Check the virtio-user1 port stats against check_dict, a mapping
+        of packet length to expected packet count.
+        """
+        out = self.virtio_user1_pmd.execute_cmd("show port stats all")
+        self.logger.info(out)
+        rx_packets = re.search(r"RX-packets:\s*(\d*)", out)
+        rx_bytes = re.search(r"RX-bytes:\s*(\d*)", out)
+        rx_num = int(rx_packets.group(1))
+        byte_num = int(rx_bytes.group(1))
+        packet_count = 0
+        byte_count = 0
+        for key, value in check_dict.items():
+            packet_count += value
+            byte_count += key * value
+        self.verify(
+            rx_num == packet_count,
+            "received packet number {} is not equal to the sent number {}".format(
+                rx_num, packet_count
+            ),
+        )
+        self.verify(
+            byte_num == byte_count,
+            "received byte count {} is not equal to the sent count {}".format(
+                byte_num, byte_count
+            ),
+        )
+
+    def check_packet_payload_valid(self, check_dict):
+        """
+        Check that the captured pcap file holds the expected number of
+        packets of each length.
+        """
+        self.pdump_user.send_expect("^c", "# ", 60)
+        self.dut.session.copy_file_from(
+            src=self.dump_virtio_pcap, dst=self.dump_virtio_pcap
+        )
+        pkt = Packet()
+        pkts = pkt.read_pcapfile(self.dump_virtio_pcap)
+        for key, value in check_dict.items():
+            count = 0
+            for i in range(len(pkts)):
+                if len(pkts[i]) == key:
+                    count += 1
+            self.verify(
+                value == count,
+                "pdump file has {} packets of length {}, expected {}".format(
+                    count, key, value
+                ),
+            )
+
+    def check_vhost_user_testpmd_logs(self):
+        out = self.vhost_user.get_session_before(timeout=30)
+        check_logs = [
+            "DMA completion failure on channel",
+            "DMA copy failed for channel",
+        ]
+        for check_log in check_logs:
+            self.verify(check_log not in out, "Vhost-user testpmd Exception")
+
+    def quit_all_testpmd(self):
+        self.vhost_user_pmd.quit()
+        self.virtio_user0_pmd.quit()
+        self.virtio_user1_pmd.quit()
+        self.pdump_user.send_expect("^c", "# ", 60)
+
+    def test_split_ring_non_mergeable_path_multi_queues_with_cbdma(self):
+        """
+        Test Case 1: split virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enable
+        """
+        self.get_cbdma_ports_info_and_bind_to_dpdk(2)
+        dmas = self.generate_dms_param(2)
+        lcore_dma = self.generate_lcore_dma_param(
+            cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2]
+        )
+        vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format(
+            dmas
+        ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format(
+            dmas
+        )
+        vhost_param = (
+            " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx"
+            + " --lcore-dma={}".format(lcore_dma)
+        )
+        self.start_vhost_testpmd(
cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio1_param = ( + " --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + ) + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio0_param = ( + " --enable-hw-vlan-strip --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + ) + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_960byte_and_32_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + self.check_vhost_user_testpmd_logs() + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_251_960byte_and_32_64byte_pkts() + check_dict = {960: 502, 64: 64} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + self.check_vhost_user_testpmd_logs() + + def test_split_ring_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 2: split virtqueue vm2vm mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(1) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list[0:1], core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + 
self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_ring_inorder_non_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 3: split virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(5) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_ring_vectorized_path_multi_queues_with_cbdma(self): + """ + Test Case 4: split virtqueue vm2vm vectorized path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0],dma_ring_size=2048'" + + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048'" + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 
--no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_ring_inorder_mergeable_path_multi_queues_test_non_indirect_descriptor_with_cbdma( + self, + ): + """ + Test Case 5: Split virtqueue vm2vm inorder mergeable path test non-indirect descriptor with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(4) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=256 --rxd=256 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1,queue_size=256" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 2} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not 
self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.virtio_user1_pmd.quit() + self.virtio_user0_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 2} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_split_ring_inorder_mergeable_path_multi_queues_test_indirect_descriptor_with_cbdma( + self, + ): + """ + Test Case 6: Split virtqueue vm2vm mergeable path test indirect descriptor with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(4) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=256 --rxd=256 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=256" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.virtio_user1_pmd.quit() + self.virtio_user0_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def 
test_packed_ring_non_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 7: packed virtqueue vm2vm non-mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(2) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 8: packed virtqueue vm2vm mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(1) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list[0:1], core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + 
eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_inorder_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 9: packed virtqueue vm2vm inorder mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(5) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_27_4640byte_and_224_64byte_pkts() + check_dict = {4640: 54, 64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def 
test_packed_ring_inorder_non_mergeable_path_multi_queues_with_cbdma(self): + """ + Test Case 10: packed virtqueue vm2vm inorder non-mergeable path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + dmas = self.generate_dms_param(1) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_inorder_vectorized_rx_path_multi_queues_with_cbdma(self): + """ + Test Case 11: packed virtqueue vm2vm vectorized-rx path multi-queues payload check with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:3] + ) + vhost_eal_param = ( + "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas=[txq0],dma_ring_size=2048'" + + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq1],dma_ring_size=2048'" + ) + vhost_param = ( + " --nb-cores=2 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net1( + 
cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4096" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4096 --rxd=4096" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_inorder_vectorized_path_multi_queues_check_with_ring_size_is_not_power_of_2_queues_with_cbdma( + self, + ): + """ + Test Case 12: packed virtqueue vm2vm vectorized path multi-queues payload check with ring size is not power of 2 and cbdma enabled + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=4096 --rxd=4096 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=4097" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=4097 --rxd=4097" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.clear_virtio_user1_stats() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_pdump_to_capture_pkt() 
+ + self.send_224_64byte_and_27_4640byte_pkts() + check_dict = {64: 448} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_vectorized_tx_path_multi_queues_test_indirect_descriptor_with_cbdma( + self, + ): + """ + Test Case 13: packed virtqueue vm2vm vectorized-tx path multi-queues test indirect descriptor with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + dmas = self.generate_dms_param(2) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + vhost_param = ( + " --nb-cores=1 --txd=256 --rxd=256 --txq=2 --rxq=2 --no-flush-rx" + + " --lcore-dma={}".format(lcore_dma) + ) + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="va", + ) + + virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256" + virtio1_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + + virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=2,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256" + virtio0_param = " --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256" + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + if not self.check_2M_env: + self.logger.info("Quit and relaunch vhost with iova=pa") + self.vhost_user_pmd.quit() + self.virtio_user1_pmd.quit() + self.virtio_user0_pmd.quit() + self.start_vhost_testpmd( + cores=self.vhost_core_list, + eal_param=vhost_eal_param, + param=vhost_param, + ports=self.cbdma_list, + iova_mode="pa", + ) + self.start_virtio_testpmd_with_vhost_net1( + cores=self.virtio1_core_list, + eal_param=virtio1_eal_param, + param=virtio1_param, + ) + self.start_pdump_to_capture_pkt() + self.start_virtio_testpmd_with_vhost_net0( + cores=self.virtio0_core_list, + eal_param=virtio0_eal_param, + param=virtio0_param, + ) + + self.send_251_64byte_and_32_8000byte_pkts() + check_dict = {64: 502, 8000: 10} + self.check_virtio_user1_stats(check_dict) + self.check_packet_payload_valid(check_dict) + + def test_packed_ring_vectorized_tx_path_test_batch_processing_with_cbdma(self): + """ + Test Case 14: packed virtqueue vm2vm vectorized-tx path test batch processing with cbdma enable + """ + self.get_cbdma_ports_info_and_bind_to_dpdk(8) + dmas = self.generate_dms_param(1) + lcore_dma = self.generate_lcore_dma_param( + cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:2] + ) + vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=1,client=1,dmas={},dma_ring_size=2048'".format( + dmas + ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=1,client=1,dmas={},dma_ring_size=2048'".format( 
+            dmas
+        )
+        vhost_param = (
+            " --nb-cores=1 --txd=256 --rxd=256 --txq=1 --rxq=1 --no-flush-rx"
+            + " --lcore-dma={}".format(lcore_dma)
+        )
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=self.cbdma_list,
+            iova_mode="va",
+        )
+
+        virtio1_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256"
+        virtio1_param = " --nb-cores=1 --rxq=1 --txq=1 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net1(
+            cores=self.virtio1_core_list,
+            eal_param=virtio1_eal_param,
+            param=virtio1_param,
+        )
+        self.start_pdump_to_capture_pkt()
+
+        virtio0_eal_param = "--force-max-simd-bitwidth=512 --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net0,queues=1,server=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256"
+        virtio0_param = " --nb-cores=1 --rxq=1 --txq=1 --txd=256 --rxd=256"
+        self.start_virtio_testpmd_with_vhost_net0(
+            cores=self.virtio0_core_list,
+            eal_param=virtio0_eal_param,
+            param=virtio0_param,
+        )
+
+        self.send_1_64byte_pkts()
+        check_dict = {64: 1}
+        self.check_virtio_user1_stats(check_dict)
+        self.check_packet_payload_valid(check_dict)
+
+    def close_all_session(self):
+        if getattr(self, "vhost_user", None):
+            self.dut.close_session(self.vhost_user)
+        if getattr(self, "virtio_user0", None):
+            self.dut.close_session(self.virtio_user0)
+        if getattr(self, "virtio_user1", None):
+            self.dut.close_session(self.virtio_user1)
+        if getattr(self, "pdump_user", None):
+            self.dut.close_session(self.pdump_user)
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.quit_all_testpmd()
+        self.dut.kill_all()
+        self.bind_cbdma_device_to_kernel()
+
+    def tear_down_all(self):
+        """
+        Run after the whole test suite.
+        """
+        self.close_all_session()
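
Note: throughout the suite, check_dict maps a packet length to the packet count expected at virtio-user1,
and check_virtio_user1_stats compares those totals against the testpmd counters. A minimal standalone
sketch of that arithmetic (the stats line below is fabricated for illustration; the regexes mirror the
ones used in the suite):

    import re

    # pkt_len -> expected count at virtio-user1, e.g. Test Case 2's totals
    check_dict = {4640: 54, 64: 448}

    # Representative "show port stats all" fragment (illustrative sample only)
    out = "RX-packets: 502  RX-missed: 0  RX-bytes:  279232"

    rx_num = int(re.search(r"RX-packets:\s*(\d*)", out).group(1))
    byte_num = int(re.search(r"RX-bytes:\s*(\d*)", out).group(1))

    expected_pkts = sum(check_dict.values())                    # 54 + 448 = 502
    expected_bytes = sum(k * v for k, v in check_dict.items())  # 250560 + 28672
    assert rx_num == expected_pkts and byte_num == expected_bytes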