From patchwork Thu May 5 06:45:27 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110650
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V3 1/3] test_plans/index: add
vm2vm_virtio_pmd_cbdma
Date: Thu, 5 May 2022 06:45:27 +0000
Message-Id: <20220505064527.52578-1-weix.ling@intel.com>
List-Id: test suite reviews and discussions

Following commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path), add the new testsuite vm2vm_virtio_pmd_cbdma to cover the vm2vm split ring and packed ring path tests with CBDMA.

1) Add the new testplan vm2vm_virtio_pmd_cbdma_test_plan into test_plans/index.rst.

Signed-off-by: Wei Ling
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 007c01b3..9d21c461 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -233,6 +233,7 @@ The following are the test plans for the DPDK DTS automated test system.
     vm2vm_virtio_net_perf_test_plan
     vm2vm_virtio_net_perf_cbdma_test_plan
     vm2vm_virtio_pmd_test_plan
+    vm2vm_virtio_pmd_cbdma_test_plan
     dpdk_gro_lib_test_plan
     dpdk_gso_lib_test_plan
     vswitch_sample_cbdma_test_plan

From patchwork Thu May 5 06:45:39 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110651
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V3 2/3] test_plans/vm2vm_virtio_pmd_cbdma_test_plan: add vm2vm_virtio_pmd_cbdma testplan
Date: Thu, 5 May 2022 06:45:39 +0000
Message-Id: <20220505064539.52606-1-weix.ling@intel.com>

v1:
Following commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path), add the new testsuite vm2vm_virtio_pmd_cbdma to cover the vm2vm split ring and packed ring path tests with CBDMA.
1) Add the new testplan test_plans/vm2vm_virtio_pmd_cbdma_test_plan into test_plans.

v2:
Modify the `Description` content in the test plan.

v3:
Fix WARNING info in the test plan.
Signed-off-by: Wei Ling
---
 .../vm2vm_virtio_pmd_cbdma_test_plan.rst | 389 ++++++++++++++++++
 1 file changed, 389 insertions(+)
 create mode 100644 test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst

diff --git a/test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst b/test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst
new file mode 100644
index 00000000..2690966a
--- /dev/null
+++ b/test_plans/vm2vm_virtio_pmd_cbdma_test_plan.rst
@@ -0,0 +1,389 @@
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+================================================
+vm2vm vhost-user/virtio-pmd with cbdma test plan
+================================================
+
+Description
+===========
+
+The vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU, and it is implemented in an asynchronous way.
+In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
+channels, and one DMA channel can be shared by multiple vrings at the same time. The vhost enqueue operation with CBDMA channels is supported
+in both split and packed ring.
+
+This document provides the test plan for testing some basic functions with a CBDMA device in the vm2vm vhost-user/virtio-pmd topology environment.
+
+1. vm2vm mergeable and normal path tests with virtio 1.0 and virtio 1.1.
+2. vm2vm mergeable path test with virtio 1.0 and dynamic change of the queue number.
+
+Note:
+
+1. For the packed virtqueue virtio-net test, QEMU version > 4.2.0 and a VM kernel version > v5.1 are required, and packed ring multi-queues do not support reconnect in QEMU yet.
+2. For the split virtqueue virtio-net with multi-queues server mode test, QEMU version >= 5.2.0 is required, due to a reconnect issue with multi-queues in older QEMU.
+3. When DMA devices are bound to the vfio driver, VA mode is the default and recommended. For PA mode, page-by-page mapping may
+exceed the IOMMU's max capability, so it is better to use 1G guest hugepages.
+4. A local DPDK patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd.
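For reference, the ``dmas=[...]`` vdev values and ``--lcore-dma=[...]`` strings used in the test cases below can be generated programmatically. The sketch below mirrors the helper logic in the accompanying testsuite patch; the function names here are illustrative, and it assumes at least as many CBDMA channels as lcores.

```python
def generate_dmas_param(queues):
    """Build the dmas=[txq0;txq1;...] value for a vhost vdev with N queues."""
    return "[{}]".format(";".join("txq{}".format(i) for i in range(queues)))


def generate_lcore_dma_param(cbdma_list, core_list):
    """Assign CBDMA channels to lcores in contiguous groups, producing the
    --lcore-dma=[lcoreX@BDF,...] argument (assumes len(cbdma_list) >= len(core_list))."""
    group_num = len(cbdma_list) // len(core_list)
    pairs = []
    for idx, cbdma in enumerate(cbdma_list):
        core = core_list[idx // group_num]
        pairs.append("lcore{}@{}".format(core, cbdma))
    return "[{}]".format(",".join(pairs))


if __name__ == "__main__":
    # 8-queue vhost port, as in Test Case 1
    print(generate_dmas_param(8))
    # 4 CBDMA channels shared by 2 lcores
    print(generate_lcore_dma_param(
        ["0000:00:04.0", "0000:00:04.1", "0000:00:04.2", "0000:00:04.3"],
        [2, 3]))
```

With 16 channels and lcores 2-5, this reproduces the four-channels-per-lcore layout shown in the testpmd commands below.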
+
+For more about the dpdk-testpmd application, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+For more about QEMU, you can refer to the QEMU doc:
+https://qemu-project.gitlab.io/qemu/system/invocation.html
+
+Prerequisites
+=============
+
+Topology
+--------
+    Test flow: Virtio-pmd-->Vhost-user-->Testpmd-->Vhost-user-->Virtio-pmd
+
+Software
+--------
+    Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library= <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
+
+    For example:
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110
+
+2. Get the PCI device ID and DMA device ID of the DUT. For example, 0000:18:00.0 is a PCI device ID; 0000:00:04.0 and 0000:00:04.1 are DMA device IDs::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and CBDMA channels to vfio-pci::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+
+    For example, bind 1 NIC port and 2 CBDMA channels::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:18:00.0
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 0000:00:04.1
+
+2.
On VM1 and VM2, bind the virtio device (for example, 0000:00:05.0) to the vfio-pci driver::
+
+    modprobe vfio
+    modprobe vfio-pci
+    echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+    # ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0
+
+Test Case 1: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable with server mode stable test
+----------------------------------------------------------------------------------------------------------
+This case uses testpmd and QEMU to test the split ring mergeable path with 8 queues and CBDMA enabled in server mode.
+In the VMs, testpmd is used to send imix packets, and the vhost-user side is relaunched 10 times to test stability.
+
+1. Bind 16 CBDMA channels to vfio-pci, as described in common step 1.
+
+2. Launch the vhost testpmd with 2 vhost ports and 8 queues with the below commands::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,\
+    lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7]
+    testpmd> start
+
+3.
Launch VM1 and VM2 using QEMU::
+
+    taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,\
+    vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+4. On VM1 and VM2, bind the virtio device to the vfio-pci driver, as described in common step 2.
+
+5.
Launch testpmd in VM1::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \
+    --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    testpmd> set fwd mac
+    testpmd> start
+
+6. Launch testpmd in VM2 and send imix pkts from VM2::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \
+    --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    testpmd> set fwd mac
+    testpmd> set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
+    testpmd> start tx_first 1
+
+7. Check that imix packets can be looped between the two VMs and that all 8 queues have rx/tx packets::
+
+    testpmd> show port stats all
+    testpmd> stop
+
+8. Relaunch and start the vhost side testpmd with the below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,\
+    lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7]
+    testpmd> start
+
+9.
Send pkts by testpmd in VM2, and check that imix packets can be looped between the two VMs and that all 8 queues have rx/tx packets::
+
+    testpmd> stop
+    testpmd> start tx_first 1
+    testpmd> show port stats all
+    testpmd> stop
+
+10. Rerun steps 7-8 10 times.
+
+Test Case 2: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDMA enable with server mode test
+-------------------------------------------------------------------------------------------------------------
+This case uses testpmd and QEMU to test the split ring mergeable path with CBDMA enabled in server mode.
+In the VMs, testpmd is used to send imix packets, and then the queue number is changed dynamically from 4 to 8 to check that everything still works.
+
+1. Bind 16 CBDMA channels to vfio-pci, as described in common step 1.
+
+2. Launch the vhost testpmd with 2 vhost ports with the below commands::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3]' \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \
+    --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,\
+    lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7]
+    testpmd> start
+
+3.
Launch VM1 and VM2 using QEMU::
+
+    taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,\
+    vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+4. On VM1 and VM2, bind the virtio device to the vfio-pci driver, as described in common step 2.
+
+5.
Launch testpmd in VM1::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \
+    --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    testpmd> set fwd mac
+    testpmd> start
+
+6. Launch testpmd in VM2 and send imix pkts from VM2::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \
+    --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    testpmd> set fwd mac
+    testpmd> set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
+    testpmd> start tx_first 1
+
+7. Check that imix packets can be looped between the two VMs and that 4 queues (queue0 to queue3) have rx/tx packets::
+
+    testpmd> show port stats all
+    testpmd> stop
+
+8. Relaunch and start the vhost side testpmd with 8 queues::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,\
+    lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7]
+    testpmd> start
+
+9.
Send pkts by testpmd in VM2, and check that imix packets can be looped between the two VMs and that all 8 queues have rx/tx packets::
+
+    testpmd> stop
+    testpmd> start tx_first 1
+    testpmd> show port stats all
+    testpmd> stop
+
+10. Rerun steps 7-8 10 times.
+
+Test Case 3: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable test
+-----------------------------------------------------------------------------------
+This case uses testpmd and QEMU to test the packed ring mergeable path with 8 queues and CBDMA enabled.
+In the VMs, testpmd is used to send imix packets; then one VM is quit and relaunched, changing it from the packed ring path to the split ring path, to test.
+
+1. Bind 16 CBDMA channels to vfio-pci, as described in common step 1.
+
+2. Launch the vhost testpmd with 2 vhost ports and 8 queues with the below commands::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,\
+    lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore5@0000:80:04.4,lcore5@0000:80:04.5,lcore5@0000:80:04.6,lcore5@0000:80:04.7]
+    testpmd> start
+
+3.
Launch VM1 and VM2 with QEMU::
+
+    taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10
+
+    taskset -c 17-27 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,\
+    vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12
+
+4. On VM1 and VM2, bind the virtio device to the vfio-pci driver, as described in common step 2.
+
+5.
Launch testpmd in VM1::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \
+    --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    testpmd> set fwd mac
+    testpmd> start
+
+6. Launch testpmd in VM2 and send imix pkts from VM2::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \
+    --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    testpmd> set fwd mac
+    testpmd> set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
+    testpmd> start tx_first 1
+
+7. Check that imix packets can be looped between the two VMs and that all 8 queues have rx/tx packets::
+
+    testpmd> show port stats all
+    testpmd> stop
+
+8. Quit VM2 and relaunch VM2 with split ring::
+
+    taskset -c 6-16 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 9 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\
+    mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+9. Bind the virtio device to the vfio-pci driver::
+
+    modprobe vfio
+    modprobe vfio-pci
+    echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+    # ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0
+
+10.
Launch testpmd in VM2 and send imix pkts from VM2::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip \
+    --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    testpmd> set fwd mac
+    testpmd> set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
+
+11. Check that imix packets can be looped between the two VMs and that all 8 queues have rx/tx packets::
+
+    testpmd> show port stats all
+    testpmd> stop

From patchwork Thu May 5 06:45:52 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110652
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V3 3/3] tests/vm2vm_virtio_pmd_cbdma: add vm2vm_virtio_pmd_cbdma testsuite
Date: Thu, 5 May 2022 06:45:52 +0000
Message-Id: <20220505064552.52632-1-weix.ling@intel.com>

Following commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path), add the new testsuite vm2vm_virtio_pmd_cbdma to cover the vm2vm split ring and packed ring path tests with CBDMA.

1) Add the new testsuite tests/vm2vm_virtio_pmd_cbdma into tests.

Signed-off-by: Wei Ling
---
 tests/TestSuite_vm2vm_virtio_pmd_cbdma.py | 426 ++++++++++++++++++++++
 1 file changed, 426 insertions(+)
 create mode 100644 tests/TestSuite_vm2vm_virtio_pmd_cbdma.py

diff --git a/tests/TestSuite_vm2vm_virtio_pmd_cbdma.py b/tests/TestSuite_vm2vm_virtio_pmd_cbdma.py
new file mode 100644
index 00000000..d9f49969
--- /dev/null
+++ b/tests/TestSuite_vm2vm_virtio_pmd_cbdma.py
@@ -0,0 +1,426 @@
+# BSD LICENSE
+#
+# Copyright(c) <2022> Intel Corporation.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+DPDK Test suite.
+
+Test cases for vhost-user/virtio-pmd VM2VM.
+Test cases for vhost/virtio-pmd (0.95/1.0) VM2VM tests with 3 rx/tx paths,
+including mergeable, normal and vector_rx.
+Test cases for vhost/virtio-pmd (1.1) VM2VM tests with the mergeable path.
+For the mergeable path, check the large packet payload.
+"""
+import re
+import time
+
+import framework.utils as utils
+from framework.pmd_output import PmdOutput
+from framework.test_case import TestCase
+from framework.virt_common import VM
+
+
+class TestVM2VMVirtioPmdCbdma(TestCase):
+    def set_up_all(self):
+        self.dut_ports = self.dut.get_ports()
+        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
+        self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket)
+        self.vhost_core_list = self.cores_list[0:5]
+        self.memory_channel = self.dut.get_memory_channels()
+        self.base_dir = self.dut.base_dir.replace("~", "/root")
+        self.pci_info = self.dut.ports_info[0]["pci"]
+        self.app_testpmd_path = self.dut.apps_name["test-pmd"]
+        self.testpmd_name = self.app_testpmd_path.split("/")[-1]
+        self.vhost_user = self.dut.new_session(suite="vhost")
+        self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user)
+
+    def set_up(self):
+        """
+        Run before each test case.
+        """
+        self.table_header = [
+            "FrameSize(B)",
+            "Mode",
+            "Throughput(Mpps)",
+            "Queue Number",
+            "Path",
+        ]
+        self.result_table_create(self.table_header)
+        self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "#")
+        self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
+        self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#")
+        self.vm_num = 2
+        self.vm_dut = []
+        self.vm = []
+
+    def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num, allow_diff_socket=False):
+        """
+        Get all CBDMA ports.
+        """
+        self.all_cbdma_list = []
+        self.cbdma_list = []
+        self.cbdma_str = ""
+        out = self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30
+        )
+        device_info = out.split("\n")
+        for device in device_info:
+            pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device)
+            if pci_info is not None:
+                dev_info = pci_info.group(1)
+                # the numa id of the ioat dev; only add devices on the same socket as the nic dev
+                bus = int(dev_info[5:7], base=16)
+                if bus >= 128:
+                    cur_socket = 1
+                else:
+                    cur_socket = 0
+                if allow_diff_socket:
+                    self.all_cbdma_list.append(pci_info.group(1))
+                else:
+                    if self.ports_socket == cur_socket:
+                        self.all_cbdma_list.append(pci_info.group(1))
+        self.verify(
+            len(self.all_cbdma_list) >= cbdma_num, "There are not enough CBDMA devices"
+        )
+        self.cbdma_list = self.all_cbdma_list[0:cbdma_num]
+        self.cbdma_str = " ".join(self.cbdma_list)
+        self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --force --bind=%s %s"
+            % (self.drivername, self.cbdma_str),
+            "# ",
+            60,
+        )
+
+    @staticmethod
+    def generate_dms_param(queues):
+        das_list = []
+        for i in range(queues):
+            das_list.append("txq{}".format(i))
+        das_param = "[{}]".format(";".join(das_list))
+        return das_param
+
+    @staticmethod
+    def generate_lcore_dma_param(cbdma_list, core_list):
+        group_num = int(len(cbdma_list) / len(core_list))
+        lcore_dma_list = []
+        if len(cbdma_list) == 1:
+            for core in core_list:
+                lcore_dma_list.append("lcore{}@{}".format(core, cbdma_list[0]))
+        elif len(core_list) == 1:
+            for cbdma in cbdma_list:
+                lcore_dma_list.append("lcore{}@{}".format(core_list[0], cbdma))
+        else:
+            for cbdma in cbdma_list:
+                core_list_index = int(cbdma_list.index(cbdma) / group_num)
+                lcore_dma_list.append(
+                    "lcore{}@{}".format(core_list[core_list_index], cbdma)
+                )
+        lcore_dma_param = "[{}]".format(",".join(lcore_dma_list))
+        return lcore_dma_param
+
+    def start_vhost_testpmd(self, cores, ports, prefix, eal_param, param):
+        """
+        Launch testpmd with different parameters.
+        """
+        self.vhost_user_pmd.start_testpmd(
+            cores=cores, ports=ports, prefix=prefix, eal_param=eal_param, param=param
+        )
+        self.vhost_user_pmd.execute_cmd("start")
+
+    def start_vms(
+        self,
+        vm_queue,
+        packed=False,
+        server_mode=True,
+        restart_vm1=False,
+        vm_config="vhost_sample",
+    ):
+        """
+        Start two VMs; each VM has one virtio device.
+        """
+        vm_params = {}
+        vm_params["opt_queue"] = vm_queue
+        if restart_vm1:
+            self.vm_num = 1
+        for i in range(self.vm_num):
+            if restart_vm1:
+                i = i + 1
+            vm_info = VM(self.dut, "vm%d" % i, vm_config)
+            vm_params["driver"] = "vhost-user"
+            if not server_mode:
+                vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i
+            else:
+                vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i + ",server"
+            vm_params["opt_mac"] = "52:54:00:00:00:0%d" % (i + 1)
+            if not packed:
+                vm_params[
+                    "opt_settings"
+                ] = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on"
+            else:
+                vm_params[
+                    "opt_settings"
+                ] = "disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on"
+            vm_info.set_vm_device(**vm_params)
+            time.sleep(3)
+            try:
+                vm_dut = vm_info.start()
+                if vm_dut is None:
+                    raise Exception("Set up VM ENV failed")
+            except Exception as e:
+                print((utils.RED("Failure for %s" % str(e))))
+                raise e
+            self.vm_dut.append(vm_dut)
+            self.vm.append(vm_info)
+
+    def start_vm0_testpmd(self):
+        param = "--tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000"
+        self.vm0_pmd.start_testpmd(cores="default", param=param)
+        self.vm0_pmd.execute_cmd("set fwd mac")
+        self.vm0_pmd.execute_cmd("start")
+
+    def start_vm1_testpmd(self, resend=False):
+        param = "--tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000"
+        if not resend:
+            self.vm1_pmd.start_testpmd(cores="default", param=param)
+            self.vm1_pmd.execute_cmd("set fwd mac")
+            self.vm1_pmd.execute_cmd(
+                "set txpkts 64,256,512,1024,2000,64,256,512,1024,2000"
+            )
+            self.vm1_pmd.execute_cmd("start tx_first 1")
+        else:
+            self.vm1_pmd.execute_cmd("stop")
+            self.vm0_pmd.execute_cmd("start")
+            self.vm0_pmd.execute_cmd("clear port stats all")
+            self.vhost_user_pmd.execute_cmd("clear port stats all")
+            self.vm1_pmd.execute_cmd("clear port stats all")
+            self.vm1_pmd.execute_cmd("start tx_first 1")
+
+    def check_packets_of_each_queue(self, vm_pmd, queues):
+        vm_pmd.execute_cmd("show port stats all")
+        out = vm_pmd.execute_cmd("stop")
+        for queue in range(queues):
+            reg = "Queue= %d" % queue
+            index = out.find(reg)
+            rx = re.search("RX-packets:\s*(\d*)", out[index:])
+            tx = re.search("TX-packets:\s*(\d*)", out[index:])
+            rx_packets = int(rx.group(1))
+            tx_packets = int(tx.group(1))
+            self.verify(
+                rx_packets > 0 and tx_packets > 0,
+                "The rx-packets or tx-packets of queue {} is 0, ".format(queue)
+                + "rx-packets: {}, tx-packets: {}".format(rx_packets, tx_packets),
+            )
+
+    def test_vm2vm_virtio_pmd_split_ring_mergeable_path_8_queues_cbdma_enable_with_server_mode_stable_test(
+        self,
+    ):
+        """
+        Test Case 1: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable with server mode stable test
+        """
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True)
+        dmas = self.generate_dms_param(8)
+        lcore_dma = self.generate_lcore_dma_param(
+            cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:]
+        )
+        eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas={}'".format(
+                dmas
+            )
+            + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas={}'".format(
+                dmas
+            )
+        )
+        param = (
+            "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
+            + " --lcore-dma={}".format(lcore_dma)
+        )
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            ports=self.cbdma_list,
+            prefix="vhost",
+            eal_param=eal_param,
+            param=param,
+        )
+        self.start_vms(vm_queue=8, packed=False, server_mode=True)
+        self.vm0_pmd = PmdOutput(self.vm_dut[0])
+        self.vm1_pmd = PmdOutput(self.vm_dut[1])
+        self.start_vm0_testpmd()
+        self.start_vm1_testpmd(resend=False)
+        self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8)
+        self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8)
+        for _ in range(10):
+            self.logger.info("Quit and relaunch vhost side testpmd")
+            self.vhost_user_pmd.execute_cmd("quit", "#")
+            self.start_vhost_testpmd(
+                cores=self.vhost_core_list,
+                ports=self.cbdma_list,
+                prefix="vhost",
+                eal_param=eal_param,
+                param=param,
+            )
+            self.start_vm1_testpmd(resend=True)
+            self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8)
+            self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8)
+
+    def test_vm2vm_virtio_pmd_split_ring_mergeable_path_dynamic_queue_size_cbdma_enable_with_server_mode_test(
+        self,
+    ):
+        """
+        Test Case 2: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDMA enable with server mode test
+        """
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True)
+        dmas = self.generate_dms_param(4)
+        lcore_dma = self.generate_lcore_dma_param(
+            cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:]
+        )
+        eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas={}'".format(
+                dmas
+            )
+            + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas={}'".format(
+                dmas
+            )
+        )
+        param = (
+            " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4"
+            + " --lcore-dma={}".format(lcore_dma)
+        )
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            ports=self.cbdma_list,
+            prefix="vhost",
+            eal_param=eal_param,
+            param=param,
+        )
+        self.start_vms(vm_queue=8, packed=False, server_mode=True)
+        self.vm0_pmd = PmdOutput(self.vm_dut[0])
+        self.vm1_pmd = PmdOutput(self.vm_dut[1])
+        self.start_vm0_testpmd()
+        self.start_vm1_testpmd(resend=False)
+        self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=4)
+        self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=4)
+        for _ in range(10):
+            self.logger.info("Quit and relaunch vhost side testpmd with 8 queues")
+            self.vhost_user_pmd.execute_cmd("quit", "#")
+            dmas = self.generate_dms_param(8)
+            eal_param = "--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas={}'".format(
+                dmas
+            ) + " --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas={}'".format(
+                dmas
+            )
+            param = (
+                " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
+                + " --lcore-dma={}".format(lcore_dma)
+            )
+            self.start_vhost_testpmd(
+                cores=self.vhost_core_list,
+                ports=self.cbdma_list,
+                prefix="vhost",
+                eal_param=eal_param,
+                param=param,
+            )
+            self.start_vm1_testpmd(resend=True)
+            self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8)
+            self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8)
+
+    def test_vm2vm_virtio_pmd_packed_ring_mergeable_path_8_queues_cbdma_enable_test(
+        self,
+    ):
+        """
+        Test Case 3: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable test
+        """
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=16, allow_diff_socket=True)
+        dmas = self.generate_dms_param(8)
+        lcore_dma = self.generate_lcore_dma_param(
+            cbdma_list=self.cbdma_list, core_list=self.vhost_core_list[1:]
+        )
+        eal_param = "--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas={}'".format(
+            dmas
+        ) + " --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas={}'".format(dmas)
+        param = (
+            " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8"
+            + " --lcore-dma={}".format(lcore_dma)
+        )
+        self.start_vhost_testpmd(
+            cores=self.vhost_core_list,
+            ports=self.cbdma_list,
+            prefix="vhost",
+            eal_param=eal_param,
+            param=param,
+        )
+        self.start_vms(vm_queue=8, packed=True, server_mode=False)
+        self.vm0_pmd = PmdOutput(self.vm_dut[0])
+        self.vm1_pmd = PmdOutput(self.vm_dut[1])
+        self.start_vm0_testpmd()
+        self.start_vm1_testpmd(resend=False)
+        self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8)
+        self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8)
+        self.logger.info("Quit and relaunch VM2 with split ring")
+        self.vm1_pmd.execute_cmd("quit", "#")
+        self.vm[1].stop()
+        self.vm_dut.remove(self.vm_dut[1])
+        self.vm.remove(self.vm[1])
+        self.start_vms(vm_queue=8, packed=False, restart_vm1=True, server_mode=False)
+        self.vm1_pmd = PmdOutput(self.vm_dut[1])
+        self.vm0_pmd.execute_cmd("start")
+        self.start_vm1_testpmd(resend=False)
+        self.check_packets_of_each_queue(vm_pmd=self.vm0_pmd, queues=8)
+        self.check_packets_of_each_queue(vm_pmd=self.vm1_pmd, queues=8)
+
+    def stop_all_apps(self):
+        for i in range(len(self.vm)):
+            self.vm_dut[i].send_expect("quit", "#", 20)
+            self.vm[i].stop()
+        self.vhost_user.send_expect("quit", "#", 30)
+
+    def bind_cbdma_device_to_kernel(self):
+        self.dut.send_expect("modprobe ioatdma", "# ")
+        self.dut.send_expect(
+            "./usertools/dpdk-devbind.py -u %s" % self.cbdma_str, "# ", 30
+        )
+        self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" % self.cbdma_str,
+            "# ",
+            60,
+        )
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.stop_all_apps()
+        self.dut.kill_all()
+        self.bind_cbdma_device_to_kernel()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.dut.close_session(self.vhost_user)
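The `--lcore-dma` string built by `generate_lcore_dma_param` in the suite above can be exercised standalone. The sketch below reproduces the same grouping logic (N devices split evenly over M worker lcores, with special cases for a single device or a single core); the PCI addresses and lcore ids are hypothetical, chosen only to illustrate the 16-device test cases on a smaller scale:

```python
def generate_lcore_dma_param(cbdma_list, core_list):
    # Same grouping as the suite's helper: with N devices and M cores,
    # each core is assigned N // M consecutive devices.
    group_num = len(cbdma_list) // len(core_list)
    lcore_dma_list = []
    if len(cbdma_list) == 1:
        # One device shared by every core.
        for core in core_list:
            lcore_dma_list.append("lcore{}@{}".format(core, cbdma_list[0]))
    elif len(core_list) == 1:
        # One core drives every device.
        for cbdma in cbdma_list:
            lcore_dma_list.append("lcore{}@{}".format(core_list[0], cbdma))
    else:
        for index, cbdma in enumerate(cbdma_list):
            lcore_dma_list.append(
                "lcore{}@{}".format(core_list[index // group_num], cbdma)
            )
    return "[{}]".format(",".join(lcore_dma_list))


# Hypothetical CBDMA BDFs and lcores: 4 devices over 2 cores
# gives each core a block of 2 devices.
cbdma_list = ["0000:00:04.%x" % i for i in range(4)]
core_list = [2, 3]
print(generate_lcore_dma_param(cbdma_list, core_list))
# → [lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore3@0000:00:04.2,lcore3@0000:00:04.3]
```

The resulting string is what the test cases pass to vhost testpmd via `--lcore-dma={}`; note that when `len(cbdma_list)` is not a multiple of `len(core_list)`, integer division leaves the trailing devices on the last core's successor index, so the suite always uses counts that divide evenly (16 devices over 4 cores).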