From patchwork Mon Apr 25 01:37:27 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110195

From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 1/3] test_plans/index: add pvp_virtio_user_4k_pages_cbdma
Date: Mon, 25 Apr 2022 09:37:27 +0800
Message-Id: <20220425013727.1572177-1-weix.ling@intel.com>

Commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path)
integrated dmadev into the vhost asynchronous data path, so add a new
testsuite, pvp_virtio_user_4k_pages_cbdma, to cover PVP virtio-user
4K-pages with CBDMA.

1) Add pvp_virtio_user_4k_pages_cbdma_test_plan into test_plans/index.rst.

Signed-off-by: Wei Ling
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index f8118d14..17707cad 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -245,6 +245,7 @@ The following are the test plans for the DPDK DTS automated test system.
     pvp_vhost_user_reconnect_test_plan
     pvp_virtio_bonding_test_plan
     pvp_virtio_user_4k_pages_test_plan
+    pvp_virtio_user_4k_pages_cbdma_test_plan
     vdev_primary_secondary_test_plan
     vhost_1024_ethports_test_plan
     virtio_pvp_regression_test_plan

From patchwork Mon Apr 25 01:37:38 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110196

From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 2/3] test_plans/pvp_virtio_user_4k_pages_cbdma_test_plan: add pvp_virtio_user_4k_pages_cbdma testplan
Date: Mon, 25 Apr 2022 09:37:38 +0800
Message-Id: <20220425013738.1572238-1-weix.ling@intel.com>

Commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path)
integrated dmadev into the vhost asynchronous data path, so add a new
testsuite, pvp_virtio_user_4k_pages_cbdma, to cover PVP virtio-user
4K-pages with CBDMA.

1) Add new testplan pvp_virtio_user_4k_pages_cbdma_test_plan into test_plans.

Signed-off-by: Wei Ling
---
 ...p_virtio_user_4k_pages_cbdma_test_plan.rst | 455 ++++++++++++++++++
 1 file changed, 455 insertions(+)
 create mode 100644 test_plans/pvp_virtio_user_4k_pages_cbdma_test_plan.rst

diff --git a/test_plans/pvp_virtio_user_4k_pages_cbdma_test_plan.rst b/test_plans/pvp_virtio_user_4k_pages_cbdma_test_plan.rst
new file mode 100644
index 00000000..0abd8f42
--- /dev/null
+++ b/test_plans/pvp_virtio_user_4k_pages_cbdma_test_plan.rst
@@ -0,0 +1,455 @@
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+=============================================
+vhost/virtio-user pvp with 4K-pages test plan
+=============================================
+
+DPDK 19.02 added support for using virtio-user without hugepages: the
+--no-huge mode was augmented to use memfd-backed memory (on systems that
+support memfd), to allow using virtio-user-based NICs without hugepages.
+
+For more about the dpdk-testpmd sample, please refer to the DPDK documents:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+
+For virtio-user vdev parameters, you can refer to the DPDK documents:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
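+
+.. note::
+
+   As background, the memfd mechanism mentioned above can be sketched in a
+   few lines of Python (illustrative only, assuming a Linux host and
+   Python >= 3.8; this is not part of the test steps)::
+
+       import mmap
+       import os
+
+       # Anonymous file-backed memory made of ordinary 4K pages, no hugepages.
+       fd = os.memfd_create("virtio_user_mem", os.MFD_ALLOW_SEALING)
+       size = 16 * 1024 * 1024
+       os.ftruncate(fd, size)
+
+       # virtio-user passes such an fd to the vhost backend over the
+       # vhost-user socket, so both processes can map the same pages.
+       region = mmap.mmap(fd, size, mmap.MAP_SHARED)
+       region[:5] = b"hello"
+       region.close()
+       os.close(fd)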
+
+
+Prerequisites
+=============
+
+Topology
+--------
+Test flow: Vhost-user --> Virtio-user
+
+Hardware
+--------
+Supported NICs: all
+
+Software
+--------
+Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
+
+2. Get the PCI device ID and DMA device ID of the DUT, for example, 0000:18:00.0 is the PCI device ID and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and CBDMA channels to vfio-pci::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+    # ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
+
+   For example, bind 1 NIC port and 1 CBDMA channel::
+
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0 0000:80:04.0
+
+Test Case 1: Basic test vhost/virtio-user split ring with 4K-pages and cbdma enable
+-----------------------------------------------------------------------------------
+This case uses testpmd to test the split ring path with 4K-pages and cbdma enable.
+
+1. Bind 1 NIC port and 1 CBDMA channel to vfio-pci, as described in Common steps 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:af:00.0 -a 0000:80:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0]' -- -i --no-numa --socket-num=1 --lcore-dma=[lcore32@0000:80:04.0]
+    testpmd> start
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+
+4. Launch virtio-user with 4K-pages::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 33-34 -n 4 --no-huge -m 1024 --file-prefix=virtio-user --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1 -- -i
+    testpmd> set fwd mac
+    testpmd> start
+
+5. Send packets of different sizes (64, 128, 256, 512, 1024 and 1518 bytes) with the packet generator (see the Scapy sketch after Test Case 2 for one way to build the frames) and check the throughput with below command::
+
+    testpmd> show port stats all
+
+Test Case 2: Basic test vhost/virtio-user packed ring with 4K-pages and cbdma enable
+------------------------------------------------------------------------------------
+This case uses testpmd to test the packed ring path with 4K-pages and cbdma enable.
+
+1. Bind 1 NIC port and 1 CBDMA channel to vfio-pci, as described in Common steps 1.
+
+2. Launch vhost by below command::
+
+    modprobe vfio-pci
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:af:00.0 0000:80:04.0
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:af:00.0 -a 0000:80:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0]' -- -i --no-numa --socket-num=1 --lcore-dma=[lcore32@0000:80:04.0]
+    testpmd> start
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+
+4. Launch virtio-user with 4K-pages::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 33-34 -n 4 --no-huge -m 1024 --file-prefix=virtio-user --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i
+    testpmd> set fwd mac
+    testpmd> start
+
+5. Send packets of different sizes (64, 128, 256, 512, 1024 and 1518 bytes) with the packet generator and check the throughput with below command::
+
+    testpmd> show port stats all
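+
+.. note::
+
+   The packet generator step in Test Cases 1 and 2 can be reproduced with
+   any traffic tool. A hypothetical Scapy sketch that builds the UDP frames
+   used above (assuming Scapy is installed on the tester; the frame size
+   counted by the generator includes the 4-byte CRC)::
+
+       from scapy.all import IP, UDP, Ether, Raw, wrpcap
+
+       DST_MAC = "00:11:22:33:44:10"  # assumption: MAC of the port under test
+       frames = []
+       for frame_size in [64, 128, 256, 512, 1024, 1518]:
+           pkt = Ether(dst=DST_MAC) / IP() / UDP()
+           pad = frame_size - len(pkt) - 4  # leave room for the CRC
+           frames.append(pkt / Raw(load=b"x" * pad))
+       wrpcap("vhost_frames.pcap", frames)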
+
+Test Case 3: VM2VM split ring vhost-user/virtio-net 4K-pages and CBDMA enable test with tcp traffic
+---------------------------------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the split ring path with 4K-pages and cbdma enable to forward packets.
+
+1. Bind 2 CBDMA channels to vfio-pci, as described in Common steps 1.
+
+2. Launch vhost by below command::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30-32 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:80:04.0 -a 0000:80:04.1 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0],dma_ring_size=2048' \
+    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --lcore-dma=[lcore31@0000:80:04.0,lcore32@0000:80:04.1]
+    testpmd> start
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mkdir /mnt/tmpfs_nohuge1
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=8G
+    mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=8G
+
+4. Launch VM1 and VM2::
+
+    taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+    taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+
+5. On VM1, set the virtio device IP and add a static ARP entry::
+
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+6. On VM2, set the virtio device IP and add a static ARP entry::
+
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+7. Check the iperf performance between the two VMs by below commands::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+8. Check that the two VMs can receive and send big packets to each other::
+
+    testpmd> show port xstats all
+    Port 0 should have tx packets above 1522
+    Port 1 should have rx packets above 1522
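+
+.. note::
+
+   The ``--lcore-dma`` parameter used throughout this plan simply pairs
+   forwarding lcores with the CBDMA channels they are allowed to use. A small
+   helper for building such a list (an illustrative Python sketch, not a DTS
+   API)::
+
+       def build_lcore_dma(mapping):
+           """mapping: {lcore_id: [dma_pci_address, ...]}"""
+           pairs = []
+           for lcore, dmas in mapping.items():
+               pairs.extend("lcore%d@%s" % (lcore, dma) for dma in dmas)
+           return "--lcore-dma=[%s]" % ",".join(pairs)
+
+       # build_lcore_dma({31: ["0000:80:04.0"], 32: ["0000:80:04.1"]}) returns
+       # --lcore-dma=[lcore31@0000:80:04.0,lcore32@0000:80:04.1]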
+
+Test Case 4: vm2vm vhost/virtio-net packed ring multi queues with 4K-pages and cbdma enable
+-------------------------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the packed ring path with 4K-pages and cbdma enable to forward packets.
+
+1. Bind 16 CBDMA channels to vfio-pci, as described in Common steps 1.
+
+2. Launch vhost by below command::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \
+    0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore30@0000:00:04.4,lcore30@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore32@0000:80:04.4,lcore32@0000:80:04.5,lcore32@0000:80:04.6,lcore33@0000:80:04.7]
+
+    testpmd> start
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mkdir /mnt/tmpfs_nohuge1
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+    mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G
+
+4. Launch VM qemu::
+
+    taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+5. On VM1, set the virtio device IP and add a static ARP entry::
+
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+6. On VM2, set the virtio device IP and add a static ARP entry::
+
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+7. Scp a 1MB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+
+8. Check the iperf performance between the two VMs by below commands::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+9. Quit and relaunch vhost with different CBDMA channel mappings::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:80:04.0,lcore31@0000:00:04.2,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.1,lcore32@0000:00:04.3,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore32@0000:80:04.4,lcore32@0000:80:04.5,lcore32@0000:80:04.6,lcore33@0000:80:04.7]
+    testpmd> start
+
+10. Rerun steps 7-8.
+
+11. Quit and relaunch vhost without CBDMA channels::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
+    testpmd> start
+
+12. On VM1, set the virtio device::
+
+    ethtool -L ens5 combined 4
+
+13. On VM2, set the virtio device::
+
+    ethtool -L ens5 combined 4
+
+14. Scp a 1MB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+
+15. Check the iperf performance and compare it with the CBDMA-enabled performance; ensure the CBDMA-enabled performance is higher::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+
+16. Quit and relaunch vhost with 1 queue::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --no-huge -m 1024 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
+    testpmd> start
+
+17. On VM1, set the virtio device::
+
+    ethtool -L ens5 combined 1
+
+18. On VM2, set the virtio device::
+
+    ethtool -L ens5 combined 1
+
+19. Scp a 1MB file from VM1 to VM2, and check that packets can be forwarded successfully by scp::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+
+20. Check the iperf performance, ensure queue0 can work from the vhost side::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
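+
+.. note::
+
+   Steps 8, 15 and 20 only need the last bandwidth figure reported by the
+   iperf client. A rough parsing sketch (Python, assuming the client log has
+   been copied to the host; the test suite in patch 3/3 uses the same
+   approach)::
+
+       import re
+
+       def last_iperf_bandwidth(log_path):
+           with open(log_path) as f:
+               text = f.read()
+           # drop the server-side report, if present
+           index = text.find("Server Report")
+           if index != -1:
+               text = text[:index]
+           rates = re.findall(r"\S+\s*[MG]bits/sec", text)
+           return rates[-1] if rates else None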
+
+Test Case 5: loopback packed ring large chain packets 4K-pages stress test with server mode and cbdma enable
+------------------------------------------------------------------------------------------------------------
+This case uses testpmd to stress test the packed ring path with 4K-pages and cbdma enable with chained packets forwarding.
+
+1. Bind 1 CBDMA channel to vfio-pci, as described in Common steps 1.
+
+2. Launch vhost by below command::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30-31 -n 4 --no-huge -m 1024 -a 0000:80:04.0 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' --iova=va -- -i --no-numa --socket-num=1 --nb-cores=1 --mbuf-size=65535 --lcore-dma=[lcore31@0000:80:04.0]
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+
+4. Launch virtio and start testpmd::
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 -n 4 --no-huge -m 1024 --file-prefix=testpmd0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1 \
+    -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
+    testpmd> set fwd mac
+    testpmd> start
+
+5. Send large packets from vhost and check that virtio can receive them::
+
+    testpmd> set txpkts 65535,65535,65535,65535,65535
+    testpmd> start tx_first 32
+    testpmd> show port stats all
+
+Test Case 6: vm2vm vhost/virtio-net split packed ring multi queues with 1G/4k-pages and cbdma enable
+----------------------------------------------------------------------------------------------------
+This case uses testpmd, QEMU and iperf to test the split and packed ring paths with 1G/4K-pages and cbdma enable to forward packets.
+
+1. Bind 16 CBDMA channels to vfio-pci, as described in Common steps 1.
+
+2. Launch vhost by below command::
+
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3 0000:80:04.4 0000:80:04.5 0000:80:04.6 0000:80:04.7 \
+    0000:00:04.0 0000:00:04.1 0000:00:04.2 0000:00:04.3 0000:00:04.4 0000:00:04.5 0000:00:04.6 0000:00:04.7
+
+    # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    -a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
+    --lcore-dma=[lcore30@0000:80:04.0,lcore30@0000:80:04.1,lcore30@0000:00:04.2,lcore30@0000:00:04.3,lcore31@0000:00:04.4,lcore31@0000:00:04.5,lcore31@0000:00:04.6,lcore31@0000:00:04.7,lcore32@0000:80:04.0,lcore32@0000:80:04.1,lcore32@0000:80:04.2,lcore32@0000:80:04.3,lcore33@0000:80:04.4,lcore33@0000:80:04.5,lcore33@0000:80:04.6,lcore33@0000:80:04.7]
+    testpmd> start
+
+3. Prepare tmpfs with 4K-pages::
+
+    mkdir /mnt/tmpfs_nohuge0
+    mkdir /mnt/tmpfs_nohuge1
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+    mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G
+
+4. Launch VM qemu::
+
+    taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
+
+    taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,packed=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12
+
+5. On VM1, set the virtio device IP and add a static ARP entry::
+
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
+
+6. On VM2, set the virtio device IP and add a static ARP entry::
+
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
+
+7. Scp a 1MB file from VM1 to VM2::
+
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+
+8. Check the iperf performance between the two VMs by below commands::
+
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
From patchwork Mon Apr 25 01:37:55 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 110197

From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 3/3] tests/pvp_virtio_user_4k_pages_cbdma: add pvp_virtio_user_4k_pages_cbdma testsuite
Date: Mon, 25 Apr 2022 09:37:55 +0800
Message-Id: <20220425013755.1572297-1-weix.ling@intel.com>

Commit 53d3f4778c (vhost: integrate dmadev in asynchronous data-path)
integrated dmadev into the vhost asynchronous data path, so add a new
testsuite, pvp_virtio_user_4k_pages_cbdma, to cover PVP virtio-user
4K-pages with CBDMA.

1) Add new testsuite TestSuite_pvp_virtio_user_4k_pages_cbdma.py into tests.

Signed-off-by: Wei Ling
---
 ...estSuite_pvp_virtio_user_4k_pages_cbdma.py | 490 ++++++++++++++++++
 1 file changed, 490 insertions(+)
 create mode 100644 tests/TestSuite_pvp_virtio_user_4k_pages_cbdma.py

diff --git a/tests/TestSuite_pvp_virtio_user_4k_pages_cbdma.py b/tests/TestSuite_pvp_virtio_user_4k_pages_cbdma.py
new file mode 100644
index 00000000..9b89530c
--- /dev/null
+++ b/tests/TestSuite_pvp_virtio_user_4k_pages_cbdma.py
@@ -0,0 +1,490 @@
+# BSD LICENSE
+#
+# Copyright(c) <2022> Intel Corporation.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+DPDK Test suite.
+vhost/virtio-user pvp with 4K pages.
+"""
+
+import re
+import time
+
+import framework.utils as utils
+from framework.packet import Packet
+from framework.pktgen import PacketGeneratorHelper
+from framework.pmd_output import PmdOutput
+from framework.test_case import TestCase
+from framework.virt_common import VM
+
+
+class TestPvpVirtioUser4kPagesCbdma(TestCase):
+    def set_up_all(self):
+        """
+        Run at the start of each test suite.
+        """
+ """ + self.dut_ports = self.dut.get_ports() + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.cores_num = len([n for n in self.dut.cores if int(n["socket"]) == 0]) + self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing") + self.verify( + self.cores_num >= 4, + "There has not enought cores to test this suite %s" % self.suite_name, + ) + self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket) + self.vhost_core_list = self.cores_list[0:9] + self.virtio0_core_list = self.cores_list[9:11] + self.vhost_user = self.dut.new_session(suite="vhost-user") + self.virtio_user0 = self.dut.new_session(suite="virtio-user") + self.vhost_user_pmd = PmdOutput(self.dut, self.vhost_user) + self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0) + self.pci_info = self.dut.ports_info[0]["pci"] + self.dst_mac = self.dut.get_mac_address(self.dut_ports[0]) + self.frame_sizes = [64, 128, 256, 512, 1024, 1518] + self.out_path = "/tmp/%s" % self.suite_name + out = self.tester.send_expect("ls -d %s" % self.out_path, "# ") + if "No such file or directory" in out: + self.tester.send_expect("mkdir -p %s" % self.out_path, "# ") + # create an instance to set stream field setting + self.pktgen_helper = PacketGeneratorHelper() + self.number_of_ports = 1 + self.app_testpmd_path = self.dut.apps_name["test-pmd"] + self.testpmd_name = self.app_testpmd_path.split("/")[-1] + self.vm_num = 2 + self.virtio_ip1 = "1.1.1.1" + self.virtio_ip2 = "1.1.1.2" + self.virtio_mac1 = "52:54:00:00:00:01" + self.virtio_mac2 = "52:54:00:00:00:02" + self.base_dir = self.dut.base_dir.replace("~", "/root") + + def set_up(self): + """ + Run before each test case. + """ + self.dut.send_expect("rm -rf /tmp/vhost-net*", "# ") + self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "# ") + self.umount_tmpfs_for_4k() + # Prepare the result table + self.table_header = ["Frame"] + self.table_header.append("Mode") + self.table_header.append("Mpps") + self.table_header.append("Queue Num") + self.table_header.append("% linerate") + self.result_table_create(self.table_header) + self.vm_dut = [] + self.vm = [] + + def get_cbdma_ports_info_and_bind_to_dpdk(self, cbdma_num, allow_diff_socket=False): + """ + get and bind cbdma ports into DPDK driver + """ + self.all_cbdma_list = [] + self.cbdma_list = [] + self.cbdma_str = "" + out = self.dut.send_expect( + "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30 + ) + device_info = out.split("\n") + for device in device_info: + pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device) + if pci_info is not None: + dev_info = pci_info.group(1) + # the numa id of ioat dev, only add the device which on same socket with nic dev + bus = int(dev_info[5:7], base=16) + if bus >= 128: + cur_socket = 1 + else: + cur_socket = 0 + if allow_diff_socket: + self.all_cbdma_list.append(pci_info.group(1)) + else: + if self.ports_socket == cur_socket: + self.all_cbdma_list.append(pci_info.group(1)) + self.verify( + len(self.all_cbdma_list) >= cbdma_num, "There no enough cbdma device" + ) + self.cbdma_list = self.all_cbdma_list[0:cbdma_num] + self.cbdma_str = " ".join(self.cbdma_list) + self.dut.send_expect( + "./usertools/dpdk-devbind.py --force --bind=%s %s" + % (self.drivername, self.cbdma_str), + "# ", + 60, + ) + + def bind_cbdma_device_to_kernel(self): + self.dut.send_expect("modprobe ioatdma", "# ") + self.dut.send_expect( + "./usertools/dpdk-devbind.py -u %s" % self.cbdma_str, "# ", 30 + ) + self.dut.send_expect( + "./usertools/dpdk-devbind.py --force 
--bind=ioatdma %s" % self.cbdma_str, + "# ", + 60, + ) + + def send_and_verify(self): + """ + Send packet with packet generator and verify + """ + for frame_size in self.frame_sizes: + tgen_input = [] + rx_port = self.tester.get_local_port(self.dut_ports[0]) + tx_port = self.tester.get_local_port(self.dut_ports[0]) + pkt = Packet(pkt_type="UDP", pkt_len=frame_size) + pkt.config_layer("ether", {"dst": "%s" % self.dst_mac}) + pkt.save_pcapfile(self.tester, "%s/vhost.pcap" % self.out_path) + tgen_input.append((tx_port, rx_port, "%s/vhost.pcap" % self.out_path)) + + self.tester.pktgen.clear_streams() + streams = self.pktgen_helper.prepare_stream_from_tginput( + tgen_input, 100, None, self.tester.pktgen + ) + _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams) + Mpps = pps / 1000000.0 + # self.verify(Mpps > self.check_value[frame_size], + # "%s of frame size %d speed verify failed, expect %s, result %s" % ( + # self.running_case, frame_size, self.check_value[frame_size], Mpps)) + throughput = Mpps * 100 / float(self.wirespeed(self.nic, 64, 1)) + results_row = [frame_size] + results_row.append("4K pages") + results_row.append(Mpps) + results_row.append("1") + results_row.append(throughput) + self.result_table_add(results_row) + + def send_chain_pakcets_from_vhost(self): + self.vhost_user_pmd.execute_cmd("set txpkts 65535,65535,65535,65535,65535") + self.vhost_user_pmd.execute_cmd("start tx_first 32") + out = self.vhost_user_pmd.execute_cmd("show port stats all") + self.logger.info(out) + + def verify_virtio_user_receive_packets(self): + for _ in range(10): + self.virtio_user0_pmd.execute_cmd("show port stats all") + out = self.virtio_user0_pmd.execute_cmd("show port stats all") + self.logger.info(out) + rx = re.search("RX-packets:\s*(\d*)", out) + rx_packets = int(rx.group(1)) + self.verify(rx_packets > 0, "virtio-user can't receive packets") + + def start_vhost_user_testpmd(self, cores, param="", eal_param="", ports=""): + """ + launch the testpmd as virtio with vhost_user + """ + self.vhost_user_pmd.start_testpmd( + cores=cores, + param=param, + eal_param=eal_param, + ports=ports, + prefix="vhost", + fixed_prefix=True, + ) + + def start_virtio_user0_testpmd(self, cores, eal_param="", param=""): + """ + launch the testpmd as virtio with vhost_net0 + """ + self.virtio_user0_pmd.start_testpmd( + cores=cores, + eal_param=eal_param, + param=param, + no_pci=True, + prefix="virtio-user0", + fixed_prefix=True, + ) + + def start_vms( + self, + setting_args="", + server_mode=False, + opt_queue=None, + vm_config="vhost_sample", + ): + """ + start one VM, each VM has one virtio device + """ + vm_params = {} + if opt_queue is not None: + vm_params["opt_queue"] = opt_queue + + for i in range(self.vm_num): + vm_dut = None + vm_info = VM(self.dut, "vm%d" % i, vm_config) + + vm_params["driver"] = "vhost-user" + if not server_mode: + vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i + else: + vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i + ",server" + vm_params["opt_mac"] = "52:54:00:00:00:0%d" % (i + 1) + vm_params["opt_settings"] = setting_args + vm_info.set_vm_device(**vm_params) + time.sleep(3) + try: + vm_dut = vm_info.start(set_target=False) + if vm_dut is None: + raise Exception("Set up VM ENV failed") + except Exception as e: + print((utils.RED("Failure for %s" % str(e)))) + raise e + self.vm_dut.append(vm_dut) + self.vm.append(vm_info) + + def config_vm_ip(self): + """ + set virtio device IP and run arp protocal + """ + vm1_intf = 
+
+    def config_vm_ip(self):
+        """
+        Set the virtio device IP on each VM and add static ARP entries.
+        """
+        vm1_intf = self.vm_dut[0].ports_info[0]["intf"]
+        vm2_intf = self.vm_dut[1].ports_info[0]["intf"]
+        self.vm_dut[0].send_expect(
+            "ifconfig %s %s" % (vm1_intf, self.virtio_ip1), "#", 10
+        )
+        self.vm_dut[1].send_expect(
+            "ifconfig %s %s" % (vm2_intf, self.virtio_ip2), "#", 10
+        )
+        self.vm_dut[0].send_expect(
+            "arp -s %s %s" % (self.virtio_ip2, self.virtio_mac2), "#", 10
+        )
+        self.vm_dut[1].send_expect(
+            "arp -s %s %s" % (self.virtio_ip1, self.virtio_mac1), "#", 10
+        )
+
+    def config_vm_combined(self, combined=1):
+        """
+        Set the virtio device combined queue number.
+        """
+        vm1_intf = self.vm_dut[0].ports_info[0]["intf"]
+        vm2_intf = self.vm_dut[1].ports_info[0]["intf"]
+        self.vm_dut[0].send_expect(
+            "ethtool -L %s combined %d" % (vm1_intf, combined), "#", 10
+        )
+        self.vm_dut[1].send_expect(
+            "ethtool -L %s combined %d" % (vm2_intf, combined), "#", 10
+        )
+
+    def start_iperf(self):
+        """
+        Run iperf between the two VMs.
+        """
+        iperf_server = "iperf -s -i 1"
+        iperf_client = "iperf -c {} -i 1 -t 60".format(self.virtio_ip1)
+        self.vm_dut[0].send_expect(
+            "{} > iperf_server.log &".format(iperf_server), "", 10
+        )
+        self.vm_dut[1].send_expect(
+            "{} > iperf_client.log &".format(iperf_client), "", 60
+        )
+        time.sleep(60)
+
+    def get_iperf_result(self):
+        """
+        Get the iperf test result.
+        """
+        self.table_header = ["Mode", "[M|G]bits/sec"]
+        self.result_table_create(self.table_header)
+        self.vm_dut[0].send_expect("pkill iperf", "# ")
+        self.vm_dut[1].session.copy_file_from("%s/iperf_client.log" % self.dut.base_dir)
+        fp = open("./iperf_client.log")
+        fmsg = fp.read()
+        fp.close()
+        # remove the server report info from msg
+        index = fmsg.find("Server Report")
+        if index != -1:
+            fmsg = fmsg[:index]
+        iperfdata = re.compile("\S*\s*[M|G]bits/sec").findall(fmsg)
+        # the last iperf entry is the average bandwidth over the whole run
+        self.verify(len(iperfdata) != 0, "The iperf data between the two VMs is 0")
+        self.logger.info("The iperf data between vms is %s" % iperfdata[-1])
+
+        # put the result to table
+        results_row = ["vm2vm", iperfdata[-1]]
+        self.result_table_add(results_row)
+
+        # print the iperf result
+        self.result_table_print()
+        # rm the iperf log files in the VMs
+        self.vm_dut[0].send_expect("rm iperf_server.log", "#", 10)
+        self.vm_dut[1].send_expect("rm iperf_client.log", "#", 10)
+
+    def verify_xstats_info_on_vhost(self):
+        """
+        Check that the two VMs can receive and send big packets to each other.
+        """
+        self.vhost_user_pmd.execute_cmd("show port stats all")
+        out_tx = self.vhost_user_pmd.execute_cmd("show port xstats 0")
+        out_rx = self.vhost_user_pmd.execute_cmd("show port xstats 1")
+
+        tx_info = re.search("tx_size_1523_to_max_packets:\s*(\d*)", out_tx)
+        rx_info = re.search("rx_size_1523_to_max_packets:\s*(\d*)", out_rx)
+
+        self.verify(
+            int(rx_info.group(1)) > 0, "Port 1 not receive packet greater than 1522"
+        )
+        self.verify(
+            int(tx_info.group(1)) > 0, "Port 0 not forward packet greater than 1522"
+        )
+
+    def mount_tmpfs_for_4k(self, number=1):
+        """
+        Prepare tmpfs with 4K-pages.
+        """
+        for num in range(number):
+            self.dut.send_expect("mkdir /mnt/tmpfs_nohuge{}".format(num), "# ")
+            self.dut.send_expect(
+                "mount tmpfs /mnt/tmpfs_nohuge{} -t tmpfs -o size=4G".format(num), "# "
+            )
+
+    def umount_tmpfs_for_4k(self):
+        """
+        Unmount the tmpfs mounted for 4K-pages.
+        """
+        out = self.dut.send_expect(
+            "mount |grep 'mnt/tmpfs' |awk -F ' ' {'print $3'}", "#"
+        )
+        mount_infos = out.replace("\r", "").split("\n")
+        if len(mount_infos) != 0:
+            for mount_info in mount_infos:
+                # skip the empty line returned when nothing is mounted
+                if mount_info != "":
+                    self.dut.send_expect("umount {}".format(mount_info), "# ")
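+
+    # mount_tmpfs_for_4k(number=2) prepares /mnt/tmpfs_nohuge0 and
+    # /mnt/tmpfs_nohuge1, matching the manual "Prepare tmpfs with 4K-pages"
+    # steps in the test plan:
+    #     mkdir /mnt/tmpfs_nohuge0 && mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+    #     mkdir /mnt/tmpfs_nohuge1 && mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G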
+
+    def umount_huge_pages(self):
+        self.dut.send_expect("mount |grep '/mnt/huge' |awk -F ' ' {'print $3'}", "#")
+        self.dut.send_expect("umount /mnt/huge", "# ")
+
+    def mount_huge_pages(self):
+        self.dut.send_expect("mkdir -p /mnt/huge", "# ")
+        self.dut.send_expect("mount -t hugetlbfs nodev /mnt/huge", "# ")
+
+    def test_perf_pvp_virtio_user_split_ring_with_4K_pages_and_cbdma_enable(self):
+        """
+        Test Case 1: Basic test vhost/virtio-user split ring with 4K-pages and cbdma enable
+        """
+        self.get_cbdma_ports_info_and_bind_to_dpdk(1)
+        lcore_dma = f"lcore{self.vhost_core_list[1]}@{self.cbdma_list[0]}"
+        vhost_eal_param = "--no-huge -m 1024 --vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[txq0]'"
+        vhost_param = " --no-numa --socket-num={} --lcore-dma=[{}]".format(
+            self.ports_socket, lcore_dma
+        )
+        ports = [self.dut.ports_info[0]["pci"]]
+        for i in self.cbdma_list:
+            ports.append(i)
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list[0:2],
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+        )
+        self.vhost_user_pmd.execute_cmd("start")
+        self.mount_tmpfs_for_4k(number=1)
+        virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio0_core_list, eal_param=virtio_eal_param
+        )
+        self.virtio_user0_pmd.execute_cmd("set fwd mac")
+        self.virtio_user0_pmd.execute_cmd("start")
+        self.send_and_verify()
+        self.result_table_print()
+
+    def test_perf_pvp_virtio_user_packed_ring_with_4K_pages_and_cbdma_enable(self):
+        """
+        Test Case 2: Basic test vhost/virtio-user packed ring with 4K-pages and cbdma enable
+        """
+        self.get_cbdma_ports_info_and_bind_to_dpdk(1)
+        lcore_dma = f"lcore{self.vhost_core_list[1]}@{self.cbdma_list[0]}"
+        vhost_eal_param = "--no-huge -m 1024 --vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[txq0]'"
+        vhost_param = " --no-numa --socket-num={} --lcore-dma=[{}]".format(
+            self.ports_socket, lcore_dma
+        )
+        ports = [self.dut.ports_info[0]["pci"]]
+        for i in self.cbdma_list:
+            ports.append(i)
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list[0:2],
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+        )
+        self.vhost_user_pmd.execute_cmd("start")
+        self.mount_tmpfs_for_4k(number=1)
+        virtio_eal_param = "--no-huge -m 1024 --vdev net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,packed_vq=1,queues=1"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio0_core_list, eal_param=virtio_eal_param
+        )
+        self.virtio_user0_pmd.execute_cmd("set fwd mac")
+        self.virtio_user0_pmd.execute_cmd("start")
+        self.send_and_verify()
+        self.result_table_print()
+
+    def test_loopback_packed_ring_large_chain_packets_4k_pages_stress_test_with_server_mode_and_cbdma_enable(
+        self,
+    ):
+        """
+        Test Case 5: loopback packed ring large chain packets 4K-pages stress test with server mode and cbdma enable
+        """
+        self.get_cbdma_ports_info_and_bind_to_dpdk(1)
+        lcore_dma = f"lcore{self.vhost_core_list[1]}@{self.cbdma_list[0]}"
+        vhost_eal_param = "--no-huge -m 1024 --vdev 'eth_vhost0,iface=./vhost-net0,queues=1,dmas=[txq0],client=1'"
+        # mbuf-size must hold the 65535-byte chained segments, as in the test plan
+        vhost_param = " --no-numa --socket-num={} --nb-cores=1 --mbuf-size=65535 --lcore-dma=[{}]".format(
+            self.ports_socket, lcore_dma
+        )
+        ports = self.cbdma_list
+        self.start_vhost_user_testpmd(
+            cores=self.vhost_core_list[0:2],
+            eal_param=vhost_eal_param,
+            param=vhost_param,
+            ports=ports,
+        )
+        self.mount_tmpfs_for_4k(number=1)
+        virtio_eal_param = "--no-huge -m 1024 --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048,server=1"
+        virtio_param = "--rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1"
+        self.start_virtio_user0_testpmd(
+            cores=self.virtio0_core_list, eal_param=virtio_eal_param, param=virtio_param
+        )
+        # self.virtio_user0_pmd.execute_cmd('set fwd mac')
+        self.virtio_user0_pmd.execute_cmd("start")
+        self.send_chain_packets_from_vhost()
+        self.verify_virtio_user_receive_packets()
+
+    def tear_down(self):
+        """
+        Run after each test case.
+        """
+        self.virtio_user0_pmd.quit()
+        self.vhost_user_pmd.quit()
+        self.dut.send_expect("killall -s INT %s" % self.testpmd_name, "# ")
+        self.bind_cbdma_device_to_kernel()
+        self.umount_tmpfs_for_4k()
+
+    def tear_down_all(self):
+        """
+        Run after each test suite.
+        """
+        self.dut.close_session(self.vhost_user)
+        self.dut.close_session(self.virtio_user0)