From patchwork Mon Jun  5 06:34:07 2023
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 128073
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 1/3] test_plans/basic_4k_pages_cbdma: modify testplan by DPDK changed
Date: Mon, 5 Jun 2023 14:34:07 +0800
Message-Id: <20230605063407.563754-1-weix.ling@intel.com>
List-Id: test suite reviews and discussions

Since DPDK commit a399d7b5 ("vfio: do not coalesce DMA mappings"), the
vhost-user back-end can no longer be launched with the `--no-huge -m 1024`
parameters, so delete them from the vhost-user commands in the test plan.

Signed-off-by: Wei Ling <weix.ling@intel.com>
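A note on the root cause (our reading, not text from the original patch):
with `--no-huge -m 1024` the EAL backs its memory with 4 KiB pages, and once
vfio stops coalescing adjacent mappings, every page costs one DMA mapping
entry. A rough calculation shows why the old command line stops working::

    1024 MB of 4 KiB pages -> 1024 * 1024 / 4 = 262144 mapping entries
    vfio_iommu_type1 dma_entry_limit (assumed kernel default) = 65535

Hugepage-backed memory needs only a handful of mappings, which is why the
vhost-user back-end now has to run on hugepages.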
---
 test_plans/basic_4k_pages_cbdma_test_plan.rst | 560 +++++++++---------
 1 file changed, 278 insertions(+), 282 deletions(-)

diff --git a/test_plans/basic_4k_pages_cbdma_test_plan.rst b/test_plans/basic_4k_pages_cbdma_test_plan.rst
index 6c7d8398..c9586521 100644
--- a/test_plans/basic_4k_pages_cbdma_test_plan.rst
+++ b/test_plans/basic_4k_pages_cbdma_test_plan.rst
@@ -56,31 +56,31 @@ General set up

 2. Compile DPDK::

-    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static
-    # ninja -C -j 110
-    For example:
-    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
-    ninja -C x86_64-native-linuxapp-gcc -j 110
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static
+    # ninja -C -j 110
+    For example:
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110

 3. Get the PCI device ID and DMA device IDs of the DUT; for example, 0000:18:00.0 is the PCI device ID, and 0000:00:04.0 and 0000:00:04.1 are DMA device IDs::

-    # ./usertools/dpdk-devbind.py -s

-    Network devices using kernel driver
-    ===================================
-    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+    # ./usertools/dpdk-devbind.py -s

+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci

-    DMA devices using kernel driver
-    ===============================
-    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
-    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci

 4. Prepare tmpfs with 4K-pages::

-    mkdir /mnt/tmpfs_nohuge0
-    mkdir /mnt/tmpfs_nohuge1
-    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
-    mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G
+    mkdir /mnt/tmpfs_nohuge0
+    mkdir /mnt/tmpfs_nohuge1
+    mount tmpfs /mnt/tmpfs_nohuge0 -t tmpfs -o size=4G
+    mount tmpfs /mnt/tmpfs_nohuge1 -t tmpfs -o size=4G

 Test case
 =========

@@ -89,11 +89,11 @@ Common steps
 ------------

 1. Bind 1 NIC port and CBDMA channels to vfio-pci::

-    # ./usertools/dpdk-devbind.py -b vfio-pci
-    # ./usertools/dpdk-devbind.py -b vfio-pci
+    # ./usertools/dpdk-devbind.py -b vfio-pci
+    # ./usertools/dpdk-devbind.py -b vfio-pci

-    For example, bind 1 NIC port and 1 CBDMA channel:
-    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0 0000:80:04.0
+    For example, bind 1 NIC port and 1 CBDMA channel:
+    # ./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0 0000:80:04.0

 Test Case 1: Basic test vhost-user/virtio-user split ring vhost async operation using 4K-pages and cbdma enable
 ---------------------------------------------------------------------------------------------------------------
@@ -102,22 +102,21 @@ in 4K-pages memory environment and PVP vhost-user/virtio-user topology.

 1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, launch vhost::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \
-    -a 0000:18:00.0 -a 0000:00:04.0 \
-    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
-    -- -i --no-numa --socket-num=0
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+    -- -i --no-numa --socket-num=0
+    testpmd>start
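Note (supplementary check, not a step from the original plan): because the
vhost back-end no longer runs with `--no-huge`, hugepages must already be
reserved on the DUT before launching it. A quick way to see what is
available::

    # ./usertools/dpdk-hugepages.py -s
    # grep Huge /proc/meminfo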
2. Launch virtio-user with 4K-pages::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --no-huge -m 1024 --file-prefix=virtio-user \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 \
-    -- -i
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --no-huge -m 1024 --file-prefix=virtio-user \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,queues=1 \
+    -- -i
+    testpmd>start

3. Send packets of different sizes ([64, 128, 256, 512, 1024, 1518]) with the packet generator, and check the throughput with the below command::

-    testpmd>show port stats all
+    testpmd>show port stats all

 Test Case 2: Basic test vhost-user/virtio-user packed ring vhost async operation using 4K-pages and cbdma enable
 ----------------------------------------------------------------------------------------------------------------
@@ -126,22 +125,21 @@ in 4K-pages memory environment and PVP vhost-user/virtio-user topology.

 1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, launch vhost::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \
-    -a 0000:18:00.0 -a 0000:00:04.0 \
-    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
-    -- -i --no-numa --socket-num=0
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --file-prefix=vhost -a 0000:18:00.0 -a 0000:00:04.0 \
+    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0]' \
+    -- -i --no-numa --socket-num=0
+    testpmd>start

2. Launch virtio-user with 4K-pages::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --no-huge -m 1024 --file-prefix=virtio-user \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,packed_vq=1,queues=1 \
-    -- -i
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --no-huge -m 1024 --file-prefix=virtio-user \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/vhost-net,packed_vq=1,queues=1 \
+    -- -i
+    testpmd>start

3. Send packets of different sizes ([64, 128, 256, 512, 1024, 1518]) with the packet generator, and check the throughput with the below command::

-    testpmd>show port stats all
+    testpmd>show port stats all

 Test Case 3: VM2VM vhost-user/virtio-net split ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable
 -------------------------------------------------------------------------------------------------------------------------------
@@ -150,57 +148,56 @@ TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with C

 1.
Bind 2 CBDMA port to vfio-pci, then launch vhost by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \ - -a 0000:00:04.0 -a 0000:00:04.1 \ - --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0],dma-ring-size=2048' \ - --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1],dma-ring-size=2048' \ - --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 - testpmd>start + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \ + --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0],dma-ring-size=2048' \ + --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1],dma-ring-size=2048' \ + --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 + testpmd>start 2. Launch VM1 and VM2:: - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ - -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 - - taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12 + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ + -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10 + + taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm 
-cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12

3. On VM1, set the virtio device IP and add a static ARP entry::

-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02

4. On VM2, set the virtio device IP and add a static ARP entry::

-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01

5. Check the iperf performance between the two VMs with the below commands::

-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

6. Check that the two VMs can receive and send big packets to each other::

-    testpmd>show port xstats all
-    Port 0 should have tx packets above 1518
-    Port 1 should have rx packets above 1518
+    testpmd>show port xstats all
+    Port 0 should have tx packets above 1518
+    Port 1 should have rx packets above 1518

 Test Case 4: VM2VM vhost-user/virtio-net packed ring vhost async operation test with tcp traffic using 4K-pages and cbdma enable
 --------------------------------------------------------------------------------------------------------------------------------
@@ -209,57 +206,56 @@ TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with C

 1. Bind 2 CBDMA ports to vfio-pci, then launch vhost by the below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-    -a 0000:00:04.0 -a 0000:00:04.1 \
-    --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0],dma-ring-size=2048' \
-    --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1],dma-ring-size=2048' \
-    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:00:04.0;rxq0@0000:00:04.0],dma-ring-size=2048' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:00:04.1;rxq0@0000:00:04.1],dma-ring-size=2048' \
+    --iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024
+    testpmd>start

2.
Launch VM1 and VM2:: - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ - -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 - - taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ + -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10 + + taskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1 \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12 3. 
On VM1, set the virtio device IP and add a static ARP entry::

-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02

4. On VM2, set the virtio device IP and add a static ARP entry::

-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01

5. Check the iperf performance between the two VMs with the below commands::

-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

6. Check that the two VMs can receive and send big packets to each other::

-    testpmd>show port xstats all
-    Port 0 should have tx packets above 1518
-    Port 1 should have rx packets above 1518
+    testpmd>show port xstats all
+    Port 0 should have tx packets above 1518
+    Port 1 should have rx packets above 1518

 Test Case 5: vm2vm vhost/virtio-net split ring multi queues using 4K-pages and cbdma enable
 -------------------------------------------------------------------------------------------
@@ -270,107 +266,109 @@ The dynamic change of multi-queues number is also tested.

 1. Bind 4 CBDMA ports to vfio-pci, launch vhost::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;txq2@0000:00:04.2;txq3@0000:00:04.2;txq4@0000:00:04.3;txq5@0000:00:04.3;txq6@0000:00:04.3;txq7@0000:00:04.3]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;txq2@0000:00:04.2;txq3@0000:00:04.2;txq4@0000:00:04.3;txq5@0000:00:04.3;txq6@0000:00:04.3;txq7@0000:00:04.3]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start

2.
Launch VM qemu:: - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device 
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12

3. On VM1, set the virtio device IP and add a static ARP entry::

-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02

4. On VM2, set the virtio device IP and add a static ARP entry::

-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01

5. Scp 1MB file from VM1 to VM2::

-    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name

6. Check the iperf performance between the two VMs with the below commands::

-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

7. Quit and relaunch vhost with different CBDMA channels::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;txq2@0000:00:04.2;txq3@0000:00:04.2;txq4@0000:00:04.2;txq5@0000:00:04.2;rxq2@0000:00:04.3;rxq3@0000:00:04.3;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3],dma-ring-size=1024' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.2;txq1@0000:00:04.2;txq2@0000:00:04.2;txq3@0000:00:04.2;txq4@0000:00:04.2;txq5@0000:00:04.2;rxq2@0000:00:04.3;rxq3@0000:00:04.3;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3],dma-ring-size=1024' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start

8. Rerun step 5-6.

9. Quit and relaunch vhost without CBDMA channels::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
-    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
+    -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
+    testpmd>start

10. On VM1, set virtio device::

-    ethtool -L ens5 combined 4
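Note (supplementary check, not part of the original plan): after changing the
queue count with `ethtool -L`, the new setting can be confirmed inside the
guest before rerunning traffic::

    ethtool -l ens5

The "Combined" value under "Current hardware settings" should now read 4.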
11. On VM2, set virtio device::

-    ethtool -L ens5 combined 4

12. Scp 1MB file from VM1 to VM2::

-    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name

13. Check the iperf performance and compare it with the CBDMA-enabled performance; ensure the CBDMA-enabled performance is higher::

-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

14. Quit and relaunch vhost with 1 queue::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \
     -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1
     testpmd>start

15. On VM1, set virtio device::

-    ethtool -L ens5 combined 1
+    ethtool -L ens5 combined 1

16. On VM2, set virtio device::

-    ethtool -L ens5 combined 1
+    ethtool -L ens5 combined 1

17. Scp 1MB file from VM1 to VM2, and check that the packets can be forwarded successfully by scp::

@@ -389,57 +387,56 @@ uses the asynchronous operations with CBDMA channels in 4K-pages memory environm

 1. Bind 2 CBDMA ports to vfio-pci, launch vhost::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-    -a 0000:00:04.0 -a 0000:00:04.1 \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start

2.
Launch VM qemu:: - taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 - - taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 + taskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10 + + taskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu22-04-2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ + -chardev socket,id=char0,path=./vhost-net1,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device 
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12

3. On VM1, set the virtio device IP and add a static ARP entry::

-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02

4. On VM2, set the virtio device IP and add a static ARP entry::

-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01

5. Scp 1MB file from VM1 to VM2::

-    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name

6. Check the iperf performance between the two VMs with the below commands::

-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

 Test Case 7: vm2vm vhost/virtio-net split ring multi queues using 1G/4k-pages and cbdma enable
 ----------------------------------------------------------------------------------------------
@@ -450,66 +447,65 @@ environment and the front-end is in 4k-pages memory environment.

 1. Bind 4 CBDMA ports to vfio-pci, launch vhost::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
-    -a 0000:00:04.0 -a 0000:00:04.1 \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --file-prefix=vhost -a 0000:00:04.0 -a 0000:00:04.1 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.0;txq5@0000:00:04.0;rxq2@0000:00:04.1;rxq3@0000:00:04.1;rxq4@0000:00:04.1;rxq5@0000:00:04.1;rxq6@0000:00:04.1;rxq7@0000:00:04.1],dma-ring-size=1024' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start

2.
Launch VM qemu:: - taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12 + taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ + -chardev 
socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12

3. On VM1, set the virtio device IP and add a static ARP entry::

-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02

4. On VM2, set the virtio device IP and add a static ARP entry::

-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01

5. Scp 1MB file from VM1 to VM2::

-    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name

6. Check the iperf performance between the two VMs with the below commands::

-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

7. Quit and relaunch vhost with different CBDMA channels (note: the first four dmas entries are written as `txq00000:...` in the original file; the `@` separator is restored here)::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,tso=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;txq6@0000:00:04.1;txq7@0000:00:04.1;rxq0@0000:00:04.2;rxq1@0000:00:04.2;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start

8. Rerun step 5-6.
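Note (supplementary, not a step from the original plan): test cases 7 and 8
keep the vhost back-end in 1G-pages memory, so 1G hugepages must be reserved
on the DUT beforehand. A minimal sketch; the page count and mount point are
illustrative, and on fragmented hosts 1G pages may need to be reserved at
boot via kernel parameters instead::

    echo 8 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
    mkdir -p /mnt/huge1G
    mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge1G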
@@ -522,57 +518,57 @@ and the front-end is in 4k-pages memory environment.

 1. Bind 8 CBDMA ports to vfio-pci, launch vhost::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 -m 1024 --file-prefix=vhost \
-    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
-    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.4;txq1@0000:00:04.4;txq2@0000:00:04.4;txq3@0000:00:04.4;txq4@0000:00:04.5;txq5@0000:00:04.5;rxq2@0000:00:04.6;rxq3@0000:00:04.6;rxq4@0000:00:04.6;rxq5@0000:00:04.6;rxq6@0000:00:04.7;rxq7@0000:00:04.7]' \
-    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
-    testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-33 -n 4 --file-prefix=vhost \
+    -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
+    --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.0;txq2@0000:00:04.0;txq3@0000:00:04.0;txq4@0000:00:04.1;txq5@0000:00:04.1;rxq2@0000:00:04.2;rxq3@0000:00:04.2;rxq4@0000:00:04.3;rxq5@0000:00:04.3;rxq6@0000:00:04.3;rxq7@0000:00:04.3]' \
+    --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:00:04.4;txq1@0000:00:04.4;txq2@0000:00:04.4;txq3@0000:00:04.4;txq4@0000:00:04.5;txq5@0000:00:04.5;rxq2@0000:00:04.6;rxq3@0000:00:04.6;rxq4@0000:00:04.6;rxq5@0000:00:04.6;rxq6@0000:00:04.7;rxq7@0000:00:04.7]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start

2.
Launch VM qemu:: - taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ - -chardev socket,id=char0,path=./vhost-net0,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 - - taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ - -chardev socket,id=char0,path=./vhost-net1,server \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12 + taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge0,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6000-:22 \ + -chardev socket,id=char0,path=./vhost-net0,server \ + -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \ + -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10 + + taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-6.2.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \ + -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_nohuge1,share=on \ + -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004_2.img \ + -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ + -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ + -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ + -netdev user,id=nttsip1,hostfwd=tcp:10.239.251.220:6001-:22 \ + -chardev 
socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12

3. On VM1, set the virtio device IP and add a static ARP entry::

-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.2
-    arp -s 1.1.1.8 52:54:00:00:00:02
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02

4. On VM2, set the virtio device IP and add a static ARP entry::

-    ethtool -L ens5 combined 8
-    ifconfig ens5 1.1.1.8
-    arp -s 1.1.1.2 52:54:00:00:00:01
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01

5. Scp 1MB file from VM1 to VM2::

-    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name

6. Check the iperf performance between the two VMs with the below commands::

-    Under VM1, run: `iperf -s -i 1`
-    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`

7. Relaunch VM1, and rerun step 3.

From patchwork Mon Jun  5 06:34:17 2023
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 128074
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 2/3] tests/basic_4k_pages_cbdma: modify testsuite by DPDK changed
Date: Mon, 5 Jun 2023 14:34:17 +0800
Message-Id: <20230605063417.563776-1-weix.ling@intel.com>

Since DPDK commit a399d7b5 ("vfio: do not coalesce DMA mappings"), the
vhost-user back-end can no longer be launched with the `--no-huge -m 1024`
parameters, so delete them from the vhost-user commands in the test suite.
Also optimize the test suite to read and update the expected throughput
values in the suite config file conf/basic_4k_pages_cbdma.cfg.

Signed-off-by: Wei Ling <weix.ling@intel.com>
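For reference, a sketch of what the suite config file read through
get_suite_cfg() might contain. The key names (packet_sizes, test_duration,
accepted_tolerance, expected_throughput) mirror the ones the test suite
reads below; the section layout and all values are illustrative only::

    [suite]
    update_expected = True
    packet_sizes = [64, 128, 256, 512, 1024, 1518]
    test_duration = 60
    accepted_tolerance = 2
    expected_throughput = {
        'test_perf_pvp_packed_ring_vhost_async_operation_using_4K_pages_and_cbdma_enable': {
            64: 0.000, 128: 0.000, 256: 0.000, 512: 0.000, 1024: 0.000, 1518: 0.000}}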
---
 tests/TestSuite_basic_4k_pages_cbdma.py | 225 ++++++++++++++++++------
 1 file changed, 172 insertions(+), 53 deletions(-)

diff --git a/tests/TestSuite_basic_4k_pages_cbdma.py b/tests/TestSuite_basic_4k_pages_cbdma.py
index 70415dd1..0f7936cb 100644
--- a/tests/TestSuite_basic_4k_pages_cbdma.py
+++ b/tests/TestSuite_basic_4k_pages_cbdma.py
@@ -7,13 +7,19 @@ import random
 import re
 import string
 import time
+from copy import deepcopy

 from framework.config import VirtConf
 from framework.packet import Packet
 from framework.pktgen import PacketGeneratorHelper
 from framework.pmd_output import PmdOutput
 from framework.qemu_kvm import QEMUKvm
-from framework.settings import CONFIG_ROOT_PATH, get_host_ip
+from framework.settings import (
+    CONFIG_ROOT_PATH,
+    UPDATE_EXPECTED,
+    get_host_ip,
+    load_global_setting,
+)
 from framework.test_case import TestCase

@@ -45,14 +51,13 @@ class TestBasic4kPagesCbdma(TestCase):
         self.virtio_user0_pmd = PmdOutput(self.dut, self.virtio_user0)
         self.pci_info = self.dut.ports_info[0]["pci"]
         self.dst_mac = self.dut.get_mac_address(self.dut_ports[0])
-        self.frame_sizes = [64, 128, 256, 512, 1024, 1518]
         self.out_path = "/tmp/%s" % self.suite_name
         out = self.tester.send_expect("ls -d %s" % self.out_path, "# ")
         if "No such file or directory" in out:
             self.tester.send_expect("mkdir -p %s" % self.out_path, "# ")
         # create an instance to set stream field setting
         self.pktgen_helper = PacketGeneratorHelper()
-        self.number_of_ports = 1
+        self.nb_ports = 1
         self.app_testpmd_path = self.dut.apps_name["test-pmd"]
         self.testpmd_name = self.app_testpmd_path.split("/")[-1]
         self.vm_num = 2
@@ -102,6 +107,15 @@ class TestBasic4kPagesCbdma(TestCase):
         self.vm1_user = param["login"][0]["user"]
         self.vm1_passwd = param["login"][0]["password"]

+        self.logger.info(
+            "You can config packet_size in file %s.cfg," % self.suite_name
+            + " in region 'suite' like packet_sizes=[64, 128, 256]"
+        )
+        if "packet_sizes" in self.get_suite_cfg():
+            self.frame_sizes = self.get_suite_cfg()["packet_sizes"]
+        self.test_duration = self.get_suite_cfg()["test_duration"]
+        self.gap = self.get_suite_cfg()["accepted_tolerance"]
+
     def set_up(self):
         """
         Run before each test case.
@@ -111,14 +125,13 @@ class TestBasic4kPagesCbdma(TestCase): self.dut.send_expect("rm -rf /root/dpdk/vhost-net*", "# ") # Prepare the result table self.table_header = ["Frame"] - self.table_header.append("Mode") self.table_header.append("Mpps") - self.table_header.append("Queue Num") self.table_header.append("% linerate") self.result_table_create(self.table_header) self.vm_dut = [] self.vm = [] - self.packed = False + self.throughput = {} + self.test_result = {} def start_vm0(self, packed=False, queues=1, server=False): packed_param = ",packed=on" if packed else "" @@ -145,6 +158,9 @@ class TestBasic4kPagesCbdma(TestCase): ) self.vm0_session.send_expect(cmd0, "# ") time.sleep(10) + self.monitor_socket = "/tmp/vm0_monitor.sock" + lcores = self.vm0_cpupin.split(" ") + self.pin_threads(lcores) self.vm0_dut = self.connect_vm0() self.verify(self.vm0_dut is not None, "vm start fail") self.vm_session = self.vm0_dut.new_session(suite="vm_session") @@ -174,10 +190,52 @@ class TestBasic4kPagesCbdma(TestCase): ) self.vm1_session.send_expect(cmd1, "# ") time.sleep(10) + self.monitor_socket = "/tmp/vm1_monitor.sock" + lcores = self.vm1_cpupin.split(" ") + self.pin_threads(lcores) self.vm1_dut = self.connect_vm1() self.verify(self.vm1_dut is not None, "vm start fail") self.vm_session = self.vm1_dut.new_session(suite="vm_session") + def __monitor_session(self, command, *args): + """ + Connect the qemu monitor session, send command and return output message. + """ + self.dut.send_expect("nc -U %s" % self.monitor_socket, "(qemu)") + + cmd = command + for arg in args: + cmd += " " + str(arg) + + # after quit command, qemu will exit + if "quit" in cmd: + self.dut.send_command("%s" % cmd) + out = self.dut.send_expect(" ", "#") + else: + out = self.dut.send_expect("%s" % cmd, "(qemu)", 30) + self.dut.send_expect("^C", "# ") + return out + + def pin_threads(self, lcores): + thread_reg = r"CPU #\d+: thread_id=(\d+)" + output = self.__monitor_session("info", "cpus") + threads = re.findall(thread_reg, output) + if len(threads) <= len(lcores): + map = list(zip(threads, lcores)) + else: + self.logger.warning( + "lcores is less than VM's threads, 1 lcore will pin multiple VM's threads" + ) + lcore_len = len(lcores) + for item in threads: + thread_idx = threads.index(item) + if thread_idx >= lcore_len: + lcore_idx = thread_idx % lcore_len + lcores.append(lcores[lcore_idx]) + map = list(zip(threads, lcores)) + for thread, lcore in map: + self.dut.send_expect("taskset -pc %s %s" % (lcore, thread), "#") + def connect_vm0(self): self.vm0 = QEMUKvm(self.dut, "vm0", self.suite_name) self.vm0.net_type = "hostfwd" @@ -299,32 +357,97 @@ class TestBasic4kPagesCbdma(TestCase): streams = self.pktgen_helper.prepare_stream_from_tginput( tgen_input, 100, None, self.tester.pktgen ) - _, pps = self.tester.pktgen.measure_throughput(stream_ids=streams) + # set traffic option + traffic_opt = { + "delay": 5, + "duration": self.get_suite_cfg()["test_duration"], + } + _, pps = self.tester.pktgen.measure_throughput( + stream_ids=streams, options=traffic_opt + ) Mpps = pps / 1000000.0 - # self.verify(Mpps > self.check_value[frame_size], - # "%s of frame size %d speed verify failed, expect %s, result %s" % ( - # self.running_case, frame_size, self.check_value[frame_size], Mpps)) - throughput = Mpps * 100 / float(self.wirespeed(self.nic, 64, 1)) + self.throughput[frame_size] = Mpps + linerate = Mpps * 100 / float(self.wirespeed(self.nic, 64, 1)) results_row = [frame_size] - results_row.append("4K pages") results_row.append(Mpps) - 
-        results_row.append("1")
-        results_row.append(throughput)
+        results_row.append(linerate)
         self.result_table_add(results_row)
+        self.result_table_print()

-    def start_vhost_user_testpmd(self, cores, param="", eal_param="", ports=""):
+    def handle_expected(self):
         """
-        launch the testpmd as virtio with vhost_user
+        Update expected numbers to the configuration file: $DTS_CFG_FOLDER/$suite_name.cfg
         """
-        self.vhost_user_pmd.start_testpmd(
-            cores=cores,
-            param=param,
-            eal_param=eal_param,
-            ports=ports,
-            prefix="vhost",
-            fixed_prefix=True,
+        if load_global_setting(UPDATE_EXPECTED) == "yes":
+            for frame_size in self.frame_sizes:
+                self.expected_throughput[frame_size] = round(
+                    self.throughput[frame_size], 3
+                )
+
+    def handle_results(self):
+        """
+        Process the results:
+        1. save them to self.test_result
+        2. create the test results table
+        """
+        # save test results to self.test_result
+        header = self.table_header
+        header.append("Expected Throughput(Mpps)")
+        header.append("Status")
+        self.result_table_create(self.table_header)
+        for frame_size in self.frame_sizes:
+            wirespeed = self.wirespeed(self.nic, frame_size, self.nb_ports)
+            ret_data = {}
+            ret_data[header[0]] = str(frame_size)
+            _real = float(self.throughput[frame_size])
+            _exp = float(self.expected_throughput[frame_size])
+            ret_data[header[1]] = "{:.3f}".format(_real)
+            ret_data[header[2]] = "{:.3f}%".format(_real * 100 / wirespeed)
+            ret_data[header[3]] = "{:.3f}".format(_exp)
+            gap = _exp * -self.gap * 0.01
+            if _real > _exp + gap:
+                ret_data[header[4]] = "PASS"
+            else:
+                ret_data[header[4]] = "FAIL"
+            self.test_result[frame_size] = deepcopy(ret_data)
+
+        for frame_size in self.test_result.keys():
+            table_row = list()
+            for i in range(len(header)):
+                table_row.append(self.test_result[frame_size][header[i]])
+            self.result_table_add(table_row)
+        # present test results to screen
+        self.result_table_print()
+        self.verify(
+            "FAIL" not in [result[header[4]] for result in self.test_result.values()],
+            "Excessive gap between test results and expectations",
         )

+    def start_vhost_user_testpmd(
+        self, cores, param="", eal_param="", ports="", no_pci=False
+    ):
+        """
+        launch the testpmd as the vhost-user back-end
+        """
+        if no_pci:
+            self.vhost_user_pmd.start_testpmd(
+                cores=cores,
+                param=param,
+                eal_param=eal_param,
+                no_pci=True,
+                prefix="vhost",
+                fixed_prefix=True,
+            )
+        else:
+            self.vhost_user_pmd.start_testpmd(
+                cores=cores,
+                param=param,
+                eal_param=eal_param,
+                ports=ports,
+                prefix="vhost",
+                fixed_prefix=True,
+            )
+
     def start_virtio_user0_testpmd(self, cores, eal_param="", param=""):
         """
         launch the testpmd as virtio with vhost_net0
@@ -489,11 +612,14 @@ class TestBasic4kPagesCbdma(TestCase):
         """
         Test Case 1: Basic test vhost-user/virtio-user split ring vhost async operation using 4K-pages and cbdma enable
         """
+        self.test_target = self.running_case
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.test_target
+        ]
         self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=1)
         dmas = "txq0@%s;rxq0@%s" % (self.cbdma_list[0], self.cbdma_list[0])
         vhost_eal_param = (
-            "--no-huge -m 1024 --vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[%s]'"
-            % dmas
+            "--vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[%s]'" % dmas
         )
         vhost_param = "--no-numa --socket-num=%s " % self.ports_socket
         ports = [self.dut.ports_info[0]["pci"]]
@@ -513,7 +639,8 @@ class TestBasic4kPagesCbdma(TestCase):
         self.virtio_user0_pmd.execute_cmd("set fwd mac")
         self.virtio_user0_pmd.execute_cmd("start")
         self.send_and_verify()
-        self.result_table_print()
+        self.handle_expected()
+        self.handle_results()

     def test_perf_pvp_packed_ring_vhost_async_operation_using_4K_pages_and_cbdma_enable(
         self,
     ):
@@ -521,11 +648,14 @@ class TestBasic4kPagesCbdma(TestCase):
         """
         Test Case 2: Basic test vhost-user/virtio-user packed ring vhost async operation using 4K-pages and cbdma enable
         """
+        self.test_target = self.running_case
+        self.expected_throughput = self.get_suite_cfg()["expected_throughput"][
+            self.test_target
+        ]
         self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=1)
         dmas = "txq0@%s;rxq0@%s" % (self.cbdma_list[0], self.cbdma_list[0])
         vhost_eal_param = (
-            "--no-huge -m 1024 --vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[%s]'"
-            % dmas
+            "--vdev 'net_vhost0,iface=./vhost-net,queues=1,dmas=[%s]'" % dmas
        )
         vhost_param = "--no-numa --socket-num=%s " % self.ports_socket
         ports = [self.dut.ports_info[0]["pci"]]
@@ -545,7 +675,8 @@ class TestBasic4kPagesCbdma(TestCase):
         self.virtio_user0_pmd.execute_cmd("set fwd mac")
         self.virtio_user0_pmd.execute_cmd("start")
         self.send_and_verify()
-        self.result_table_print()
+        self.handle_expected()
+        self.handle_results()

     def test_vm2vm_split_ring_vhost_async_operaiton_test_with_tcp_traffic_using_4k_pages_and_cbdma_enable(
         self,
     ):
@@ -557,8 +688,7 @@ class TestBasic4kPagesCbdma(TestCase):
         dmas1 = "txq0@%s;rxq0@%s" % (self.cbdma_list[0], self.cbdma_list[0])
         dmas2 = "txq0@%s;rxq0@%s" % (self.cbdma_list[1], self.cbdma_list[1])
         vhost_eal_param = (
-            "--no-huge -m 1024 "
-            + "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,tso=1,dmas=[%s],dma-ring-size=2048'"
+            "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,tso=1,dmas=[%s],dma-ring-size=2048'"
             % dmas1
             + " --vdev 'net_vhost1,iface=./vhost-net1,queues=1,tso=1,dmas=[%s],dma-ring-size=2048'"
             % dmas2
@@ -594,8 +724,7 @@ class TestBasic4kPagesCbdma(TestCase):
         dmas1 = "txq0@%s;rxq0@%s" % (self.cbdma_list[0], self.cbdma_list[0])
         dmas2 = "txq0@%s;rxq0@%s" % (self.cbdma_list[1], self.cbdma_list[1])
         vhost_eal_param = (
-            "--no-huge -m 1024 "
-            + "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,tso=1,dmas=[%s],dma-ring-size=2048'"
+            "--vdev 'net_vhost0,iface=./vhost-net0,queues=1,tso=1,dmas=[%s],dma-ring-size=2048'"
             % dmas1
             + " --vdev 'net_vhost1,iface=./vhost-net1,queues=1,tso=1,dmas=[%s],dma-ring-size=2048'"
             % dmas2
@@ -667,9 +796,7 @@ class TestBasic4kPagesCbdma(TestCase):
             )
         )
         vhost_eal_param = (
-            "--no-huge -m 1024 "
-            + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[%s]'"
-            % dmas1
+            "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[%s]'" % dmas1
             + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[%s]'"
             % dmas2
         )
@@ -748,8 +875,7 @@ class TestBasic4kPagesCbdma(TestCase):
             )
         )
         vhost_eal_param = (
-            "--no-huge -m 1024 "
-            + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[%s],dma-ring-size=1024'"
+            "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[%s],dma-ring-size=1024'"
             % dmas1
             + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[%s],dma-ring-size=1024'"
             % dmas2
@@ -769,8 +895,7 @@ class TestBasic4kPagesCbdma(TestCase):

         self.vhost_user_pmd.quit()
         vhost_eal_param = (
-            "--no-huge -m 1024 "
-            + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=4'"
+            "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=4'"
             + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=4'"
         )
         vhost_param = "--nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4"
@@ -778,7 +903,7 @@ class TestBasic4kPagesCbdma(TestCase):
             cores=self.vhost_core_list,
             eal_param=vhost_eal_param,
             param=vhost_param,
-            ports=self.cbdma_list,
+            no_pci=True,
         )
         self.vhost_user_pmd.execute_cmd("start")
         self.config_vm_combined(combined=4)
@@ -789,8 +914,7 @@ class TestBasic4kPagesCbdma(TestCase):

         self.vhost_user_pmd.quit()
         vhost_eal_param = (
-            "--no-huge -m 1024 "
-            + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=4'"
+            "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=4'"
             + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=4'"
         )
         vhost_param = " --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1"
@@ -798,7 +922,7 @@ class TestBasic4kPagesCbdma(TestCase):
             cores=self.vhost_core_list,
             eal_param=vhost_eal_param,
             param=vhost_param,
-            ports=self.cbdma_list,
+            no_pci=True,
         )
         self.vhost_user_pmd.execute_cmd("start")
         self.config_vm_combined(combined=1)
@@ -837,8 +961,7 @@ class TestBasic4kPagesCbdma(TestCase):
             )
         )
         vhost_eal_param = (
-            "--no-huge -m 1024 "
-            + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,tso=1,dmas=[%s]'"
+            "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,tso=1,dmas=[%s]'"
             % dmas
             + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,tso=1,dmas=[%s]'"
             % dmas
@@ -899,8 +1022,7 @@ class TestBasic4kPagesCbdma(TestCase):
             )
         )
         vhost_eal_param = (
-            "-m 1024 "
-            + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,tso=1,dmas=[%s],dma-ring-size=1024'"
+            "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,tso=1,dmas=[%s],dma-ring-size=1024'"
             % dmas
             + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,tso=1,dmas=[%s],dma-ring-size=1024'"
             % dmas
@@ -961,8 +1083,7 @@ class TestBasic4kPagesCbdma(TestCase):
             )
         )
         vhost_eal_param = (
-            "--no-huge -m 1024 "
-            + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,tso=1,dmas=[%s]'"
+            "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,tso=1,dmas=[%s]'"
             % dmas
             + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,tso=1,dmas=[%s]'"
             % dmas
@@ -1048,9 +1169,7 @@ class TestBasic4kPagesCbdma(TestCase):
             )
         )
         vhost_eal_param = (
-            "-m 1024 "
-            + "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[%s]'"
-            % dmas1
+            "--vdev 'net_vhost0,iface=./vhost-net0,client=1,queues=8,dmas=[%s]'" % dmas1
             + " --vdev 'net_vhost1,iface=./vhost-net1,client=1,queues=8,dmas=[%s]'"
             % dmas2
         )

From patchwork Mon Jun 5 06:34:26 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 128075
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 3/3] conf/basic_4k_pages_cbdma: add suite config as expected values
Date: Mon, 5 Jun 2023 14:34:26 +0800
Message-Id: <20230605063426.563798-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0

Add a [suite] section to save the case config and the expected throughput values.

Signed-off-by: Wei Ling
---
 conf/basic_4k_pages_cbdma.cfg | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/conf/basic_4k_pages_cbdma.cfg b/conf/basic_4k_pages_cbdma.cfg
index cdac3af6..5901edb7 100644
--- a/conf/basic_4k_pages_cbdma.cfg
+++ b/conf/basic_4k_pages_cbdma.cfg
@@ -34,3 +34,10 @@ daemon =
     enable=yes;
 qemu =
     path=/home/QEMU/qemu-7.0.0/bin/qemu-system-x86_64;
+
+[suite]
+update_expected = True
+packet_sizes = [64, 128, 256, 512, 1024, 1518]
+test_duration = 30
+accepted_tolerance = 10
+expected_throughput = {'test_perf_pvp_split_ring_vhost_async_operation_using_4K_pages_and_cbdma_enable': {64: 0.00, 128: 0.00, 256: 0.00, 512: 0.00, 1024: 0.00, 1518: 0.00}, 'test_perf_pvp_packed_ring_vhost_async_operation_using_4K_pages_and_cbdma_enable': {64: 0.00, 128: 0.00, 256: 0.00, 512: 0.00, 1024: 0.00, 1518: 0.00}}
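For reviewers who want to see the whole expected-values flow in one place, below is a minimal, self-contained sketch of how the pieces added in patches 2/3 and 3/3 fit together: the suite reads `accepted_tolerance` and `expected_throughput` from the [suite] section of conf/basic_4k_pages_cbdma.cfg, marks a frame size PASS when the measured Mpps stays within the tolerance below the expectation (the same arithmetic as handle_results()), and, when the DTS update-expected option is enabled, writes the measured values back as the new expectations (as handle_expected() does). The helper names `check_throughput` and `update_expected_values` are illustrative only and are not part of the patch or the DTS framework.

    # Illustrative sketch only, not part of the patch: it mirrors the
    # handle_expected()/handle_results() logic with plain dicts instead of
    # DTS framework objects.

    def check_throughput(measured, expected, tolerance_pct):
        """Return {frame_size: "PASS"/"FAIL"}. A frame size passes when the
        measured Mpps is above the expected Mpps minus the accepted tolerance
        (expressed in percent of the expectation)."""
        results = {}
        for frame_size, exp in expected.items():
            gap = exp * tolerance_pct * 0.01  # allowed shortfall, e.g. 10% of exp
            real = measured.get(frame_size, 0.0)
            results[frame_size] = "PASS" if real > exp - gap else "FAIL"
        return results

    def update_expected_values(measured, expected, update=False):
        """When the update-expected mode is requested, replace the expectations
        with the measured values rounded to 3 decimals, like handle_expected()."""
        if update:
            for frame_size, mpps in measured.items():
                expected[frame_size] = round(mpps, 3)
        return expected

    if __name__ == "__main__":
        # values shaped like the [suite] section of conf/basic_4k_pages_cbdma.cfg
        accepted_tolerance = 10  # percent
        expected = {64: 5.0, 128: 4.8, 1518: 0.8}
        measured = {64: 4.4, 128: 4.9, 1518: 0.79}
        print(check_throughput(measured, expected, accepted_tolerance))
        # {64: 'FAIL', 128: 'PASS', 1518: 'PASS'}
        # 64B fails: 4.4 is below the 5.0 - 0.5 = 4.5 Mpps floor.

With the cfg shipped in this patch the expectations are all 0.00, so every case passes until a baseline run with the update-expected mode records real numbers; after that, any run falling more than `accepted_tolerance` percent below the recorded baseline is reported as FAIL by handle_results().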