From patchwork Fri Jan 14 08:18:49 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 105810
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1] test_plans/*: modify dpdk app name
Date: Fri, 14 Jan 2022 16:18:49 +0800
Message-Id: <20220114081849.1214764-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

test_plans/*: modify the DPDK app name used in the test commands, e.g. change testpmd to dpdk-testpmd.

Signed-off-by: Wei Ling
Tested-by: Wei Ling
---
 test_plans/dpdk_gro_lib_test_plan.rst | 24 ++-- test_plans/dpdk_gso_lib_test_plan.rst | 12 +- .../dpdk_hugetlbfs_mount_size_test_plan.rst | 18 +-- ...back_virtio_user_server_mode_test_plan.rst | 110 +++++++++--------- .../perf_virtio_user_loopback_test_plan.rst | 40 +++---- .../pvp_diff_qemu_version_test_plan.rst | 8 +- .../pvp_multi_paths_performance_test_plan.rst | 40 +++---- ...host_single_core_performance_test_plan.rst | 40 +++---- ...rtio_single_core_performance_test_plan.rst | 40 +++---- ...emu_multi_paths_port_restart_test_plan.rst | 24 ++-- test_plans/pvp_share_lib_test_plan.rst | 6 +- .../pvp_vhost_user_reconnect_test_plan.rst | 60 +++++----- test_plans/pvp_virtio_bonding_test_plan.rst | 12 +- ...pvp_virtio_user_2M_hugepages_test_plan.rst | 8 +- .../pvp_virtio_user_4k_pages_test_plan.rst | 8 +- ...er_multi_queues_port_restart_test_plan.rst | 40 +++---- test_plans/vhost_1024_ethports_test_plan.rst | 2 +- .../vhost_event_idx_interrupt_test_plan.rst | 40 +++---- .../vhost_multi_queue_qemu_test_plan.rst | 12 +- test_plans/vhost_pmd_xstats_test_plan.rst | 44 +++---- test_plans/vhost_user_interrupt_test_plan.rst | 24 ++-- .../vhost_user_live_migration_test_plan.rst | 92 ++++++++------- .../vhost_virtio_pmd_interrupt_test_plan.rst | 34 +++--- .../vhost_virtio_user_interrupt_test_plan.rst | 36 +++--- .../virtio_event_idx_interrupt_test_plan.rst | 36 +++--- .../virtio_pvp_regression_test_plan.rst | 32 ++--- .../vm2vm_virtio_net_perf_test_plan.rst | 32 ++--- 
test_plans/vm2vm_virtio_pmd_test_plan.rst | 84 ++++++------- test_plans/vm2vm_virtio_user_test_plan.rst | 110 +++++++++--------- test_plans/vswitch_sample_cbdma_test_plan.rst | 28 ++--- 30 files changed, 550 insertions(+), 546 deletions(-) diff --git a/test_plans/dpdk_gro_lib_test_plan.rst b/test_plans/dpdk_gro_lib_test_plan.rst index 9685afc1..88ef971a 100644 --- a/test_plans/dpdk_gro_lib_test_plan.rst +++ b/test_plans/dpdk_gro_lib_test_plan.rst @@ -129,8 +129,8 @@ Test Case1: DPDK GRO lightmode test with tcp/ipv4 traffic 2. Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1:: - ./dpdk-devbind.py -b vfio-pci xx:xx.x - ./testpmd -l 2-4 -n 4 \ + ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>stop @@ -181,8 +181,8 @@ Test Case2: DPDK GRO heavymode test with tcp/ipv4 traffic 2. Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 2:: - ./dpdk-devbind.py -b vfio-pci xx:xx.x - ./testpmd -l 2-4 -n 4 \ + ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>stop @@ -233,8 +233,8 @@ Test Case3: DPDK GRO heavymode_flush4 test with tcp/ipv4 traffic 2. Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 4:: - ./dpdk-devbind.py -b vfio-pci xx:xx.x - ./testpmd -l 2-4 -n 4 \ + ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>stop @@ -301,8 +301,8 @@ Vxlan topology 2. 
Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 4:: - ./dpdk-devbind.py -b vfio-pci xx:xx.x - ./testpmd -l 2-4 -n 4 \ + ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>stop @@ -365,8 +365,8 @@ NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net 2. Bind cbdma port and nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1:: - ./dpdk-devbind.py -b vfio-pci xx:xx.x - ./x86_64-native-linuxapp-gcc/app/testpmd -l 29-31 -n 4 \ + ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-31 -n 4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2 set fwd csum stop @@ -423,8 +423,8 @@ NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net 2. Bind cbdma port and nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1:: - ./dpdk-devbind.py -b vfio-pci xx:xx.x - ./x86_64-native-linuxapp-gcc/app/testpmd -l 29-31 -n 4 \ + ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-31 -n 4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,dmas=[txq0@80:04.0;txq1@80:04.1]' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2 set fwd csum stop diff --git a/test_plans/dpdk_gso_lib_test_plan.rst b/test_plans/dpdk_gso_lib_test_plan.rst index 2c8b32ad..42765461 100644 --- a/test_plans/dpdk_gso_lib_test_plan.rst +++ b/test_plans/dpdk_gso_lib_test_plan.rst @@ -98,8 +98,8 @@ Test Case1: DPDK GSO test with tcp traffic 2. 
Bind nic1 to vfio-pci, launch vhost-user with testpmd:: - ./dpdk-devbind.py -b vfio-pci xx:xx.x # xx:xx.x is the pci addr of nic1 - ./testpmd -l 2-4 -n 4 \ + ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x # xx:xx.x is the pci addr of nic1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>stop @@ -158,8 +158,8 @@ Test Case3: DPDK GSO test with vxlan traffic 2. Bind nic1 to vfio-pci, launch vhost-user with testpmd:: - ./dpdk-devbind.py -b vfio-pci xx:xx.x - ./testpmd -l 2-4 -n 4 \ + ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>stop @@ -212,8 +212,8 @@ Test Case4: DPDK GSO test with gre traffic 2. Bind nic1 to vfio-pci, launch vhost-user with testpmd:: - ./dpdk-devbind.py -b vfio-pci xx:xx.x - ./testpmd -l 2-4 -n 4 \ + ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>stop diff --git a/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst b/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst index bda21d39..1ddf6e1c 100644 --- a/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst +++ b/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst @@ -51,14 +51,14 @@ Test Case 1: default hugepage size w/ and w/o numa 2. Bind one nic port to vfio-pci driver, launch testpmd:: - ./dpdk-testpmd -c 0x3 -n 4 --huge-dir /mnt/huge --file-prefix=abc -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --huge-dir /mnt/huge --file-prefix=abc -- -i testpmd>start 3. Send packet with packet generator, check testpmd could forward packets correctly. 4. 
Goto step 2 resart testpmd with numa support:: - ./dpdk-testpmd -c 0x3 -n 4 --huge-dir /mnt/huge --file-prefix=abc -- -i --numa + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --huge-dir /mnt/huge --file-prefix=abc -- -i --numa testpmd>start 5. Send packets with packet generator, make sure testpmd could receive and fwd packets correctly. @@ -73,10 +73,10 @@ Test Case 2: mount size exactly match total hugepage size with two mount points 2. Bind two nic ports to vfio-pci driver, launch testpmd with numactl:: - numactl --membind=1 ./dpdk-testpmd -l 31-32 -n 4 --legacy-mem --socket-mem 0,2048 --huge-dir /mnt/huge1 --file-prefix=abc -a 82:00.0 -- -i --socket-num=1 --no-numa + numactl --membind=1 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --legacy-mem --socket-mem 0,2048 --huge-dir /mnt/huge1 --file-prefix=abc -a 82:00.0 -- -i --socket-num=1 --no-numa testpmd>start - numactl --membind=1 ./dpdk-testpmd -l 33-34 -n 4 --legacy-mem --socket-mem 0,2048 --huge-dir /mnt/huge2 --file-prefix=bcd -a 82:00.1 -- -i --socket-num=1 --no-numa + numactl --membind=1 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 33-34 -n 4 --legacy-mem --socket-mem 0,2048 --huge-dir /mnt/huge2 --file-prefix=bcd -a 82:00.1 -- -i --socket-num=1 --no-numa testpmd>start 3. Send packets with packet generator, make sure two testpmd could receive and fwd packets correctly. @@ -90,7 +90,7 @@ Test Case 3: mount size greater than total hugepage size with single mount point 2. Bind one nic port to vfio-pci driver, launch testpmd:: - ./dpdk-testpmd -c 0x3 -n 4 --legacy-mem --huge-dir /mnt/huge --file-prefix=abc -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --legacy-mem --huge-dir /mnt/huge --file-prefix=abc -- -i testpmd>start 3. Send packets with packet generator, make sure testpmd could receive and fwd packets correctly. @@ -106,13 +106,13 @@ Test Case 4: mount size greater than total hugepage size with multiple mount poi 2. 
Bind one nic port to vfio-pci driver, launch testpmd:: - numactl --membind=0 ./dpdk-testpmd -c 0x3 -n 4 --legacy-mem --socket-mem 2048,0 --huge-dir /mnt/huge1 --file-prefix=abc -- -i --socket-num=0 --no-numa + numactl --membind=0 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --legacy-mem --socket-mem 2048,0 --huge-dir /mnt/huge1 --file-prefix=abc -- -i --socket-num=0 --no-numa testpmd>start - numactl --membind=0 ./dpdk-testpmd -c 0xc -n 4 --legacy-mem --socket-mem 2048,0 --huge-dir /mnt/huge2 --file-prefix=bcd -- -i --socket-num=0 --no-numa + numactl --membind=0 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc -n 4 --legacy-mem --socket-mem 2048,0 --huge-dir /mnt/huge2 --file-prefix=bcd -- -i --socket-num=0 --no-numa testpmd>start - numactl --membind=0 ./dpdk-testpmd -c 0x30 -n 4 --legacy-mem --socket-mem 1024,0 --huge-dir /mnt/huge3 --file-prefix=fgh -- -i --socket-num=0 --no-numa + numactl --membind=0 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --legacy-mem --socket-mem 1024,0 --huge-dir /mnt/huge3 --file-prefix=fgh -- -i --socket-num=0 --no-numa testpmd>start 3. Send packets with packet generator, check first and second testpmd will start correctly while third one will report error with not enough mem in socket 0. @@ -124,6 +124,6 @@ Test Case 5: run dpdk app in limited hugepages controlled by cgroup cgcreate -g hugetlb:/test-subgroup cgset -r hugetlb.1GB.limit_in_bytes=2147483648 test-subgroup - cgexec -g hugetlb:test-subgroup numactl -m 1 ./dpdk-testpmd -c 0x3000 -n 4 -- -i --socket-num=1 --no-numa + cgexec -g hugetlb:test-subgroup numactl -m 1 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 -- -i --socket-num=1 --no-numa 2. Start testpmd and send packets with packet generator, make sure testpmd could receive and fwd packets correctly. 
diff --git a/test_plans/loopback_virtio_user_server_mode_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_test_plan.rst index e936fad9..3ba8d983 100644 --- a/test_plans/loopback_virtio_user_server_mode_test_plan.rst +++ b/test_plans/loopback_virtio_user_server_mode_test_plan.rst @@ -44,14 +44,14 @@ Test Case 1: Basic test for packed ring server mode 1. Launch virtio-user as server mode:: - ./testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=1,packed_vq=1 -- -i --rxq=1 --txq=1 --no-numa >set fwd mac >start 2. Launch vhost as client mode:: - ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=/tmp/sock0,client=1,queues=1' -- -i --rxq=1 --txq=1 --nb-cores=1 >set fwd mac >start tx_first 32 @@ -65,14 +65,14 @@ Test Case 2: Basic test for split ring server mode 1. Launch virtio-user as server mode:: - ./testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=1 -- -i --rxq=1 --txq=1 --no-numa >set fwd mac >start 2. Launch vhost as client mode:: - ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=/tmp/sock0,client=1,queues=1' -- -i --rxq=1 --txq=1 --nb-cores=1 >set fwd mac >start tx_first 32 @@ -87,14 +87,14 @@ Test Case 3: loopback reconnect test with split ring mergeable path and server m 1. 
launch vhost as client mode with 8 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8 >set fwd mac >start 2. Launch virtio-user as server mode with 8 queues and check throughput can get expected:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8 >set fwd mac @@ -109,7 +109,7 @@ Test Case 3: loopback reconnect test with split ring mergeable path and server m 4. Relaunch vhost and send chain packets:: - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8 >set fwd mac >set txpkts 2000,2000,2000,2000 @@ -132,7 +132,7 @@ Test Case 3: loopback reconnect test with split ring mergeable path and server m 8. Relaunch virtio-user and send packets:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8 >set fwd mac @@ -164,14 +164,14 @@ Test Case 4: loopback reconnect test with split ring inorder mergeable path and 1. 
launch vhost as client mode with 2 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start 2. Launch virtio-user as server mode with 2 queues, check throughput can get expected:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -186,7 +186,7 @@ Test Case 4: loopback reconnect test with split ring inorder mergeable path and 4. Relaunch vhost and send packets:: - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >set txpkts 2000,2000,2000,2000 @@ -209,7 +209,7 @@ Test Case 4: loopback reconnect test with split ring inorder mergeable path and 8. Relaunch virtio-user and send packets:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=1\ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -241,14 +241,14 @@ Test Case 5: loopback reconnect test with split ring inorder non-mergeable path 1. 
launch vhost as client mode with 2 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start 2. Launch virtio-user as server mode with 2 queues check throughput can get expected:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -262,7 +262,7 @@ Test Case 5: loopback reconnect test with split ring inorder non-mergeable path 4. Relaunch vhost and send packets:: - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start tx_first 32 @@ -284,7 +284,7 @@ Test Case 5: loopback reconnect test with split ring inorder non-mergeable path 8. Relaunch virtio-user and send packets:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -314,14 +314,14 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv 1. 
launch vhost as client mode with 2 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start 2. Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -335,7 +335,7 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv 4. Relaunch vhost and send packets:: - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start tx_first 32 @@ -357,7 +357,7 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv 8. Relaunch virtio-user and send packets:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -387,14 +387,14 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m 1. 
launch vhost as client mode with 2 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start 2. Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -408,7 +408,7 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m 4. Relaunch vhost and send packets:: - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start tx_first 32 @@ -430,7 +430,7 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m 8. Relaunch virtio-user and send packets:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -460,14 +460,14 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server 1. 
launch vhost as client mode with 2 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start 2. Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -482,7 +482,7 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server 4. Relaunch vhost and send packets:: - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >set txpkts 2000,2000,2000,2000 @@ -505,7 +505,7 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server 8. Relaunch virtio-user and send packets:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -537,14 +537,14 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser 1. 
launch vhost as client mode with 2 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start 2. Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -558,7 +558,7 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser 4. Relaunch vhost and send packets:: - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start tx_first 32 @@ -580,7 +580,7 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser 8. Relaunch virtio-user and send packets:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -610,14 +610,14 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an 1. 
launch vhost as client mode with 8 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8 >set fwd mac >start 2. Launch virtio-user as server mode with 8 queues and check throughput can get expected:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8 >set fwd mac @@ -632,7 +632,7 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an 4. Relaunch vhost and send packets:: - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=8' -- -i --nb-cores=2 --rxq=8 --txq=8 >set fwd mac >set txpkts 2000,2000,2000,2000 @@ -655,7 +655,7 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an 8. Relaunch virtio-user and send packets:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=8 --txq=8 >set fwd mac @@ -687,14 +687,14 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat 1. 
launch vhost as client mode with 2 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start 2. Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -708,7 +708,7 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat 4. Relaunch vhost and send packets:: - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start tx_first 32 @@ -730,7 +730,7 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat 8. Relaunch virtio-user and send packets:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -760,14 +760,14 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve 1. 
launch vhost as client mode with 2 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 --log-level=pmd.net.vhost.driver,8 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --log-level=pmd.net.vhost.driver,8 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start 2. Launch virtio-user as server mode with 2 queues and check throughput can get expected:: - ./testpmd -n 4 -l 5-7 --log-level=pmd.net.virtio.driver,8 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --log-level=pmd.net.virtio.driver,8 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -781,7 +781,7 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve 4. Relaunch vhost and send packets:: - ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2 >set fwd mac >start tx_first 32 @@ -803,7 +803,7 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve 8. 
Relaunch virtio-user and send packets:: - ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 >set fwd mac @@ -832,11 +832,11 @@ Test Case 13: loopback packed ring and split ring mergeable path payload check t 1. launch vhost:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 --no-pci --file-prefix=vhost -n 4 --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1' -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 --no-pci --file-prefix=vhost -n 4 --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1' -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 2. Launch virtio-user with packed ring mergeable inorder path:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -858,7 +858,7 @@ Test Case 13: loopback packed ring and split ring mergeable path payload check t 7.
Quit and relaunch virtio with packed ring mergeable path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -866,7 +866,7 @@ Test Case 13: loopback packed ring and split ring mergeable path payload check t 9. Quit and relaunch virtio with split ring mergeable inorder path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>start @@ -874,7 +874,7 @@ Test Case 13: loopback packed ring and split ring mergeable path payload check t 11.
Quit and relaunch virtio with split ring mergeable path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -885,11 +885,11 @@ Test Case 14: loopback packed ring and split ring mergeable path cbdma test payl 1. bind 8 cbdma port to vfio-pci and launch vhost:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 -n 4 --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 -n 4 --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 2.
Launch virtio-user with packed ring mergeable inorder path:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>start @@ -911,7 +911,7 @@ Test Case 14: loopback packed ring and split ring mergeable path cbdma test payl 7. Quit and relaunch virtio with packed ring mergeable path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>start @@ -919,7 +919,7 @@ Test Case 14: loopback packed ring and split ring mergeable path cbdma test payl 9.
Quit and relaunch virtio with split ring mergeable inorder path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd csum testpmd>start @@ -927,7 +927,7 @@ Test Case 14: loopback packed ring and split ring mergeable path cbdma test payl 11. Quit and relaunch virtio with split ring mergeable path as below:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd> set fwd csum testpmd> start @@ -935,6 +935,6 @@ Test Case 14: loopback packed ring and split ring mergeable path cbdma test payl 13.
Quit and relaunch vhost w/ iova=pa:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 --file-prefix=vhost -n 4 --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' --iova=pa -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 --file-prefix=vhost -n 4 --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' --iova=pa -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024 14. rerun step3-5. diff --git a/test_plans/perf_virtio_user_loopback_test_plan.rst b/test_plans/perf_virtio_user_loopback_test_plan.rst index b1b24e7f..94f0c7f6 100644 --- a/test_plans/perf_virtio_user_loopback_test_plan.rst +++ b/test_plans/perf_virtio_user_loopback_test_plan.rst @@ -45,13 +45,13 @@ Test Case 1: loopback test with packed ring mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -73,13 +73,13 @@ Test Case 2: loopback test with packed ring non-mergeable path 1.
Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -101,13 +101,13 @@ Test Case 3: loopback test with packed ring inorder mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -129,13 +129,13 @@ Test Case 4: loopback test with packed ring inorder non-mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -157,13 +157,13 @@ Test Case 5: loopback test with split ring mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -185,13 +185,13 @@ Test Case 6: loopback test with split ring non-mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -213,13 +213,13 @@ Test Case 7: loopback test with split ring vector_rx path 1. 
Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 @@ -241,13 +241,13 @@ Test Case 8: loopback test with split ring inorder mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -269,13 +269,13 @@ Test Case 9: loopback test with split ring inorder non-mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -297,13 +297,13 @@ Test Case 10: loopback test with packed ring vectorized path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=0,vectorized=1 \ -- -i --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 diff --git a/test_plans/pvp_diff_qemu_version_test_plan.rst b/test_plans/pvp_diff_qemu_version_test_plan.rst index 125c9a65..6c2b712f 100644 --- a/test_plans/pvp_diff_qemu_version_test_plan.rst +++ b/test_plans/pvp_diff_qemu_version_test_plan.rst @@ -50,7 +50,7 @@ Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path 1. Bind one port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac @@ -73,7 +73,7 @@ Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path -vnc :10 4. 
On VM, bind virtio net to vfio-pci and run testpmd :: - ./testpmd -c 0x3 -n 3 -- -i \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \ --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -88,7 +88,7 @@ Test Case 2: PVP test with virtio 1.0 mergeable path 1. Bind one port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac @@ -112,7 +112,7 @@ Test Case 2: PVP test with virtio 1.0 mergeable path 3. On VM, bind virtio net to vfio-pci and run testpmd:: - ./testpmd -c 0x3 -n 3 -- -i \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \ --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start diff --git a/test_plans/pvp_multi_paths_performance_test_plan.rst b/test_plans/pvp_multi_paths_performance_test_plan.rst index 3a0a8a72..244f0e20 100644 --- a/test_plans/pvp_multi_paths_performance_test_plan.rst +++ b/test_plans/pvp_multi_paths_performance_test_plan.rst @@ -59,7 +59,7 @@ Test Case 1: pvp test with virtio 1.1 mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-3 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac @@ -67,7 +67,7 @@ Test Case 1: pvp test with virtio 1.1 mergeable path 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 >set fwd mac @@ -83,7 +83,7 @@ Test Case 2: pvp test with virtio 1.1 non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-3 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac @@ -91,7 +91,7 @@ Test Case 2: pvp test with virtio 1.1 non-mergeable path 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 >set fwd mac @@ -107,7 +107,7 @@ Test Case 3: pvp test with inorder mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-3 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac @@ -115,7 +115,7 @@ Test Case 3: pvp test with inorder mergeable path 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,mrg_rxbuf=1,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 >set fwd mac @@ -131,7 +131,7 @@ Test Case 4: pvp test with inorder non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac @@ -139,7 +139,7 @@ Test Case 4: pvp test with inorder non-mergeable path 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 >set fwd mac @@ -155,7 +155,7 @@ Test Case 5: pvp test with mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac @@ -163,7 +163,7 @@ Test Case 5: pvp test with mergeable path 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 >set fwd mac @@ -179,7 +179,7 @@ Test Case 6: pvp test with non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac @@ -187,7 +187,7 @@ Test Case 6: pvp test with non-mergeable path 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \ -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 >set fwd mac @@ -203,7 +203,7 @@ Test Case 7: pvp test with vectorized_rx path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac @@ -211,7 +211,7 @@ Test Case 7: pvp test with vectorized_rx path 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 >set fwd mac @@ -227,7 +227,7 @@ Test Case 8: pvp test with virtio 1.1 inorder mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-3 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac @@ -235,7 +235,7 @@ Test Case 8: pvp test with virtio 1.1 inorder mergeable path 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 >set fwd mac @@ -251,7 +251,7 @@ Test Case 9: pvp test with virtio 1.1 inorder non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-3 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac @@ -259,7 +259,7 @@ Test Case 9: pvp test with virtio 1.1 inorder non-mergeable path 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1 \ -- -i --rx-offloads=0x10 --nb-cores=1 --txd=1024 --rxd=1024 >set fwd mac @@ -275,14 +275,14 @@ Test Case 10: pvp test with virtio 1.1 vectorized path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 >set fwd mac diff --git a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst index b10c415a..36ca3af7 100644 --- a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst +++ b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst @@ -50,14 +50,14 @@ Test Case 1: vhost single core performance test with virtio 1.1 mergeable path 1. 
Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -l 7-9 -n 4 --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-9 -n 4 --file-prefix=virtio \ --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024 >set fwd io @@ -71,14 +71,14 @@ Test Case 2: vhost single core performance test with virtio 1.1 non-mergeable pa 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -l 7-9 -n 4 --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-9 -n 4 --file-prefix=virtio \ --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024 >set fwd io @@ -92,14 +92,14 @@ Test Case 3: vhost single core performance test with inorder mergeable path 1. 
Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -l 7-9 -n 4 --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-9 -n 4 --file-prefix=virtio \ --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,in_order=1,mrg_rxbuf=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024 >set fwd io @@ -113,14 +113,14 @@ Test Case 4: vhost single core performance test with inorder non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -l 7-9 -n 4 --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-9 -n 4 --file-prefix=virtio \ --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,in_order=1,mrg_rxbuf=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024 >set fwd io @@ -134,14 +134,14 @@ Test Case 5: vhost single core performance test with mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \ --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 2. 
Launch virtio-user by below command::

-    ./testpmd -l 7-9 -n 4 --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-9 -n 4 --file-prefix=virtio \
     --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,in_order=0,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
     >set fwd io
@@ -155,14 +155,14 @@ Test Case 6: vhost single core performance test with non-mergeable path

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -l 7-9 -n 4 --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-9 -n 4 --file-prefix=virtio \
     --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,in_order=0,mrg_rxbuf=0,vectorized=1 \
     -- -i --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
     >set fwd io
@@ -176,14 +176,14 @@ Test Case 7: vhost single core performance test with vectorized_rx path

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -l 7-9 -n 4 --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-9 -n 4 --file-prefix=virtio \
     --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,in_order=0,mrg_rxbuf=0,vectorized=1 \
     -- -i --nb-cores=2 --txd=1024 --rxd=1024
     >set fwd io
@@ -197,14 +197,14 @@ Test Case 8: vhost single core performance test with virtio 1.1 inorder mergeabl

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -l 7-9 -n 4 --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-9 -n 4 --file-prefix=virtio \
     --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
     >set fwd io
@@ -218,14 +218,14 @@ Test Case 9: vhost single core performance test with virtio 1.1 inorder non-merg

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -l 7-9 -n 4 --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-9 -n 4 --file-prefix=virtio \
     --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=0 \
     -- -i --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
     >set fwd io
@@ -239,14 +239,14 @@ Test Case 10: vhost single core performance test with virtio 1.1 vectorized path

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -l 7-9 -n 4 --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-9 -n 4 --file-prefix=virtio --force-max-simd-bitwidth=512 \
     --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
     >set fwd io
diff --git a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
index ea7ff698..481ef47d 100644
--- a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
@@ -50,13 +50,13 @@ Test Case 1: virtio single core performance test with virtio 1.1 mergeable path

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -70,14 +70,14 @@ Test Case 2: virtio single core performance test with virtio 1.1 non-mergeable p

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -91,14 +91,14 @@ Test Case 3: virtio single core performance test with inorder mergeable path

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -112,14 +112,14 @@ Test Case 4: virtio single core performance test with inorder non-mergeable path

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -133,14 +133,14 @@ Test Case 5: virtio single core performance test with mergeable path

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -154,14 +154,14 @@ Test Case 6: virtio single core performance test with non-mergeable path

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
     -- -i --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -175,14 +175,14 @@ Test Case 7: virtio single core performance test with vectorized_rx path

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -196,14 +196,14 @@ Test Case 8: virtio single core performance test with virtio 1.1 inorder mergeab

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -217,14 +217,14 @@ Test Case 9: virtio single core performance test with virtio 1.1 inorder non-mer

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=0 \
     -- -i --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -238,14 +238,14 @@ Test Case 10: virtio single core performance test with virtio 1.1 vectorized pat

 1. Bind one port to vfio-pci, then launch vhost by below command::

     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start

 2. Launch virtio-user by below command::

-    ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
diff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
index ddf8beca..f0b50143 100644
--- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
@@ -51,7 +51,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path

 1. Bind one port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -73,7 +73,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path

 3. On VM, bind virtio net to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 3 -- -i \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -98,7 +98,7 @@ Test Case 2: pvp test with virtio 0.95 normal path

 1. Bind one port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -119,7 +119,7 @@ Test Case 2: pvp test with virtio 0.95 normal path

 3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::

-    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip \
     --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -144,7 +144,7 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path

 1. Bind one port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -165,7 +165,7 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path

 3. On VM, bind virtio net to vfio-pci and run testpmd without ant tx-offloads::

-    ./testpmd -c 0x3 -n 3 -- -i \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -190,7 +190,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path

 1. Bind one port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -211,7 +211,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path

 3. On VM, bind virtio net to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 3 -- -i \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -236,7 +236,7 @@ Test Case 5: pvp test with virtio 1.0 normal path

 1. Bind one port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -257,7 +257,7 @@ Test Case 5: pvp test with virtio 1.0 normal path

 3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::

-    ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip\
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip\
     --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -282,7 +282,7 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path

 1. Bind one port to vfio-pci, then launch testpmd by below command::

     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -303,7 +303,7 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path

 3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::

-    ./testpmd -c 0x3 -n 3 -- -i \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
diff --git a/test_plans/pvp_share_lib_test_plan.rst b/test_plans/pvp_share_lib_test_plan.rst
index a1a6c56f..c5a3c9b8 100644
--- a/test_plans/pvp_share_lib_test_plan.rst
+++ b/test_plans/pvp_share_lib_test_plan.rst
@@ -56,13 +56,13 @@ Test Case1: Vhost/virtio-user pvp share lib test with niantic

 4. Bind niantic port with vfio-pci, use option ``-d`` to load the dynamic pmd when launch vhost::

-    ./testpmd -c 0x03 -n 4 -d librte_net_vhost.so.21.0 -d librte_net_i40e.so.21.0 -d librte_mempool_ring.so.21.0 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 -d librte_net_vhost.so.21.0 -d librte_net_i40e.so.21.0 -d librte_mempool_ring.so.21.0 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i
     testpmd>start

 5. Launch virtio-user::

-    ./testpmd -c 0x0c -n 4 -d librte_net_virtio.so.21.0 -d librte_mempool_ring.so.21.0 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x0c -n 4 -d librte_net_virtio.so.21.0 -d librte_mempool_ring.so.21.0 \
     --no-pci --file-prefix=virtio --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net -- -i
     testpmd>start
@@ -77,6 +77,6 @@ Similar as Test Case1, all steps are similar except step 4:

 4. Bind fortville port with vfio-pci, use option ``-d`` to load the dynamic pmd when launch vhost::

-    ./testpmd -c 0x03 -n 4 -d librte_net_vhost.so -d librte_net_i40e.so -d librte_mempool_ring.so \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 -d librte_net_vhost.so -d librte_net_i40e.so -d librte_mempool_ring.so \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i
     testpmd>start
diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
index f13bbb0a..a70b6d3b 100644
--- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
+++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
@@ -61,7 +61,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG

 1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::

-    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
@@ -81,7 +81,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG

 3. On VM, bind virtio net to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -92,7 +92,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG

 5. On host, quit vhost-user, then re-launch the vhost-user with below command::

     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
@@ -106,7 +106,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG

 1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::

-    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
@@ -126,7 +126,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG

 3. On VM, bind virtio net to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -151,7 +151,7 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from

 1. Bind one port to vfio-pci, launch the vhost by below command::

-    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -183,13 +183,13 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from

 3. On VM1, bind virtio1 to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 4. On VM2, bind virtio2 to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -200,7 +200,7 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from

 6. On host, quit vhost-user, then re-launch the vhost-user with below command::

     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -213,7 +213,7 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from

 1. Bind one port to vfio-pci, launch the vhost by below command::

-    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -245,13 +245,13 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from

 3. On VM1, bind virtio1 to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 4. On VM2, bind virtio2 to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -280,7 +280,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2

 1. Launch the vhost by below commands, enable the client mode and tso::

-    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start

 3. Launch VM1 and VM2::
@@ -324,7 +324,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2

 6. Kill the vhost-user, then re-launch the vhost-user::

     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start

 7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue.
@@ -335,7 +335,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2

 1. Launch the vhost by below commands, enable the client mode and tso::

-    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start

 3. Launch VM1 and VM2::
@@ -394,7 +394,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG

 1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::

-    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
@@ -414,7 +414,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG

 3. On VM, bind virtio net to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -425,7 +425,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG

 5. On host, quit vhost-user, then re-launch the vhost-user with below command::

     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
@@ -439,7 +439,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG

 1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::

-    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
@@ -459,7 +459,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG

 3. On VM, bind virtio net to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -484,7 +484,7 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro

 1. Bind one port to vfio-pci, launch the vhost by below command::

-    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -516,13 +516,13 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro

 3. On VM1, bind virtio1 to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 4. On VM2, bind virtio2 to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -533,7 +533,7 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro

 6. On host, quit vhost-user, then re-launch the vhost-user with below command::

     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -546,7 +546,7 @@ Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro

 1. Bind one port to vfio-pci, launch the vhost by below command::

-    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -578,13 +578,13 @@ Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro

 3. On VM1, bind virtio1 to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

 4. On VM2, bind virtio2 to vfio-pci and run testpmd::

-    ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -613,7 +613,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2

 1. Launch the vhost by below commands, enable the client mode and tso::

-    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start

 3. Launch VM1 and VM2::
@@ -657,7 +657,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2

 6. Kill the vhost-user, then re-launch the vhost-user::

     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start

 7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue.
@@ -668,7 +668,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 1. Launch the vhost by below commands, enable the client mode and tso:: - ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 3. Launch VM1 and VM2:: diff --git a/test_plans/pvp_virtio_bonding_test_plan.rst b/test_plans/pvp_virtio_bonding_test_plan.rst index 2434802c..b8449cba 100644 --- a/test_plans/pvp_virtio_bonding_test_plan.rst +++ b/test_plans/pvp_virtio_bonding_test_plan.rst @@ -52,7 +52,7 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG 1. Bind one port to vfio-pci,launch vhost by below command:: - ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1' -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1' -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -81,11 +81,11 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG 3. 
On VM, bind four virtio-net devices to vfio-pci:: - ./dpdk-devbind.py -b vfio-pci xx:xx.x xx:xx.x xx:xx.x xx:xx.x + ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x xx:xx.x xx:xx.x xx:xx.x 4. Launch testpmd in VM:: - ./testpmd -l 0-5 -n 4 -- -i --port-topology=chained --nb-cores=5 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 0-5 -n 4 -- -i --port-topology=chained --nb-cores=5 5. Create one bonded device in mode 0 on socket 0:: @@ -114,7 +114,7 @@ Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 t 1. Bind one port to vfio-pci, launch vhost by below command:: - ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1' -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1' -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -143,11 +143,11 @@ Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 t 3. On VM, bind four virtio-net devices to vfio-pci:: - ./dpdk-devbind.py -b vfio-pci xx:xx.x xx:xx.x xx:xx.x xx:xx.x + ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x xx:xx.x xx:xx.x xx:xx.x 4. Launch testpmd in VM:: - ./testpmd -l 0-5 -n 4 -- -i --port-topology=chained --nb-cores=5 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 0-5 -n 4 -- -i --port-topology=chained --nb-cores=5 5. 
Create bonding device with mode 1 to mode 6:: diff --git a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst index a4ac1f18..1160e384 100644 --- a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst +++ b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst @@ -46,12 +46,12 @@ Test Case1: Basic test for virtio-user split ring 2M hugepage 2. Bind one port to vfio-pci, launch vhost:: - ./testpmd -l 3-4 -n 4 --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --file-prefix=vhost \ --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i 3. Launch virtio-user with 2M hugepage:: - ./testpmd -l 5-6 -n 4 --no-pci --single-file-segments --file-prefix=virtio-user \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --single-file-segments --file-prefix=virtio-user \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,queues=1 -- -i @@ -66,12 +66,12 @@ Test Case1: Basic test for virtio-user packed ring 2M hugepage 2. Bind one port to vfio-pci, launch vhost:: - ./testpmd -l 3-4 -n 4 --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 --file-prefix=vhost \ --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i 3. 
Launch virtio-user with 2M hugepage:: - ./testpmd -l 5-6 -n 4 --no-pci --single-file-segments --file-prefix=virtio-user \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-pci --single-file-segments --file-prefix=virtio-user \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,packed_vq=1,queues=1 -- -i diff --git a/test_plans/pvp_virtio_user_4k_pages_test_plan.rst b/test_plans/pvp_virtio_user_4k_pages_test_plan.rst index ea3dcc88..a34a4422 100644 --- a/test_plans/pvp_virtio_user_4k_pages_test_plan.rst +++ b/test_plans/pvp_virtio_user_4k_pages_test_plan.rst @@ -47,7 +47,7 @@ Test Case1: Basic test vhost/virtio-user split ring with 4K-pages modprobe vfio-pci ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x - ./testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \ --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1' -- -i --no-numa --socket-num=0 testpmd>start @@ -58,7 +58,7 @@ Test Case1: Basic test vhost/virtio-user split ring with 4K-pages 3. Launch virtio-user with 4K-pages:: - ./testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1 -- -i testpmd>start @@ -73,7 +73,7 @@ Test Case2: Basic test vhost/virtio-user packed ring with 4K-pages modprobe vfio-pci ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x - ./testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \ --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1' -- -i --no-numa --socket-num=0 testpmd>start @@ -84,7 +84,7 @@ Test Case2: Basic test vhost/virtio-user packed ring with 4K-pages 3. 
Launch virtio-user with 4K-pages:: - ./testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i testpmd>start diff --git a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst index 629f6f42..18b476f6 100644 --- a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst +++ b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst @@ -54,14 +54,14 @@ Test Case 1: pvp 2 queues test with packed ring mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=255 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255 @@ -92,14 +92,14 @@ Test Case 2: pvp 2 queues test with packed ring non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=255 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255 @@ -125,14 +125,14 @@ Test Case 3: pvp 2 queues test with split ring inorder mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -158,14 +158,14 @@ Test Case 4: pvp 2 queues test with split ring inorder non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=0,vectorized=1 \ -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -191,14 +191,14 @@ Test Case 5: pvp 2 queues test with split ring mergeable path 1. 
Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -224,14 +224,14 @@ Test Case 6: pvp 2 queues test with split ring non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -257,14 +257,14 @@ Test Case 7: pvp 2 queues test with split ring vector_rx path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \ -- -i --tx-offloads=0x0 --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -290,14 +290,14 @@ Test Case 8: pvp 2 queues test with packed ring inorder mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=255 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255 @@ -323,14 +323,14 @@ Test Case 9: pvp 2 queues test with packed ring inorder non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255 diff --git a/test_plans/vhost_1024_ethports_test_plan.rst b/test_plans/vhost_1024_ethports_test_plan.rst index a9ef4fb5..a95042f2 100644 --- a/test_plans/vhost_1024_ethports_test_plan.rst +++ b/test_plans/vhost_1024_ethports_test_plan.rst @@ -49,7 +49,7 @@ Test Case1: Basic test for launch vhost with 1023 ethports 2. Launch vhost with 1023 vdevs:: - ./testpmd -c 0x3000 -n 4 --file-prefix=vhost --vdev 'eth_vhost0,iface=vhost-net,queues=1' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 --file-prefix=vhost --vdev 'eth_vhost0,iface=vhost-net,queues=1' \ --vdev 'eth_vhost1,iface=vhost-net1,queues=1' ... -- -i # only list two vdevs, here omit the other 1021 vdevs, from eth_vhost2 to eth_vhost1022 3. 
Change "CONFIG_RTE_MAX_ETHPORTS" back to 32 in DPDK configure file:: diff --git a/test_plans/vhost_event_idx_interrupt_test_plan.rst b/test_plans/vhost_event_idx_interrupt_test_plan.rst index 111de954..4873ba54 100644 --- a/test_plans/vhost_event_idx_interrupt_test_plan.rst +++ b/test_plans/vhost_event_idx_interrupt_test_plan.rst @@ -53,7 +53,7 @@ Test Case 1: wake up split ring vhost-user core with event idx interrupt mode 1. Launch l3fwd-power example app with client mode:: - ./l3fwd-power -l 1 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \ @@ -76,7 +76,7 @@ Test Case 1: wake up split ring vhost-user core with event idx interrupt mode 3. Relauch l3fwd-power sample for port up:: - ./l3fwd-power -l 1 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \ @@ -98,7 +98,7 @@ Test Case 2: wake up split ring vhost-user cores with event idx interrupt mode 1 1. Launch l3fwd-power example app with client mode:: - ./l3fwd-power -l 1-16 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1' \ @@ -121,7 +121,7 @@ Test Case 2: wake up split ring vhost-user cores with event idx interrupt mode 1 3. Relauch l3fwd-power sample for port up:: - ./l3fwd-power -l 1-16 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1' \ @@ -165,7 +165,7 @@ Test Case 3: wake up split ring vhost-user cores by multi virtio-net in VMs with 1. 
Launch l3fwd-power example app with client mode:: - ./l3fwd-power -l 1-2 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \ @@ -198,7 +198,7 @@ Test Case 3: wake up split ring vhost-user cores by multi virtio-net in VMs with 3. Relaunch l3fwd-power sample for port up:: - ./l3fwd-power -l 1-2 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \ @@ -228,7 +228,7 @@ Test Case 4: wake up packed ring vhost-user core with event idx interrupt mode 1. Launch l3fwd-power example app with client mode:: - ./l3fwd-power -l 1 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \ @@ -250,7 +250,7 @@ Test Case 4: wake up packed ring vhost-user core with event idx interrupt mode 3. Relaunch l3fwd-power sample for port up:: - ./l3fwd-power -l 1 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \ @@ -272,7 +272,7 @@ Test Case 5: wake up packed ring vhost-user cores with event idx interrupt mode 1. Launch l3fwd-power example app with client mode:: - ./l3fwd-power -l 1-16 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1' \ @@ -294,7 +294,7 @@ Test Case 5: wake up packed ring vhost-user cores with event idx interrupt mode 3. Relaunch l3fwd-power sample for port up:: - ./l3fwd-power -l 1-16 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1' \ @@ -338,7 +338,7 @@ Test Case 6: wake up packed ring vhost-user cores by multi virtio-net in VMs wit 1. 
Launch l3fwd-power example app with client mode:: - ./l3fwd-power -l 1-2 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \ @@ -371,7 +371,7 @@ Test Case 6: wake up packed ring vhost-user cores by multi virtio-net in VMs wit 3. Relaunch l3fwd-power sample for port up:: - ./l3fwd-power -l 1-2 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 \ -n 4 --no-pci\ --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1' \ @@ -401,7 +401,7 @@ Test Case 7: wake up split ring vhost-user cores with event idx interrupt mode a 1. Bind 16 cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode:: - ./l3fwd-power -l 1-16 -n 4 --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \ -- -p 0x1 --parse-ptype 1 \ --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)" @@ -421,7 +421,7 @@ Test Case 7: wake up split ring vhost-user cores with event idx interrupt mode a 3. 
Relaunch l3fwd-power sample for port up:: - ./l3fwd-power -l 1-16 -n 4 --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \ -- -p 0x1 --parse-ptype 1 \ --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)" @@ -462,7 +462,7 @@ Test Case 8: wake up split ring vhost-user cores by multi virtio-net in VMs with 1. Bind two cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode:: - ./l3fwd-power -l 1-2 -n 4 --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \ --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \ -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)" @@ -491,7 +491,7 @@ Test Case 8: wake up split ring vhost-user cores by multi virtio-net in VMs with 3. Relaunch l3fwd-power sample for port up:: - ./l3fwd-power -l 1-2 -n 4 --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \ --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \ -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)" @@ -517,7 +517,7 @@ Test Case 9: wake up packed ring vhost-user cores with event idx interrupt mode 1. 
Bind 16 cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode:: - ./l3fwd-power -l 1-16 -n 4 --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \ -- -p 0x1 --parse-ptype 1 \ --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)" @@ -537,7 +537,7 @@ Test Case 9: wake up packed ring vhost-user cores with event idx interrupt mode 3. Relaunch l3fwd-power sample for port up:: - ./l3fwd-power -l 1-16 -n 4 --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \ -- -p 0x1 --parse-ptype 1 \ --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)" @@ -578,7 +578,7 @@ Test Case 10: wake up packed ring vhost-user cores by multi virtio-net in VMs wi 1. 
Bind two cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode:: - ./l3fwd-power -l 1-2 -n 4 --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \ --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \ -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)" @@ -607,7 +607,7 @@ Test Case 10: wake up packed ring vhost-user cores by multi virtio-net in VMs wi 3. Relaunch l3fwd-power sample for port up:: - ./l3fwd-power -l 1-2 -n 4 --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \ --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \ --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \ -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)" diff --git a/test_plans/vhost_multi_queue_qemu_test_plan.rst b/test_plans/vhost_multi_queue_qemu_test_plan.rst index 445848ff..6815b6b6 100644 --- a/test_plans/vhost_multi_queue_qemu_test_plan.rst +++ b/test_plans/vhost_multi_queue_qemu_test_plan.rst @@ -45,7 +45,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG 1. Bind one port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \ -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac @@ -63,7 +63,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG -vnc :2 -daemonize 3. 
On VM, bind virtio net to vfio-pci and run testpmd :: - ./testpmd -c 0x07 -n 3 -- -i \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x07 -n 3 -- -i \ --rxq=2 --txq=2 --txqflags=0xf01 --rss-ip --nb-cores=2 testpmd>set fwd mac testpmd>start @@ -88,7 +88,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG ensure the vhost using 2 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \ -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac @@ -109,7 +109,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG 3. On VM, bind virtio net to vfio-pci and run testpmd, using one queue for testing at first:: - ./testpmd -c 0x7 -n 3 -- -i --rxq=1 --txq=1 --tx-offloads=0x0 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -n 3 -- -i --rxq=1 --txq=1 --tx-offloads=0x0 \ --rss-ip --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -164,7 +164,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG ensure the vhost using 2 queues:: rm -rf vhost-net* - ./testpmd -c 0xe -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \ -i --nb-cores=1 --rxq=1 --txq=1 testpmd>set fwd mac @@ -185,7 +185,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG 3. On VM, bind virtio net to vfio-pci and run testpmd, using one queue for testing at first:: - ./testpmd -c 0x7 -n 4 -- -i --rxq=2 --txq=2 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -n 4 -- -i --rxq=2 --txq=2 \ --tx-offloads=0x0 --rss-ip --nb-cores=2 testpmd>set fwd mac testpmd>start diff --git a/test_plans/vhost_pmd_xstats_test_plan.rst b/test_plans/vhost_pmd_xstats_test_plan.rst index 8caee819..effac670 100644 --- a/test_plans/vhost_pmd_xstats_test_plan.rst +++ b/test_plans/vhost_pmd_xstats_test_plan.rst @@ -53,14 +53,14 @@ Test Case 1: xstats test with packed ring mergeable path 1. 
Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -83,14 +83,14 @@ Test Case 2: xstats test with packed ring non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -111,14 +111,14 @@ Test Case 3: xstats stability test with split ring inorder mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -141,14 +141,14 @@ Test Case 4: xstats test with split ring inorder non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -169,14 +169,14 @@ Test Case 5: xstats test with split ring mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -197,14 +197,14 @@ Test Case 6: xstats test with split ring non-mergeable path 1. 
Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -225,14 +225,14 @@ Test Case 7: xstats test with split ring vector_rx path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \ -- -i --tx-offloads=0x0 --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -253,14 +253,14 @@ Test Case 8: xstats test with packed ring inorder mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -283,14 +283,14 @@ Test Case 9: xstats test with packed ring inorder non-mergeable path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -311,14 +311,14 @@ Test Case 10: xstats test with packed ring vectorized path 1. Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --rss-ip --nb-cores=2 --rxq=2 --txq=2 @@ -339,14 +339,14 @@ Test Case 11: xstats test with packed ring vectorized path with ring size is not 1. 
Bind one port to vfio-pci, then launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-4 \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2 testpmd>set fwd mac testpmd>start 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-7 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-7 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \ -- -i --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255 diff --git a/test_plans/vhost_user_interrupt_test_plan.rst b/test_plans/vhost_user_interrupt_test_plan.rst index 0fb2f3b6..fd9394c3 100644 --- a/test_plans/vhost_user_interrupt_test_plan.rst +++ b/test_plans/vhost_user_interrupt_test_plan.rst @@ -51,12 +51,12 @@ Test Case1: Wake up split ring vhost-user core with l3fwd-power sample 1. Launch virtio-user with server mode:: - ./testpmd -l 7-8 -n 4 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-8 -n 4 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=1 -- -i 2. Build l3fwd-power sample and launch l3fwd-power with a virtual vhost device:: - ./l3fwd-power -l 0-3 -n 4 --no-pci \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 0-3 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1,client=1' -- -p 0x1 --parse-ptype 1 --config "(0,0,2)" 3. Send packet by testpmd, check vhost-user core will keep wakeup status:: @@ -71,12 +71,12 @@ Test Case2: Wake up split ring vhost-user cores with l3fwd-power sample when mul 1. 
Launch virtio-user with server mode:: - ./testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4 -- -i --rxq=4 --txq=4 --rss-ip 2. Build l3fwd-power sample and launch l3fwd-power with a virtual vhost device:: - ./l3fwd-power -l 9-12 -n 4 --no-pci --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 9-12 -n 4 --no-pci --log-level=9 \ --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1' -- -p 0x1 --parse-ptype 1 \ --config "(0,0,9),(0,1,10),(0,2,11),(0,3,12)" @@ -92,12 +92,12 @@ Test Case3: Wake up packed ring vhost-user core with l3fwd-power sample 1. Launch virtio-user with server mode:: - ./testpmd -l 7-8 -n 4 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 7-8 -n 4 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=1,packed_vq=1 -- -i 2. Build l3fwd-power sample and launch l3fwd-power with a virtual vhost device:: - ./l3fwd-power -l 0-3 -n 4 --no-pci \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 0-3 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1,client=1' -- -p 0x1 --parse-ptype 1 --config "(0,0,2)" 3. Send packet by testpmd, check vhost-user core will keep wakeup status:: @@ -112,12 +112,12 @@ Test Case4: Wake up packed ring vhost-user cores with l3fwd-power sample when m 1. Launch virtio-user with server mode:: - ./testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4,packed_vq=1,mrg_rxbuf=0 -- -i --rxq=4 --txq=4 --rss-ip 2. 
Build l3fwd-power sample and launch l3fwd-power with a virtual vhost device:: - ./l3fwd-power -l 9-12 -n 4 --no-pci --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 9-12 -n 4 --no-pci --log-level=9 \ --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1' -- -p 0x1 --parse-ptype 1 \ --config "(0,0,9),(0,1,10),(0,2,11),(0,3,12)" @@ -133,12 +133,12 @@ Test Case5: Wake up split ring vhost-user cores with l3fwd-power sample when mul 1. Launch virtio-user with server mode:: - ./testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4 -- -i --rxq=4 --txq=4 --rss-ip 2. Bind 4 cbdma ports to vfio-pci driver, then launch l3fwd-power with a virtual vhost device:: - ./l3fwd-power -l 9-12 -n 4 --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 9-12 -n 4 --log-level=9 \ --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' -- -p 0x1 --parse-ptype 1 \ --config "(0,0,9),(0,1,10),(0,2,11),(0,3,12)" @@ -154,12 +154,12 @@ Test Case6: Wake up packed ring vhost-user cores with l3fwd-power sample when mu 1. Launch virtio-user with server mode:: - ./testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4,packed_vq=1 -- -i --rxq=4 --txq=4 --rss-ip 2. 
Bind 4 cbdma ports to vfio-pci driver, then launch l3fwd-power with a virtual vhost device:: - ./l3fwd-power -l 9-12 -n 4 --log-level=9 \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 9-12 -n 4 --log-level=9 \ --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' -- -p 0x1 --parse-ptype 1 \ --config "(0,0,9),(0,1,10),(0,2,11),(0,3,12)" diff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst index 276de3b9..9083cd5d 100644 --- a/test_plans/vhost_user_live_migration_test_plan.rst +++ b/test_plans/vhost_user_live_migration_test_plan.rst @@ -76,8 +76,8 @@ On host server side: 2. Bind host port to vfio-pci and start testpmd with vhost port:: - host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1 - host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + host server# ./usertools/dpdk-devbind.py -b vfio-pci 82:00.1 + host server# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i host server# testpmd>start 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:: @@ -99,8 +99,8 @@ On the backup server, run the vhost testpmd on the host and launch VM: backup server # mkdir /mnt/huge backup server # mount -t hugetlbfs hugetlbfs /mnt/huge - backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0 - backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + backup server # ./usertools/dpdk-devbind.py -b vfio-pci 82:00.0 + backup server # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i backup server # testpmd>start 5. 
Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server:: @@ -125,13 +125,14 @@ On the backup server, run the vhost testpmd on the host and launch VM: 7. Run testpmd in VM:: host VM# cd /root/ - host VM# make -j 110 install T=x86_64-native-linuxapp-gcc + host VM# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc + host VM# ninja -C x86_64-native-linuxapp-gcc host VM# modprobe uio host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/vfio-pci.ko - host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0 + host VM# ./usertools/dpdk-devbind.py --bind=vfio-pci 00:03.0 host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages host VM# screen -S vm - host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i + host VM# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i host VM# testpmd>set fwd rxonly host VM# testpmd>set verbose 1 host VM# testpmd>start @@ -176,8 +177,8 @@ On host server side: 2. Bind host port to vfio-pci and start testpmd with vhost port; note: do not start the vhost port before launching qemu:: - host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1 - host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + host server# ./usertools/dpdk-devbind.py -b vfio-pci 82:00.1 + host server# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i 3.
Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:: @@ -198,8 +199,8 @@ On the backup server, run the vhost testpmd on the host and launch VM: backup server # mkdir /mnt/huge backup server # mount -t hugetlbfs hugetlbfs /mnt/huge - backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0 - backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + backup server # ./usertools/dpdk-devbind.py -b vfio-pci 82:00.0 + backup server # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server:: @@ -223,13 +224,14 @@ On the backup server, run the vhost testpmd on the host and launch VM: 7. Run testpmd in VM:: host VM# cd /root/ - host VM# make -j 110 install T=x86_64-native-linuxapp-gcc + host VM# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc + host VM# ninja -C x86_64-native-linuxapp-gcc host VM# modprobe uio host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/vfio-pci.ko - host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0 + host VM# ./usertools/dpdk-devbind.py --bind=vfio-pci 00:03.0 host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages host VM# screen -S vm - host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i + host VM# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i host VM# testpmd>set fwd rxonly host VM# testpmd>set verbose 1 host VM# testpmd>start @@ -276,8 +278,8 @@ On host server side: 2. 
Bind host port to vfio-pci and start testpmd with vhost port:: - host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1 - host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + host server# ./usertools/dpdk-devbind.py -b vfio-pci 82:00.1 + host server# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i host server# testpmd>start 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:: @@ -299,8 +301,8 @@ On the backup server, run the vhost testpmd on the host and launch VM: backup server # mkdir /mnt/huge backup server # mount -t hugetlbfs hugetlbfs /mnt/huge - backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0 - backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + backup server # ./usertools/dpdk-devbind.py -b vfio-pci 82:00.0 + backup server # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i backup server # testpmd>start 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server:: @@ -364,8 +366,8 @@ On host server side: 2. Bind host port to vfio-pci and start testpmd with vhost port:: - host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1 - host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 + host server# ./usertools/dpdk-devbind.py -b vfio-pci 82:00.1 + host server# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 host server# testpmd>start 3. 
Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:: @@ -387,8 +389,8 @@ On the backup server, run the vhost testpmd on the host and launch VM: backup server # mkdir /mnt/huge backup server # mount -t hugetlbfs hugetlbfs /mnt/huge - backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0 - backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 + backup server # ./usertools/dpdk-devbind.py -b vfio-pci 82:00.0 + backup server#./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 backup server # testpmd>start 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server:: @@ -456,8 +458,8 @@ On host server side: 2. Bind host port to vfio-pci and start testpmd with vhost port:: - host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1 - host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + host server# ./usertools/dpdk-devbind.py -b vfio-pci 82:00.1 + host server# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i host server# testpmd>start 3. 
Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:: @@ -479,8 +481,8 @@ On the backup server, run the vhost testpmd on the host and launch VM: backup server # mkdir /mnt/huge backup server # mount -t hugetlbfs hugetlbfs /mnt/huge - backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0 - backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + backup server # ./usertools/dpdk-devbind.py -b vfio-pci 82:00.0 + backup server # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i backup server # testpmd>start 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server:: @@ -505,13 +507,14 @@ On the backup server, run the vhost testpmd on the host and launch VM: 7. Run testpmd in VM:: host VM# cd /root/ - host VM# make -j 110 install T=x86_64-native-linuxapp-gcc + host VM# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc + host VM# ninja -C x86_64-native-linuxapp-gcc host VM# modprobe uio host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/vfio-pci.ko - host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0 + host VM# ./usertools/dpdk-devbind.py --bind=vfio-pci 00:03.0 host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages host VM# screen -S vm - host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i + host VM# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i host VM# testpmd>set fwd rxonly host VM# testpmd>set verbose 1 host VM# testpmd>start @@ -556,8 +559,8 @@ On host server side: 2. 
Bind host port to vfio-pci and start testpmd with vhost port; note: do not start the vhost port before launching qemu:: - host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1 - host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + host server# ./usertools/dpdk-devbind.py -b vfio-pci 82:00.1 + host server# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:: @@ -578,8 +581,8 @@ On the backup server, run the vhost testpmd on the host and launch VM: backup server # mkdir /mnt/huge backup server # mount -t hugetlbfs hugetlbfs /mnt/huge - backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0 - backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + backup server # ./usertools/dpdk-devbind.py -b vfio-pci 82:00.0 + backup server # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server:: @@ -603,13 +606,14 @@ On the backup server, run the vhost testpmd on the host and launch VM: 7.
Run testpmd in VM:: host VM# cd /root/ - host VM# make -j 110 install T=x86_64-native-linuxapp-gcc + host VM# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc + host VM# ninja -C x86_64-native-linuxapp-gcc host VM# modprobe uio host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/vfio-pci.ko - host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0 + host VM# ./usertools/dpdk-devbind.py --bind=vfio-pci 00:03.0 host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages host VM# screen -S vm - host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i + host VM# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i host VM# testpmd>set fwd rxonly host VM# testpmd>set verbose 1 host VM# testpmd>start @@ -656,8 +660,8 @@ On host server side: 2. Bind host port to vfio-pci and start testpmd with vhost port:: - host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1 - host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + host server# ./usertools/dpdk-devbind.py -b vfio-pci 82:00.1 + host server# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i host server# testpmd>start 3. 
Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:: @@ -679,8 +683,8 @@ On the backup server, run the vhost testpmd on the host and launch VM: backup server # mkdir /mnt/huge backup server # mount -t hugetlbfs hugetlbfs /mnt/huge - backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0 - backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i + backup server # ./usertools/dpdk-devbind.py -b vfio-pci 82:00.0 + backup server # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i backup server # testpmd>start 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server:: @@ -744,8 +748,8 @@ On host server side: 2. Bind host port to vfio-pci and start testpmd with vhost port:: - host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1 - host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 + host server# ./usertools/dpdk-devbind.py -b vfio-pci 82:00.1 + host server# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 host server# testpmd>start 3. 
Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:: @@ -767,8 +771,8 @@ On the backup server, run the vhost testpmd on the host and launch VM: backup server # mkdir /mnt/huge backup server # mount -t hugetlbfs hugetlbfs /mnt/huge - backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0 - backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 + backup server # ./usertools/dpdk-devbind.py -b vfio-pci 82:00.0 + backup server#./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 backup server # testpmd>start 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server:: diff --git a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst index 9a108b3d..ce7f2ef4 100644 --- a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst +++ b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst @@ -55,7 +55,7 @@ Test Case 1: Basic virtio interrupt test with 4 queues 1. Bind one NIC port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip 2. Launch VM1, set queues=4, vectors>=2xqueues+2, mq=on:: @@ -71,13 +71,13 @@ Test Case 1: Basic virtio interrupt test with 4 queues 3. 
Bind virtio port to vfio-pci:: - modprobe vfio enable_unsafe_noiommu_mode=1 - modprobe vfio-pci - ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x + modprobe vfio enable_unsafe_noiommu_mode=1 + modprobe vfio-pci + ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x 4. In VM, launch l3fwd-power sample:: - ./l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" --no-numa --parse-ptype + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" --no-numa --parse-ptype 5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up. @@ -91,7 +91,7 @@ Test Case 2: Basic virtio interrupt test with 16 queues 1. Bind one NIC port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip 2. Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on:: @@ -113,7 +113,7 @@ Test Case 2: Basic virtio interrupt test with 16 queues 4. In VM, launch l3fwd-power sample:: - ./l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa --parse-ptype + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa --parse-ptype 5. 
Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up. @@ -127,7 +127,7 @@ Test Case 3: Basic virtio-1.0 interrupt test with 4 queues 1. Bind one NIC port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip 2. Launch VM1, set queues=4, vectors>=2xqueues+2, mq=on:: @@ -149,7 +149,7 @@ Test Case 3: Basic virtio-1.0 interrupt test with 4 queues 4. In VM, launch l3fwd-power sample:: - ./l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" --no-numa --parse-ptype + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" --no-numa --parse-ptype 5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up. @@ -163,7 +163,7 @@ Test Case 4: Packed ring virtio interrupt test with 16 queues 1. Bind one NIC port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip 2. Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on:: @@ -185,7 +185,7 @@ Test Case 4: Packed ring virtio interrupt test with 16 queues 4. 
In VM, launch l3fwd-power sample:: - ./l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa --parse-ptype + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa --parse-ptype 5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up. @@ -198,7 +198,7 @@ Test Case 5: Basic virtio interrupt test with 16 queues and cbdma enabled 1. Bind 16 cbdma channels and one NIC port to vfio-pci, then launch testpmd by below command:: - ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip 2. Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on:: @@ -220,7 +220,7 @@ Test Case 5: Basic virtio interrupt test with 16 queues and cbdma enabled 4. 
In VM, launch l3fwd-power sample:: - ./l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa --parse-ptype + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa --parse-ptype 5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up. @@ -233,7 +233,7 @@ Test Case 6: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled 1. Bind four cbdma channels and one NIC port to vfio-pci, then launch testpmd by below command:: - ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip 2. Launch VM1, set queues=4, vectors>=2xqueues+2, mq=on:: @@ -255,7 +255,7 @@ Test Case 6: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled 4. In VM, launch l3fwd-power sample:: - ./l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" --no-numa --parse-ptype + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" --no-numa --parse-ptype 5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up. 
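The 16-queue `--config '(port,queue,lcore)'` strings used above are long and easy to mistype. A small helper (hypothetical, not part of DPDK or DTS) can generate the string for a given queue count, mapping queue N of port 0 to lcore N as these test cases do:

```shell
# gen_l3fwd_config QUEUES [PORT]
# Emit an l3fwd-power --config string "(port,0,0),(port,1,1),..." that
# maps each queue to the lcore with the same index (illustrative only;
# real runs may need an lcore offset to avoid the master core).
gen_l3fwd_config() {
    queues=$1
    port=${2:-0}
    cfg=""
    q=0
    while [ "$q" -lt "$queues" ]; do
        cfg="${cfg}(${port},${q},${q}),"
        q=$((q + 1))
    done
    printf '%s\n' "${cfg%,}"   # strip the trailing comma
}

gen_l3fwd_config 4   # prints (0,0,0),(0,1,1),(0,2,2),(0,3,3)
```

With `gen_l3fwd_config 16` the output matches the 16-queue mapping shown in the test steps above.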
@@ -268,7 +268,7 @@ Test Case 7: Packed ring virtio interrupt test with 16 queues and cbdma enabled 1. Bind 16 cbdma channels ports and one NIC port to vfio-pci, then launch testpmd by below command:: - ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip 2. Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on:: @@ -290,7 +290,7 @@ Test Case 7: Packed ring virtio interrupt test with 16 queues and cbdma enabled 4. In VM, launch l3fwd-power sample:: - ./l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa --parse-ptype + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa --parse-ptype 5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up. 
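The EAL `-c` coremasks in these commands (e.g. `-c 0x1ffff` for 17 lcores, `-c 0xc000` for lcores 14-15) are hexadecimal bitmaps of lcore ids. As a sanity check when adapting the test plan to another core layout, a tiny helper (hypothetical, not part of DPDK) can convert an lcore list to the mask; modern DPDK also accepts `-l 0-16` directly:

```shell
# cores_to_mask LCORE...
# OR together one bit per lcore id and print the hex mask expected by
# the EAL -c option (illustrative helper).
cores_to_mask() {
    mask=0
    for c in "$@"; do
        mask=$((mask | (1 << c)))
    done
    printf '0x%x\n' "$mask"
}

cores_to_mask 14 15   # prints 0xc000, the mask used for l3fwd-power above
```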
diff --git a/test_plans/vhost_virtio_user_interrupt_test_plan.rst b/test_plans/vhost_virtio_user_interrupt_test_plan.rst index 239b1671..be663b5f 100644 --- a/test_plans/vhost_virtio_user_interrupt_test_plan.rst +++ b/test_plans/vhost_virtio_user_interrupt_test_plan.rst @@ -46,12 +46,12 @@ flow: TG --> NIC --> Vhost --> Virtio 1. Bind one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend:: - ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --rxq=1 --txq=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --rxq=1 --txq=1 testpmd>start 2. Start l3fwd-power with a virtio-user device:: - ./l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ --vdev=virtio_user0,path=./vhost-net -- -p 1 -P --config="(0,0,14)" --parse-ptype 3. Send packets with packet generator, check the virtio-user related core can be wakeup status. @@ -67,7 +67,7 @@ flow: Tap --> Vhost-net --> Virtio 1. Start l3fwd-power with a virtio-user device, vhost-net as backend:: - ./l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ --vdev=virtio_user0,path=/dev/vhost-net -- -p 1 -P --config="(0,0,14)" --parse-ptype 2. Vhost-net will generate one tap device, normally, it's TAP0, config it and generate packets on it using ping cmd:: @@ -89,13 +89,13 @@ flow: Vhost <--> Virtio 1.
Start vhost-user side:: - ./testpmd -c 0x3000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i testpmd>set fwd mac testpmd>start 2. Start virtio-user side:: - ./testpmd -c 0xc000 -n 4 --no-pci --file-prefix=virtio --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net -- -i --tx-offloads=0x00 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc000 -n 4 --no-pci --file-prefix=virtio --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net -- -i --tx-offloads=0x00 testpmd>set fwd mac testpmd>start @@ -116,12 +116,12 @@ flow: TG --> NIC --> Vhost --> Virtio 1. Bind one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend:: - ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --rxq=1 --txq=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --rxq=1 --txq=1 testpmd>start 2. Start l3fwd-power with a virtio-user device:: - ./l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ --vdev=virtio_user0,path=./vhost-net,packed_vq=1 -- -p 1 -P --config="(0,0,14)" --parse-ptype 3. Send packets with packet generator, check the virtio-user related core can be wakeup status. @@ -137,7 +137,7 @@ flow: Tap --> Vhost-net --> Virtio 1. Start l3fwd-power with a virtio-user device, vhost-net as backend:: - ./l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ --vdev=virtio_user0,path=/dev/vhost-net,packed_vq=1 -- -p 1 -P --config="(0,0,14)" --parse-ptype 2. 
Vhost-net will generate one tap device, normally, it's TAP0, config it and generate packets on it using pind cmd:: @@ -159,13 +159,13 @@ flow: Vhost <--> Virtio 1. Start vhost-user side:: - ./testpmd -c 0x3000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i testpmd>set fwd mac testpmd>start 2. Start virtio-user side:: - ./testpmd -c 0xc000 -n 4 --no-pci --file-prefix=virtio --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1 -- -i --tx-offloads=0x00 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc000 -n 4 --no-pci --file-prefix=virtio --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1 -- -i --tx-offloads=0x00 testpmd>set fwd mac testpmd>start @@ -186,13 +186,13 @@ flow: Vhost <--> Virtio 1. Bind one cbdma port to vfio-pci driver, then start vhost-user side:: - ./testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i testpmd>set fwd mac testpmd>start 2. Start virtio-user side:: - ./testpmd -c 0xc000 -n 4 --no-pci --file-prefix=virtio --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net -- -i --tx-offloads=0x00 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc000 -n 4 --no-pci --file-prefix=virtio --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net -- -i --tx-offloads=0x00 testpmd>set fwd mac testpmd>start @@ -213,12 +213,12 @@ flow: TG --> NIC --> Vhost --> Virtio 1. 
Bind one cbdma port and one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend:: - ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --rxq=1 --txq=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --rxq=1 --txq=1 testpmd>start 2. Start l3fwd-power with a virtio-user device:: - ./l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ --vdev=virtio_user0,path=./vhost-net -- -p 1 -P --config="(0,0,14)" --parse-ptype 3. Send packets with packet generator, check the virtio-user related core can be wakeup status. @@ -234,13 +234,13 @@ flow: Vhost <--> Virtio 1. Bind one cbdma port to vfio-pci driver, then start vhost-user side:: - ./testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i testpmd>set fwd mac testpmd>start 2. Start virtio-user side:: - ./testpmd -c 0xc000 -n 4 --no-pci --file-prefix=virtio --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1 -- -i --tx-offloads=0x00 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc000 -n 4 --no-pci --file-prefix=virtio --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1 -- -i --tx-offloads=0x00 testpmd>set fwd mac testpmd>start @@ -261,12 +261,12 @@ flow: TG --> NIC --> Vhost --> Virtio 1. 
Bind one cbdma port and one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend:: - ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --rxq=1 --txq=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --rxq=1 --txq=1 testpmd>start 2. Start l3fwd-power with a virtio-user device:: - ./l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \ --vdev=virtio_user0,path=./vhost-net,packed_vq=1 -- -p 1 -P --config="(0,0,14)" --parse-ptype 3. Send packets with packet generator, check the virtio-user related core can be wakeup status. diff --git a/test_plans/virtio_event_idx_interrupt_test_plan.rst b/test_plans/virtio_event_idx_interrupt_test_plan.rst index 4ffb4d20..3f67e335 100644 --- a/test_plans/virtio_event_idx_interrupt_test_plan.rst +++ b/test_plans/virtio_event_idx_interrupt_test_plan.rst @@ -52,7 +52,7 @@ Test Case 1: Compare interrupt times with and without split ring virtio event id 1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* - ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start @@ -85,7 +85,7 @@ Test Case 2: Split ring virtio-pci driver reload test 1. 
Bind one nic port to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* - ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 2. Launch VM:: @@ -107,8 +107,8 @@ Test Case 2: Split ring virtio-pci driver reload test 4. Reload virtio-net driver by below cmds:: ifconfig [ens3] down - ./dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net - ./dpdk-devbind.py -b virtio-pci [00:03.0] + ./usertools/dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net + ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0] 5. Check virtio device can receive packets again:: @@ -123,7 +123,7 @@ Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 1 1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* - ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 testpmd>start 2. Launch VM:: @@ -158,7 +158,7 @@ Test Case 4: Compare interrupt times with and without packed ring virtio event i 1. 
Bind one nic port to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* - ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start @@ -191,7 +191,7 @@ Test Case 5: Packed ring virtio-pci driver reload test 1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* - ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 2. Launch VM:: @@ -213,8 +213,8 @@ Test Case 5: Packed ring virtio-pci driver reload test 4. Reload virtio-net driver by below cmds:: ifconfig [ens3] down - ./dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net - ./dpdk-devbind.py -b virtio-pci [00:03.0] + ./usertools/dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net + ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0] 5. Check virtio device can receive packets again:: @@ -229,7 +229,7 @@ Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode 1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* - ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 testpmd>start 2. 
Launch VM:: @@ -264,7 +264,7 @@ Test Case 7: Split ring virtio-pci driver reload test with CBDMA enabled 1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* - ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 2. Launch VM:: @@ -286,8 +286,8 @@ Test Case 7: Split ring virtio-pci driver reload test with CBDMA enabled 4. Reload virtio-net driver by below cmds:: ifconfig [ens3] down - ./dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net - ./dpdk-devbind.py -b virtio-pci [00:03.0] + ./usertools/dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net + ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0] 5. Check virtio device can receive packets again:: @@ -302,7 +302,7 @@ Test Case 8: Wake up split ring virtio-net cores with event idx interrupt mode a 1. 
Bind one nic port and 16 cbdma channels to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* - ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 testpmd>start 2. Launch VM:: @@ -337,7 +337,7 @@ Test Case 9: Packed ring virtio-pci driver reload test with CBDMA enabled 1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* - ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 2. Launch VM:: @@ -359,8 +359,8 @@ Test Case 9: Packed ring virtio-pci driver reload test with CBDMA enabled 4. Reload virtio-net driver by below cmds:: ifconfig [ens3] down - ./dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net - ./dpdk-devbind.py -b virtio-pci [00:03.0] + ./usertools/dpdk-devbind.py -u [00:03.0] # [00:03.0] is the pci addr of virtio-net + ./usertools/dpdk-devbind.py -b virtio-pci [00:03.0] 5. 
Check virtio device can receive packets again:: @@ -375,7 +375,7 @@ Test Case 10: Wake up packed ring virtio-net cores with event idx interrupt mode 1. Bind one nic port and 16 cbdma channels to vfio-pci, then launch the vhost sample by below commands:: rm -rf vhost-net* - ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16 testpmd>start 2. Launch VM:: diff --git a/test_plans/virtio_pvp_regression_test_plan.rst b/test_plans/virtio_pvp_regression_test_plan.rst index 476b4274..15ea8c4d 100644 --- a/test_plans/virtio_pvp_regression_test_plan.rst +++ b/test_plans/virtio_pvp_regression_test_plan.rst @@ -52,7 +52,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path 1. Bind one port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-3 -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-3 -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -72,7 +72,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path 3. 
On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads:: - ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -91,7 +91,7 @@ Test Case 2: pvp test with virtio 0.95 non-mergeable path 1. Bind one port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-3 -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-3 -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -111,7 +111,7 @@ Test Case 2: pvp test with virtio 0.95 non-mergeable path 3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads:: - ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -130,7 +130,7 @@ Test Case 3: pvp test with virtio 0.95 vector_rx path 1. Bind one port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-3 -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-3 -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -150,7 +150,7 @@ Test Case 3: pvp test with virtio 0.95 vector_rx path 3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net:: - ./testpmd -c 0x7 -n 3 -a 0000:xx.00,vectorized -- -i \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -n 3 -a 0000:xx.00,vectorized -- -i \ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -169,7 +169,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path 1.
Bind one port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-3 -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-3 -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -189,7 +189,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path 3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads:: - ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -208,7 +208,7 @@ Test Case 5: pvp test with virtio 1.0 non-mergeable path 1. Bind one port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-3 -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-3 -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -228,7 +228,7 @@ Test Case 5: pvp test with virtio 1.0 non-mergeable path 3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads:: - ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -247,7 +247,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path 1. Bind one port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-3 -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-3 -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -267,7 +267,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path 3.
On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net:: - ./testpmd -c 0x7 -n 3 -a 0000:xx.00,vectorized -- -i \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -n 3 -a 0000:xx.00,vectorized -- -i \ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -286,7 +286,7 @@ Test Case 7: pvp test with virtio 1.1 mergeable path 1. Bind one port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-3 -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-3 -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -306,7 +306,7 @@ Test Case 7: pvp test with virtio 1.1 mergeable path 3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads:: - ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip \ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -325,7 +325,7 @@ Test Case 8: pvp test with virtio 1.1 non-mergeable path 1. Bind one port to vfio-pci, then launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-3 -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-3 -n 4 \ --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -345,7 +345,7 @@ Test Case 8: pvp test with virtio 1.1 non-mergeable path 3. 
On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads:: - ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip \ --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start diff --git a/test_plans/vm2vm_virtio_net_perf_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_test_plan.rst index d4c49851..d7194477 100644 --- a/test_plans/vm2vm_virtio_net_perf_test_plan.rst +++ b/test_plans/vm2vm_virtio_net_perf_test_plan.rst @@ -58,7 +58,7 @@ Test Case 1: VM2VM split ring vhost-user/virtio-net test with tcp traffic 1. Launch the Vhost sample on socket 0 by below commands:: rm -rf vhost-net* - ./dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024 testpmd>start @@ -113,7 +113,7 @@ Test Case 2: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp t 1. Launch the Vhost sample by below commands:: rm -rf vhost-net* - ./dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@00:04.0]' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@00:04.0]' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@00:04.1]' -- -i --nb-cores=2 --txd=1024 --rxd=1024 testpmd>start @@ -170,7 +170,7 @@ Test Case 3: VM2VM split ring vhost-user/virtio-net test with udp traffic 1. 
Launch the Vhost sample by below commands:: rm -rf vhost-net* - ./dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start @@ -225,7 +225,7 @@ Test Case 4: Check split ring virtio-net device capability 1. Launch the Vhost sample by below commands:: rm -rf vhost-net* - ./dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024 testpmd>start @@ -273,7 +273,7 @@ Test Case 5: VM2VM virtio-net split ring mergeable 8 queues CBDMA enable test wi 1. Launch the Vhost sample by below commands:: rm -rf vhost-net* - ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 testpmd>start @@ -324,7 +324,7 @@ Test Case 5: VM2VM virtio-net split ring mergeable 8 queues CBDMA enable test wi 7. 
Quit vhost ports and relaunch vhost ports w/o CBDMA channels:: - ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 testpmd>start @@ -339,7 +339,7 @@ Test Case 5: VM2VM virtio-net split ring mergeable 8 queues CBDMA enable test wi 10. Quit vhost ports and relaunch vhost ports with 1 queues:: - ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 testpmd>start @@ -366,7 +366,7 @@ Test Case 6: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable tes 1. Launch the Vhost sample by below commands:: rm -rf vhost-net* - ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 testpmd>start @@ -417,7 +417,7 @@ Test Case 6: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable tes 7. 
Quit vhost ports and relaunch vhost ports w/o CBDMA channels:: - ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 testpmd>start @@ -432,7 +432,7 @@ Test Case 6: VM2VM virtio-net split ring non-mergeable 8 queues CBDMA enable tes 10. Quit vhost ports and relaunch vhost ports with 1 queues:: - ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' \ --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=1 --txq=1 testpmd>start @@ -459,7 +459,7 @@ Test Case 7: VM2VM packed ring vhost-user/virtio-net test with tcp traffic 1. Launch the Vhost sample by below commands::,packed=on rm -rf vhost-net* - ./dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024 testpmd>start @@ -514,7 +514,7 @@ Test Case 8: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp 1. 
Launch the Vhost sample by below commands:: rm -rf vhost-net* - ./dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@00:04.0]' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@00:04.0]' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0@00:04.1]' -- -i --nb-cores=2 --txd=1024 --rxd=1024 testpmd>start @@ -571,7 +571,7 @@ Test Case 9: VM2VM packed ring vhost-user/virtio-net test with udp traffic 1. Launch the Vhost sample by below commands:: rm -rf vhost-net* - ./dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024 testpmd>start @@ -626,7 +626,7 @@ Test Case 10: Check packed ring virtio-net device capability 1. Launch the Vhost sample by below commands:: rm -rf vhost-net* - ./dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xF0000000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' \ --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=2 --txd=1024 --rxd=1024 testpmd>start @@ -674,7 +674,7 @@ Test Case 11: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test 1. 
Launch the Vhost sample by below commands::

     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start

@@ -731,7 +731,7 @@ Test Case 12: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable t

 1. Launch the Vhost sample by below commands::

     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3;txq4@00:04.4;txq5@00:04.5;txq6@00:04.6;txq7@00:04.7]' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start

diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst
index 30499bcd..903695ff 100644
--- a/test_plans/vm2vm_virtio_pmd_test_plan.rst
+++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst
@@ -49,7 +49,7 @@ Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path

 1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::

     rm -rf vhost-net*
-    ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

@@ -79,13 +79,13 @@ Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path

 3. On VM1, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net::

-    ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start

 4. On VM2, bind vdev with vfio-pci driver,then run testpmd, set txonly for virtio2 and send 64B packets, [0000:xx.00] is [Bus,Device,Function] of virtio-net::

-    ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
     testpmd>set fwd txonly
     testpmd>set txpkts 64
     testpmd>start tx_first 32

@@ -104,7 +104,7 @@ Test Case 2: VM2VM vhost-user/virtio-pmd with normal path

 1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::

     rm -rf vhost-net*
-    ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

@@ -134,13 +134,13 @@ Test Case 2: VM2VM vhost-user/virtio-pmd with normal path

 3. On VM1, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio1 ::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start

 4. On VM2, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio2 and send 64B packets ::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd txonly
     testpmd>set txpkts 64
     testpmd>start tx_first 32

@@ -159,7 +159,7 @@ Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path

 1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::

     rm -rf vhost-net*
-    ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

@@ -189,13 +189,13 @@ Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path

 3. On VM1, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net::

-    ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start

 4. On VM2, bind vdev with vfio-pci driver,then run testpmd, set txonly for virtio2, [0000:xx.00] is [Bus,Device,Function] of virtio-net::

-    ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
     testpmd>set fwd txonly
     testpmd>set txpkts 64
     testpmd>start tx_first 32

@@ -214,7 +214,7 @@ Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path

 1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::

     rm -rf vhost-net*
-    ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

@@ -244,13 +244,13 @@ Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path

 3. On VM1, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio1 ::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start

 4. On VM2, bind vdev with vfio-pci driver,then run testpmd, set txonly for virtio2 ::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd txonly
     testpmd>set txpkts 64
     testpmd>start tx_first 32

@@ -268,7 +268,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check

 1. Bind virtio with vfio-pci driver, launch the testpmd by below commands::

-    ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

@@ -311,7 +311,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check

 4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::

-    ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd rxonly
     testpmd>start

@@ -321,7 +321,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check

 6. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd mac
     testpmd>set txpkts 2000,2000,2000,2000

@@ -334,7 +334,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check

 9. Relaunch testpmd in VM1::

-    ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start

@@ -344,7 +344,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check

 11. Relaunch testpmd on VM2, send ten 64B packets from virtio-pmd on VM2::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>set burst 1
     testpmd>start tx_first 10

@@ -356,7 +356,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch

 1. Bind virtio with vfio-pci driver, launch the testpmd by below commands::

-    ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

@@ -399,7 +399,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch

 4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::

-    ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd rxonly
     testpmd>start

@@ -409,7 +409,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch

 6. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd mac
     testpmd>set txpkts 2000,2000,2000,2000

@@ -422,7 +422,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch

 9. Relaunch testpmd in VM1::

-    ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start

@@ -432,7 +432,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch

 11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd mac
     testpmd>set burst 1
     testpmd>start tx_first 10

@@ -444,7 +444,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch

 1. Bind virtio with vfio-pci driver, launch the testpmd by below commands::

-    ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

@@ -487,7 +487,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch

 4. Bind virtio with vfio-pci driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::

-    ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd rxonly
     testpmd>start

@@ -497,7 +497,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch

 6. On VM2, bind virtio with vfio-pci driver,then run testpmd, config tx_packets to 8k length with chain mode::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd mac
     testpmd>set txpkts 2000,2000,2000,2000

@@ -510,7 +510,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch

 9. Relaunch testpmd in VM1::

-    ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start

@@ -520,7 +520,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch

 11. Relaunch testpmd On VM2, send ten 64B packets from virtio-pmd on VM2::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd mac
     testpmd>set burst 1
     testpmd>start tx_first 10

@@ -533,7 +533,7 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path

 1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::

     rm -rf vhost-net*
-    ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start

@@ -563,13 +563,13 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path

 3. On VM1, bind vdev with vfio-pci driver,then run testpmd, set rxonly for virtio1 ::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start

 4. On VM2, bind vdev with vfio-pci driver,then run testpmd, set txonly for virtio2 ::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd txonly
     testpmd>set txpkts 64
     testpmd>start tx_first 32

@@ -587,7 +587,7 @@ Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable wi

 1. Bind 16 cbdma channels to vfio-pci driver, then launch the testpmd with 2 vhost port and 8 queues by below commands::

-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>vhost enable tx all
     testpmd>start

@@ -625,13 +625,13 @@ Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable wi

 4. Launch testpmd in VM1::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set mac fwd
     testpmd>start

 5. Launch testpmd in VM2, sent imix pkts from VM2::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set mac fwd
     testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
     testpmd>start tx_first 1

@@ -643,7 +643,7 @@ Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable wi

 7. Relaunch and start vhost side testpmd with below cmd::

-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start

@@ -661,7 +661,7 @@ Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDM

 1. Bind 16 cbdma channels to vfio-pci driver, then launch the testpmd with 2 vhost ports below commands::

-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
     testpmd>vhost enable tx all
     testpmd>start

@@ -699,13 +699,13 @@ Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDM

 4. Launch testpmd in VM1::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set mac fwd
     testpmd>start

 5. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 4 queues (queue0 to queue3) have packets rx/tx::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set mac fwd
     testpmd>set txpkts 64,256,512,1024,2000,64,256,512,1024,2000
     testpmd>start tx_first 32

@@ -714,7 +714,7 @@ Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDM

 6. Relaunch and start vhost side testpmd with eight queues::

-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>start

@@ -733,7 +733,7 @@ Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable

 1. Bind 16 cbdma channels to vfio-pci driver, then launch the testpmd with 2 vhost port and 8 queues by below commands::

     rm -rf vhost-net*
-    ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
     --vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
     testpmd>vhost enable tx all
     testpmd>start

@@ -771,13 +771,13 @@ Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable

 4. Launch testpmd in VM1::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set mac fwd
     testpmd>start

 5. Launch testpmd in VM2 and send imix pkts, check imix packets can looped between two VMs for 1 mins and 8 queues all have packets rx/tx::

-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set mac fwd
     testpmd>set txpkts 64,256,512,1024,20000,64,256,512,1024,20000
     testpmd>start tx_first 32

@@ -803,7 +803,7 @@ Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable

     modprobe vfio-pci
     echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
     ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:00:05.0
-    ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txq=8 --rxq=8 --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set mac fwd
     testpmd>set txpkts 64,256,512,1024,20000,64,256,512,1024,20000
     testpmd>start tx_first 32

diff --git a/test_plans/vm2vm_virtio_user_test_plan.rst b/test_plans/vm2vm_virtio_user_test_plan.rst
index 9554fa40..f35cd831 100644
--- a/test_plans/vm2vm_virtio_user_test_plan.rst
+++ b/test_plans/vm2vm_virtio_user_test_plan.rst
@@ -51,13 +51,13 @@ Test Case 1: packed virtqueue vm2vm mergeable path test

 1. Launch vhost by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -70,7 +70,7 @@ Test Case 1: packed virtqueue vm2vm mergeable path test

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -93,7 +93,7 @@ Test Case 1: packed virtqueue vm2vm mergeable path test

 7. Launch testpmd by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -105,7 +105,7 @@ Test Case 1: packed virtqueue vm2vm mergeable path test

 9. Launch virtio-user1 by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set burst 1

@@ -129,13 +129,13 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test

 1. Launch testpmd by below command::

-    ./testpmd -l 1-2 -n 4 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by below command::

-    ./testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -148,7 +148,7 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test

 4. Launch virtio-user0 and send 8k length packets::

-    ./testpmd -n 4 -l 5-6 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -166,7 +166,7 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test

 6. Launch testpmd by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -178,7 +178,7 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test

 8. Launch virtio-user1 by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -199,13 +199,13 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test

 1. Launch testpmd by below command::

-    ./testpmd -l 1-2 -n 4 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by below command::

-    ./testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -216,7 +216,7 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test

 4. Launch virtio-user0 and send 8k length packets::

-    ./testpmd -n 4 -l 5-6 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -233,7 +233,7 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test

 6. Launch testpmd by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -245,7 +245,7 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test

 8. Launch virtio-user1 by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -262,13 +262,13 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test

 1. Launch testpmd by below command::

-    ./testpmd -l 1-2 -n 4 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by below command::

-    ./testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
     -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256

@@ -281,7 +281,7 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test

 4. Launch virtio-user0 and send 8k length packets::

-    ./testpmd -n 4 -l 5-6 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
     -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256

@@ -298,7 +298,7 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test

 6. Launch testpmd by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -310,7 +310,7 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test

 8. Launch virtio-user1 by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
     -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256

@@ -327,13 +327,13 @@ Test Case 5: split virtqueue vm2vm mergeable path test

 1. Launch vhost by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -346,7 +346,7 @@ Test Case 5: split virtqueue vm2vm mergeable path test

 4. Launch virtio-user0 and send 8k length packets::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -369,7 +369,7 @@ Test Case 5: split virtqueue vm2vm mergeable path test

 7. Launch testpmd by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -381,7 +381,7 @@ Test Case 5: split virtqueue vm2vm mergeable path test

 9. Launch virtio-user1 by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -406,13 +406,13 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test

 1. Launch testpmd by below command::

-    ./testpmd -l 1-2 -n 4 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by below command::

-    ./testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -425,7 +425,7 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test

 4. Launch virtio-user0 and send 8k length packets::

-    ./testpmd -n 4 -l 5-6 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -442,7 +442,7 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test

 6. Launch testpmd by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -454,7 +454,7 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test

 8. Launch virtio-user1 by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256

@@ -475,13 +475,13 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test

 1. Launch testpmd by below command::

-    ./testpmd -l 1-2 -n 4 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by below command::

-    ./testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip

@@ -492,7 +492,7 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test

 4. Launch virtio-user0 and send 8k length packets::

-    ./testpmd -n 4 -l 5-6 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip

@@ -509,7 +509,7 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test

 6. Launch testpmd by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly

@@ -521,7 +521,7 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test

 8. Launch virtio-user1 by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip

@@ -538,13 +538,13 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test

 1.
Launch testpmd by below command:: - ./testpmd -l 1-2 -n 4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \ -i --nb-cores=1 --no-flush-rx 2. Launch virtio-user1 by below command:: - ./testpmd -n 4 -l 7-8 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ --no-pci --file-prefix=virtio1 \ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \ -- -i --nb-cores=1 --txd=256 --rxd=256 @@ -557,7 +557,7 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test 4. Launch virtio-user0 and send 8k length packets:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \ -- -i --nb-cores=1 --txd=256 --rxd=256 @@ -574,7 +574,7 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test 6. Launch testpmd by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \ -i --nb-cores=1 --no-flush-rx testpmd>set fwd rxonly @@ -586,7 +586,7 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test 8. Launch virtio-user1 by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ --no-pci \ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \ -- -i --nb-cores=1 --txd=256 --rxd=256 @@ -603,13 +603,13 @@ Test Case 9: split virtqueue vm2vm vector_rx path test 1. 
Launch testpmd by below command:: - ./testpmd -l 1-2 -n 4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \ -i --nb-cores=1 --no-flush-rx 2. Launch virtio-user1 by below command:: - ./testpmd -n 4 -l 7-8 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ --no-pci --file-prefix=virtio1 \ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \ -- -i --nb-cores=1 --txd=256 --rxd=256 @@ -620,7 +620,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test 4. Launch virtio-user0 and send 8k length packets:: - ./testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \ -- -i --nb-cores=1 --txd=256 --rxd=256 @@ -637,7 +637,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test 6. Launch testpmd by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \ -i --nb-cores=1 --no-flush-rx testpmd>set fwd rxonly @@ -649,7 +649,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test 8. Launch virtio-user1 by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ --no-pci \ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \ -- -i --nb-cores=1 --txd=256 --rxd=256 @@ -666,13 +666,13 @@ Test Case 10: packed virtqueue vm2vm vectorized path test 1. 
Launch testpmd by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \ -i --nb-cores=1 --no-flush-rx 2. Launch virtio-user1 by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \ -- -i --nb-cores=1 --txd=256 --rxd=256 @@ -685,7 +685,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test 4. Launch virtio-user0 and send 8k length packets:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \ -- -i --nb-cores=1 --txd=256 --rxd=256 @@ -702,7 +702,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test 6. Launch testpmd by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \ -i --nb-cores=1 --no-flush-rx testpmd>set fwd rxonly @@ -714,7 +714,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test 8. 
Launch virtio-user1 by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ --no-pci --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \ -- -i --nb-cores=1 --txd=256 --rxd=256 @@ -731,13 +731,13 @@ Test Case 11: packed virtqueue vm2vm vectorized path test with ring size is not 1. Launch testpmd by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \ -i --nb-cores=1 --no-flush-rx 2. Launch virtio-user1 by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \ -- -i --nb-cores=1 --txd=255 --rxd=255 @@ -750,7 +750,7 @@ Test Case 11: packed virtqueue vm2vm vectorized path test with ring size is not 4. Launch virtio-user0 and send 8k length packets:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \ -- -i --nb-cores=1 --txd=255 --rxd=255 @@ -767,7 +767,7 @@ Test Case 11: packed virtqueue vm2vm vectorized path test with ring size is not 6. 
Launch testpmd by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost \ --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \ -i --nb-cores=1 --no-flush-rx testpmd>set fwd rxonly @@ -779,7 +779,7 @@ Test Case 11: packed virtqueue vm2vm vectorized path test with ring size is not 8. Launch virtio-user1 by below command:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \ --no-pci --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \ -- -i --nb-cores=1 --txd=255 --rxd=255 diff --git a/test_plans/vswitch_sample_cbdma_test_plan.rst b/test_plans/vswitch_sample_cbdma_test_plan.rst index 9abc3a99..659609e5 100644 --- a/test_plans/vswitch_sample_cbdma_test_plan.rst +++ b/test_plans/vswitch_sample_cbdma_test_plan.rst @@ -66,12 +66,12 @@ Test Case1: PVP performance check with CBDMA channel using vhost async driver 2. On host, launch dpdk-vhost by below command:: - ./dpdk-vhost -l 31-32 -n 4 -- \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 31-32 -n 4 -- \ -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@0000:00:04.0] --client 3. Launch virtio-user with packed ring:: - ./dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 4. 
Start pkts from virtio-user side to let vswitch know the mac addr:: @@ -83,14 +83,14 @@ Test Case1: PVP performance check with CBDMA channel using vhost async driver 6. Quit and re-launch virtio-user with packed ring size not power of 2:: - ./dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,packed_vq=1,server=1,queue_size=1025 -- -i --rxq=1 --txq=1 --txd=1025 --rxd=1025 --nb-cores=1 7. Re-test step 4-5, record performance of different packet length. 8. Quit and re-launch virtio-user with split ring:: - ./dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1,mrg_rxbuf=0,in_order=1,vectorized=1,server=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 9. Re-test step 4-5, record performance of different packet length. @@ -102,15 +102,15 @@ Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver 2. On host, launch dpdk-vhost by below command:: - ./dpdk-vhost -l 26-28 -n 4 -- \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- \ -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:01.0,txd1@0000:00:01.1] --client 3. 
launch two virtio-user ports:: - ./dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - ./dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 4. Start pkts from two virtio-user side individually to let vswitch know the mac addr:: @@ -140,15 +140,15 @@ Test Case3: VM2VM forwarding test with two CBDMA channels 2. On host, launch dpdk-vhost by below command:: - ./dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \ --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client 3. Launch virtio-user:: - ./dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1,server=1,mrg_rxbuf=1,in_order=0,packed_vq=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 - ./dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \ --vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1,server=1,mrg_rxbuf=1,in_order=1,vectorized=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 4. 
Loop pkts between two virtio-user sides, record performance number with 64b/2000b/8000b/IMIX pkts can get expected:: @@ -187,7 +187,7 @@ Test Case4: VM2VM test with cbdma channels register/unregister stable check 2. On host, launch dpdk-vhost by below command:: - ./dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \ --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client 3. Start VM0 with qemu-5.2.0:: @@ -225,7 +225,7 @@ Test Case4: VM2VM test with cbdma channels register/unregister stable check 6. Start testpmd in VMs seperately:: - ./dpdk-testpmd -l 1-2 -n 4 -- -i --rxq=1 --txq=1 --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 -- -i --rxq=1 --txq=1 --nb-cores=1 --txd=1024 --rxd=1024 7. Loop pkts between two virtio-user sides, record performance number with 64b/2000b/8000b/IMIX pkts can get expected:: @@ -267,7 +267,7 @@ Test Case5: VM2VM split ring test with iperf and reconnect stable check 2. On host, launch dpdk-vhost by below command:: - ./dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \ --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] --client 3. Start VM0 with qemu-5.2.0:: @@ -326,7 +326,7 @@ Test Case6: VM2VM packed ring test with iperf and reconnect stable test 2. On host, launch dpdk-vhost by below command:: - ./dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \ + ./x86_64-native-linuxapp-gcc/examples/dpdk-vhost -l 26-28 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \ --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@0000:00:04.0,txd1@0000:00:04.1] 3. 
Start VM0 with qemu-5.2.0::