From patchwork Thu Feb 24 11:23:20 2022
X-Patchwork-Submitter: Yu Jiang <yux.jiang@intel.com>
X-Patchwork-Id: 108218
From: Yu Jiang <yux.jiang@intel.com>
To: lijuan.tu@intel.com, dts@dpdk.org
Cc: Yu Jiang <yux.jiang@intel.com>
Subject: [dts][PATCH V2 1/6] test_plans/*: modify test plan to adapt meson build
Date: Thu, 24 Feb 2022 11:23:20 +0000
Message-Id: <20220224112325.1488073-2-yux.jiang@intel.com>
In-Reply-To: <20220224112325.1488073-1-yux.jiang@intel.com>
References: <20220224112325.1488073-1-yux.jiang@intel.com>
X-Mailer: git-send-email 2.25.1

test_plans/*: modify test plans to adapt to the meson build, including
vf_pf_reset, vhost_qemu_mtu, vm2vm_virtio_user and vmdq_dcb.

Signed-off-by: Yu Jiang <yux.jiang@intel.com>
---
 test_plans/vf_pf_reset_test_plan.rst       | 22 +++++++++++-----------
 test_plans/vhost_qemu_mtu_test_plan.rst    |  4 ++--
 test_plans/vm2vm_virtio_user_test_plan.rst |  4 ++--
 test_plans/vmdq_dcb_test_plan.rst          |  4 ----
 4 files changed, 15 insertions(+), 19 deletions(-)

diff --git a/test_plans/vf_pf_reset_test_plan.rst b/test_plans/vf_pf_reset_test_plan.rst
index 574d4510..d3a9f505 100644
--- a/test_plans/vf_pf_reset_test_plan.rst
+++ b/test_plans/vf_pf_reset_test_plan.rst
@@ -90,7 +90,7 @@ Test Case 1: vf reset -- create two vfs on one pf

 5. Run testpmd::

-      ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -- -i \
+      ./<build_target>/app/dpdk-testpmd -c 0x0f -n 4 -- -i \
       --portmask=0x3
       testpmd> set fwd mac
       testpmd> start
@@ -159,11 +159,11 @@ Test Case 2: vf reset -- create two vfs on one pf, run testpmd separately

 2. Start testpmd on two vf ports::

-      ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 \
+      ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 \
       --socket-mem 1024,1024 -a 81:02.0 --file-prefix=test1 \
       -- -i --eth-peer=0,00:11:22:33:44:12 \

-      ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf0 -n 4 \
+      ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 \
       --socket-mem 1024,1024 -a 81:02.1 --file-prefix=test2 \
       -- -i

@@ -207,7 +207,7 @@ Test Case 3: vf reset -- create one vf on each pf

 3. Start one testpmd on two vf ports::

-      ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -- -i \
+      ./<build_target>/app/dpdk-testpmd -c 0x0f -n 4 -- -i \
       --portmask=0x3

 4. Start forwarding::
@@ -234,7 +234,7 @@ Test Case 4: vlan rx restore -- vf reset all ports

 1. Execute the step1-step3 of test case 1, then start the testpmd::

-      ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -- -i \
+      ./<build_target>/app/dpdk-testpmd -c 0x0f -n 4 -- -i \
       --portmask=0x3

       testpmd> set fwd mac
@@ -289,7 +289,7 @@ test Case 5: vlan rx restore -- vf reset one port

 1. Execute the step1-step3 of test case 1, then start the testpmd::

-      ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -- -i \
+      ./<build_target>/app/dpdk-testpmd -c 0x0f -n 4 -- -i \
       --portmask=0x3

       testpmd> set fwd mac
@@ -429,7 +429,7 @@ Test Case 7: vlan tx restore

 2. Run testpmd::

-      ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -- -i \
+      ./<build_target>/app/dpdk-testpmd -c 0x0f -n 4 -- -i \
       --portmask=0x3

 3. Add tx vlan offload on VF1 port, take care the first param is port,
@@ -473,7 +473,7 @@ test Case 8: MAC address restore

 3. Start testpmd on two vf ports::

-      ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 \
+      ./<build_target>/app/dpdk-testpmd -c 0x0f -n 4 \
       -- -i --portmask=0x3

 4. Add MAC address to the vf0 ports::
@@ -544,7 +544,7 @@ test Case 9: vf reset (two vfs passed through to one VM)
    bind them to igb_uio driver,and then start testpmd::

       ./usertools/dpdk-devbind.py -b igb_uio 00:05.0 00:05.1
-      ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 \
+      ./<build_target>/app/dpdk-testpmd -c 0x0f -n 4 \
       -a 00:05.0 -a 00:05.1 -- -i --portmask=0x3

 5. Add MAC address to the vf0 ports, set it in mac forward mode::
@@ -618,14 +618,14 @@ test Case 10: vf reset (two vfs passed through to two VM)
    bind the port to igb_uio, then start testpmd on vf0 port::

       ./tools/dpdk_nic_bind.py --bind=igb_uio 00:05.0
-      ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 \
+      ./<build_target>/app/dpdk-testpmd -c 0xf -n 4 \
       -- -i --eth-peer=0,vf1port_macaddr \

    login vm1, got VF1 pci device id in vm1, assume it's 00:06.0,
    bind the port to igb_uio, then start testpmd on vf1 port::

       ./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0
-      ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf0 -n 4 \
+      ./<build_target>/app/dpdk-testpmd -c 0xf0 -n 4 \
       -- -i

 5. Add vlan on vf0 in vm0, and set fwd mode::
diff --git a/test_plans/vhost_qemu_mtu_test_plan.rst b/test_plans/vhost_qemu_mtu_test_plan.rst
index 60b01cfb..62c2f7f8 100644
--- a/test_plans/vhost_qemu_mtu_test_plan.rst
+++ b/test_plans/vhost_qemu_mtu_test_plan.rst
@@ -46,7 +46,7 @@ Test Case: Test the MTU in virtio-net
 =====================================
 1. Launch the testpmd by below commands on host, and config mtu::

-    ./testpmd -c 0xc -n 4 \
+    ./<build_target>/app/dpdk-testpmd -c 0xc -n 4 \
     --vdev 'net_vhost0,iface=vhost-net,queues=1' \
     -- -i --txd=512 --rxd=128 --nb-cores=1 --port-topology=chained
     testpmd> set fwd mac
@@ -70,7 +70,7 @@ Test Case: Test the MTU in virtio-net

 4. Bind the virtio driver to igb_uio, launch testpmd in VM, and verify the mtu in port info is 9000::

-    ./testpmd -c 0x03 -n 3 \
+    ./<build_target>/app/dpdk-testpmd -c 0x03 -n 3 \
     -- -i --txd=512 --rxd=128 --tx-offloads=0x0 --enable-hw-vlan-strip
     testpmd> set fwd mac
     testpmd> start
diff --git a/test_plans/vm2vm_virtio_user_test_plan.rst b/test_plans/vm2vm_virtio_user_test_plan.rst
index 6fc5f89f..c3aa328e 100644
--- a/test_plans/vm2vm_virtio_user_test_plan.rst
+++ b/test_plans/vm2vm_virtio_user_test_plan.rst
@@ -1599,13 +1599,13 @@ Test Case 24: packed virtqueue vm2vm vectorized-tx path multi-queues test indire

 1. Launch vhost by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \
+    ./<build_target>/app/dpdk-testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx

 2. Launch virtio-user1 by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \
+    ./<build_target>/app/dpdk-testpmd -n 4 -l 7-8 --no-pci --file-prefix=virtio1 --force-max-simd-bitwidth=512 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set fwd rxonly
diff --git a/test_plans/vmdq_dcb_test_plan.rst b/test_plans/vmdq_dcb_test_plan.rst
index 1c9b82ef..a4beaa93 100644
--- a/test_plans/vmdq_dcb_test_plan.rst
+++ b/test_plans/vmdq_dcb_test_plan.rst
@@ -64,7 +64,6 @@ Prerequisites
    to the pools numbers(inclusive) and the VLAN user priority field increments from
    0 to 7 (inclusive) for each VLAN ID.

 - Build vmdq_dcb example,
-    make: make -C examples/vmdq_dcb RTE_SDK=`pwd` T=x86_64-native-linuxapp-gcc
     meson: ./<build_target>/examples/dpdk-vmdq_dcb -c 0xff -n 4 -- -p 0x3 --nb-pools 32 --nb-tcs 4 --enable-rss

 Test Case 1: Verify VMDQ & DCB with 32 Pools and 4 TCs
@@ -72,7 +71,6 @@

 1. Run the application as the following::

-    make: ./examples/vmdq_dcb/build/vmdq_dcb_app -c 0xff -n 4 -- -p 0x3 --nb-pools 32 --nb-tcs 4 --enable-rss
     meson: ./<build_target>/examples/dpdk-vmdq_dcb -c 0xff -n 4 -- -p 0x3 --nb-pools 32 --nb-tcs 4 --enable-rss

 2. Start traffic transmission using approx 10% of line rate.
@@ -98,7 +96,6 @@ Test Case 2: Verify VMDQ & DCB with 16 Pools and 8 TCs

 2. Repeat Test Case 1, with `--nb-pools 16` and `--nb-tcs 8` of the sample application::

-    make: ./examples/vmdq_dcb/build/vmdq_dcb_app -c 0xff -n 4 -- -p 0x3 --nb-pools 16 --nb-tcs 8 --enable-rss
     meson: ./<build_target>/examples/dpdk-vmdq_dcb -c 0xff -n 4 -- -p 0x3 --nb-pools 16 --nb-tcs 8 --enable-rss

 Expected result:
@@ -111,7 +108,6 @@ Expected result:

 4. Repeat Test Case 1, with `--nb-pools 16` and `--nb-tcs 8` of the sample application::

-    make: ./examples/vmdq_dcb/build/vmdq_dcb_app -c 0xff -n 4 -- -p 0x3 --nb-pools 16 --nb-tcs 8 --enable-rss
     meson: ./<build_target>/examples/dpdk-vmdq_dcb -c 0xff -n 4 -- -p 0x3 --nb-pools 16 --nb-tcs 8 --enable-rss

 Expected result:
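
Note: `<build_target>` in the updated commands stands for the meson build directory. As a minimal sketch of how the invoked binaries are produced with the standard DPDK meson/ninja flow (the directory name x86_64-native-linuxapp-gcc and the example selection below are illustrative only, not part of this patch)::

    # configure a build directory and enable the vmdq_dcb sample application
    meson setup -Dexamples=vmdq_dcb x86_64-native-linuxapp-gcc
    ninja -C x86_64-native-linuxapp-gcc

    # testpmd and example binaries then live under the build directory
    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x0f -n 4 -- -i --portmask=0x3
    ./x86_64-native-linuxapp-gcc/examples/dpdk-vmdq_dcb -c 0xff -n 4 -- -p 0x3 --nb-pools 32 --nb-tcs 4 --enable-rss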