From patchwork Thu Apr 14 14:05:47 2022
X-Patchwork-Submitter: Yu Jiang <yux.jiang@intel.com>
X-Patchwork-Id: 109712
From: Yu Jiang <yux.jiang@intel.com>
To: lijuan.tu@intel.com, dts@dpdk.org
Cc: Yu Jiang <yux.jiang@intel.com>
Subject: [dts][PATCH V1 1/3] test_plans/vf_interrupt_pmd&sriov_kvm: modify test plan to adapt meson build
Date: Thu, 14 Apr 2022 14:05:47 +0000
Message-Id: <20220414140549.1579777-2-yux.jiang@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220414140549.1579777-1-yux.jiang@intel.com>
References: <20220414140549.1579777-1-yux.jiang@intel.com>

test_plans/vf_interrupt_pmd&sriov_kvm: modify test plan to adapt meson build

Signed-off-by: Yu Jiang <yux.jiang@intel.com>
---
 test_plans/sriov_kvm_test_plan.rst        | 28 +++++++++++------------
 test_plans/vf_interrupt_pmd_test_plan.rst |  2 +-
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/test_plans/sriov_kvm_test_plan.rst b/test_plans/sriov_kvm_test_plan.rst
index ff3ecd07..1adc4dc2 100644
--- a/test_plans/sriov_kvm_test_plan.rst
+++ b/test_plans/sriov_kvm_test_plan.rst
@@ -57,7 +57,7 @@ used to generate 2VFs and make them in pci-stub modes.::
 
 Start PF driver on the Host and skip the VFs.::
 
-    ./x86_64-default-linuxapp-gcc/app/testpmd -c f \
+    ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f \
         -n 4 -b 0000:08:10.0 -b 0000:08:10.2 -- -i
 
 For VM0 start up command, you can refer to below command.::
 
@@ -95,11 +95,11 @@ If you want to run all common 2VM cases, please run testpmd on VM0 and VM1 and
 start traffic forward on the VM hosts.
 Some specific prerequisites need to be set up in each case::
 
-    VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+    VF0 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
     VF0 testpmd-> set fwd rxonly
     VF0 testpmd-> start
 
-    VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+    VF1 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
     VF1 testpmd-> set fwd mac
     VF1 testpmd-> start
 
@@ -109,12 +109,12 @@ Test Case: InterVM communication test on 2VMs
 
 Set the VF0 destination mac address to VF1 mac address, packets send from VF0
 will be forwarded to VF1 and then send out::
 
-    VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+    VF1 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
     VF1 testpmd-> show port info 0
     VF1 testpmd-> set fwd mac
     VF1 testpmd-> start
 
-    VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- --eth-peer=0,"VF1 mac" -i
+    VF0 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- --eth-peer=0,"VF1 mac" -i
     VF0 testpmd-> set fwd mac
     VF0 testpmd-> start
 
@@ -255,16 +255,16 @@ driver will has been already attached to VMs::
 
     On PF
     ./tools/pci_unbind.py --bind=igb_uio 0000:08:00.0
     echo 4 > /sys/bus/pci/devices/0000\:08\:00.0/max_vfs
-    ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -b 0000:08:10.0 -b 0000:08:10.2 -b 0000:08:10.4 -b 0000:08:10.6 -- -i
+    ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -b 0000:08:10.0 -b 0000:08:10.2 -b 0000:08:10.4 -b 0000:08:10.6 -- -i
 
 If you want to run all common 4VM cases, please run testpmd on VM0, VM1, VM2
 and VM3 and start traffic forward on the VM hosts.
 Some specific prerequisites are set up in each case::
 
-    VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
-    VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
-    VF2 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
-    VF3 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+    VF0 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
+    VF1 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
+    VF2 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
+    VF3 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
 
 Test Case: Scaling InterVM communication on 4VFs
 ==================================================
@@ -272,21 +272,21 @@ Test Case: Scaling InterVM communication on 4VFs
 
 Set the VF0 destination mac address to VF1 mac address, packets send from VF0
 will be forwarded to VF1 and then send out.
 Similar for VF2 and VF3::
 
-    VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+    VF1 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
     VF1 testpmd-> show port info 0
     VF1 testpmd-> set fwd mac
     VF1 testpmd-> start
 
-    VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- --eth-peer=0,"VF1 mac" -i
+    VF0 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- --eth-peer=0,"VF1 mac" -i
     VF0 testpmd-> set fwd mac
     VF0 testpmd-> start
 
-    VF3 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
+    VF3 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- -i
     VF3 testpmd-> show port info 0
     VF3 testpmd-> set fwd mac
     VF3 testpmd-> start
 
-    VF2 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- --eth-peer=0,"VF3 mac" -i
+    VF2 ./x86_64-default-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -- --eth-peer=0,"VF3 mac" -i
     VF2 testpmd-> set fwd mac
     VF2 testpmd-> start
 
diff --git a/test_plans/vf_interrupt_pmd_test_plan.rst b/test_plans/vf_interrupt_pmd_test_plan.rst
index 1988b43d..ac49c5ed 100644
--- a/test_plans/vf_interrupt_pmd_test_plan.rst
+++ b/test_plans/vf_interrupt_pmd_test_plan.rst
@@ -246,7 +246,7 @@ Test Case6: VF multi-queue interrupt in VM with vfio-pci on i40e
 
 4.Start l3fwd-power in VM::
 
-    ./build/l3fwd-power -l 0-3 -n 4 -m 2048 -- -P -p 0x1 --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)"
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 0-3 -n 4 -m 2048 -- -P -p 0x1 --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)"
 
 5. Send UDP packets with random ip and dest mac = vf mac addr::
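
For step 5 of Test Case6, the UDP packets with random ip can be generated from the tester
with a scapy snippet along the lines below. This is only a sketch: the interface name
"ens260f0" and the vf mac value are placeholders and must be replaced with the real
tester port and the VF mac address of the setup::

    # Sketch only: tester_iface and vf_mac are assumed placeholders.
    import random
    from scapy.all import Ether, IP, UDP, sendp

    vf_mac = "00:11:22:33:44:55"   # dest mac = vf mac addr, as shown in the VM
    tester_iface = "ens260f0"      # tester port connected to the VF's PF

    # Random source/destination IPs spread the flows over the VF RX queues, so
    # each queue in --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)" receives traffic
    # and can trigger its own interrupt.
    pkts = [Ether(dst=vf_mac) /
            IP(src="10.0.0.%d" % random.randint(2, 254),
               dst="10.1.0.%d" % random.randint(2, 254)) /
            UDP(sport=1024, dport=1024)
            for _ in range(32)]
    sendp(pkts, iface=tester_iface, verbose=False)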
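
Similarly, for the InterVM communication cases in sriov_kvm_test_plan.rst, traffic towards
VF0 can be generated from the tester with a minimal scapy sketch such as the one below
(again, the mac address and interface name are assumptions to be replaced with values from
"show port info 0" and the tester configuration). Whether the packets reached VF1 and were
sent back out can then be checked with "show port stats all" in the testpmd sessions::

    # Sketch only: vf0_mac and tester_iface are assumed placeholders.
    from scapy.all import Ether, IP, Raw, sendp

    vf0_mac = "52:54:00:00:00:01"  # mac of VF0 reported by "show port info 0" in VM0
    tester_iface = "ens260f0"      # tester port connected to the PF

    pkt = Ether(dst=vf0_mac) / IP(src="10.0.0.1", dst="10.0.0.2") / Raw(b"x" * 60)
    # VF0 runs "set fwd mac" with --eth-peer pointing at VF1's mac, so these
    # packets should appear in VF1's RX counters and be forwarded out again.
    sendp(pkt, iface=tester_iface, count=100, verbose=False)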