From patchwork Sat Jan 22 18:20:35 2022
X-Patchwork-Submitter: Yu Jiang
X-Patchwork-Id: 106220
From: Yu Jiang
To: lijuan.tu@intel.com, dts@dpdk.org
Cc: Yu Jiang
Subject: [dts][PATCH V1 2/4] test_plans/*: modify test plan to adapt meson build
Date: Sat, 22 Jan 2022 18:20:35 +0000
Message-Id: <20220122182037.921953-3-yux.jiang@intel.com>
In-Reply-To: <20220122182037.921953-1-yux.jiang@intel.com>
References: <20220122182037.921953-1-yux.jiang@intel.com>

test_plans/*: modify test plan to adapt meson build

Signed-off-by: Yu Jiang
---
 test_plans/l2tp_esp_coverage_test_plan.rst | 12 +--
 test_plans/l3fwdacl_test_plan.rst | 39 +++++----
 test_plans/large_vf_test_plan.rst | 10 +--
 test_plans/link_flowctrl_test_plan.rst | 2 +-
 .../link_status_interrupt_test_plan.rst | 9 ++-
 ...ack_multi_paths_port_restart_test_plan.rst | 40 +++++-----
 .../loopback_multi_queues_test_plan.rst | 80 +++++++++----------
 test_plans/mac_filter_test_plan.rst | 2 +-
 test_plans/macsec_for_ixgbe_test_plan.rst | 10 +--
 ...ious_driver_event_indication_test_plan.rst | 8 +-
 .../metering_and_policing_test_plan.rst | 28 +++----
 test_plans/mtu_update_test_plan.rst | 2 +-
 test_plans/multiple_pthread_test_plan.rst | 68 ++++++++--------
 test_plans/ptpclient_test_plan.rst | 10 ++-
 test_plans/ptype_mapping_test_plan.rst | 2 +-
 test_plans/qinq_filter_test_plan.rst | 16 ++--
 test_plans/qos_api_test_plan.rst | 18 ++---
 test_plans/queue_region_test_plan.rst | 2 +-
 18 files changed, 188
insertions(+), 170 deletions(-) diff --git a/test_plans/l2tp_esp_coverage_test_plan.rst b/test_plans/l2tp_esp_coverage_test_plan.rst index a768684f..f9edaee9 100644 --- a/test_plans/l2tp_esp_coverage_test_plan.rst +++ b/test_plans/l2tp_esp_coverage_test_plan.rst @@ -88,7 +88,7 @@ Test Case 1: test MAC_IPV4_L2TPv3 HW checksum offload 1. DUT enable rx checksum with "--enable-rx-cksum" when start testpmd:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum 2. DUT setup csum forwarding mode:: @@ -163,7 +163,7 @@ Test Case 2: test MAC_IPV4_ESP HW checksum offload 1. DUT enable rx checksum with "--enable-rx-cksum" when start testpmd, setup csum forwarding mode:: - ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -a af:01.0 -- -i --enable-rx-cksum 2. DUT setup csum forwarding mode:: @@ -1095,7 +1095,7 @@ Test Case 14: MAC_IPV4_L2TPv3 vlan strip on + HW checksum offload check The pre-steps are as l2tp_esp_iavf_test_plan. -1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum +1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum 2. DUT create fdir rules for MAC_IPV4_L2TPv3 with queue index and mark:: @@ -1189,7 +1189,7 @@ The pre-steps are as l2tp_esp_iavf_test_plan. Test Case 15: MAC_IPV4_L2TPv3 vlan insert on + SW checksum offload check ======================================================================== -1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum +1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum 2. DUT create fdir rules for MAC_IPV4_L2TPv3 with queue index and mark:: @@ -1279,7 +1279,7 @@ Test Case 16: MAC_IPV4_ESP vlan strip on + HW checksum offload check The pre-steps are as l2tp_esp_iavf_test_plan. -1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum +1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum 2. DUT create fdir rules for MAC_IPV4_ESP with queue index and mark:: @@ -1372,7 +1372,7 @@ The pre-steps are as l2tp_esp_iavf_test_plan. Test Case 17: MAC_IPV6_NAT-T-ESP vlan insert on + SW checksum offload check =========================================================================== -1. ./x86_64-native-linuxapp-gcc/app/testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum +1. ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 6-9 -n 4 -a af:01.0 -- -i --rxq=16 --txq=16 --portmask=0x1 --nb-cores=2 --enable-rx-cksum 2. 
DUT create fdir rules for MAC_IPV6_NAT-T-ESP with queue index and mark:: diff --git a/test_plans/l3fwdacl_test_plan.rst b/test_plans/l3fwdacl_test_plan.rst index 7079308c..4ea60686 100644 --- a/test_plans/l3fwdacl_test_plan.rst +++ b/test_plans/l3fwdacl_test_plan.rst @@ -73,6 +73,13 @@ Prerequisites insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko ./usertools/dpdk-devbind.py --bind=igb_uio 04:00.0 04:00.1 +Build dpdk and examples=l3fwd-acl: + CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static + ninja -C + + meson configure -Dexamples=l3fwd-acl + ninja -C + Test Case: packet match ACL rule ================================ @@ -85,7 +92,7 @@ Ipv4 packet match source ip address 200.10.0.1 will be dropped:: Add one default rule in rule file /root/rule_ipv6.db R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one ipv4 packet with source ip address 200.10.0.1 will be dropped. @@ -100,7 +107,7 @@ Ipv4 packet match destination ip address 100.10.0.1 will be dropped:: Add one default rule in rule file /root/rule_ipv6.db R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one ipv4 packet with destination ip address 100.10.0.1 will be dropped. @@ -115,7 +122,7 @@ Ipv4 packet match source port 11 will be dropped:: Add one default rule in rule file /root/rule_ipv6.db R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one ipv4 packet with source port 11 will be dropped. @@ -130,7 +137,7 @@ Ipv4 packet match destination port 101 will be dropped:: Add one default rule in rule file /root/rule_ipv6.db R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one ipv4 packet with destination port 101 will be dropped. @@ -145,7 +152,7 @@ Ipv4 packet match protocol TCP will be dropped:: Add one default rule in rule file /root/rule_ipv6.db R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one TCP ipv4 packet will be dropped. 
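For reference, the "Build dpdk and examples=l3fwd-acl" prerequisite added above expands to the usual DPDK meson/ninja flow. A minimal sketch, assuming the build directory is named ``x86_64-native-linuxapp-gcc`` (the test plans use a configurable build-target placeholder)::

    CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static x86_64-native-linuxapp-gcc
    ninja -C x86_64-native-linuxapp-gcc
    # enable the example so the dpdk-l3fwd-acl binary is built
    meson configure -Dexamples=l3fwd-acl x86_64-native-linuxapp-gcc
    ninja -C x86_64-native-linuxapp-gcc
    # the example binary is then found under the build directory's examples/ folder
    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 \
        --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db"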
@@ -160,7 +167,7 @@ Ipv4 packet match 5-tuple will be dropped:: Add one default rule in rule file /root/rule_ipv6.db R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one TCP ipv4 packet with source ip address 200.10.0.1, @@ -180,7 +187,7 @@ Ipv6 packet match source ipv6 address 2001:0db8:85a3:08d3:1319:8a2e:0370:7344/12 Add one default rule in rule file /root/rule_ipv4.db R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one ipv6 packet with source ip address 2001:0db8:85a3:08d3:1319:8a2e:0370:7344/128 will be dropped. @@ -195,7 +202,7 @@ Ipv6 packet match destination ipv6 address 2002:0db8:85a3:08d3:1319:8a2e:0370:73 Add one default rule in rule file /root/rule_ipv4.db R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one ipv6 packet with destination ip address 2002:0db8:85a3:08d3:1319:8a2e:0370:7344/128 will be dropped. @@ -210,7 +217,7 @@ Ipv6 packet match source port 11 will be dropped:: Add one default rule in rule file /root/rule_ipv4.db R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one ipv6 packet with source port 11 will be dropped. @@ -225,7 +232,7 @@ Ipv6 packet match destination port 101 will be dropped:: Add one default rule in rule file /root/rule_ipv4.db R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one ipv6 packet with destination port 101 will be dropped. @@ -240,7 +247,7 @@ Ipv6 packet match protocol TCP will be dropped:: Add one default rule in rule file /root/rule_ipv4.db R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one TCP ipv6 packet will be dropped. 
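The rule files referenced by these commands use the l3fwd-acl text format, one rule per line: lines starting with ``@`` are ACL (drop) rules and lines starting with ``R`` are route rules, each listing source prefix, destination prefix, source port range, destination port range and protocol/mask, with route rules adding the output port. A representative ``/root/rule_ipv4.db`` for the IPv4 5-tuple drop case, reconstructed from the rules quoted in these hunks (an illustrative sketch, not necessarily the exact file used by the suite)::

    # drop TCP packets matching the full 5-tuple described in the test case
    @200.10.0.1/32 100.10.0.1/32 11 : 11 101 : 101 0x06/0xff
    # default route rule: forward all remaining traffic to port 0
    R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0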
@@ -255,7 +262,7 @@ Ipv6 packet match 5-tuple will be dropped:: Add one default rule in rule file /root/rule_ipv4.db R0.0.0.0/0 0.0.0.0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one TCP ipv6 packet with source ip address 2001:0db8:85a3:08d3:1319:8a2e:0370:7344/128, @@ -281,7 +288,7 @@ Add two exact rule as below in rule_ipv6.db:: Start l3fwd-acl and send packet:: - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one TCP ipv4 packet with source ip address 200.10.0.1, destination @@ -312,7 +319,7 @@ Add two LPM rule as below in rule_ipv6.db:: Start l3fwd-acl and send packet:: - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" Send one TCP ipv4 packet with destination ip address 1.1.1.1 will be forward to PORT0. @@ -333,7 +340,7 @@ Packet match 5-tuple will be dropped:: @2001:0db8:85a3:08d3:1319:8a2e:0370:7344/128 2002:0db8:85a3:08d3:1319:8a2e:0370:7344/101 11 : 11 101 : 101 0x06/0xff R0:0:0:0:0:0:0:0/0 0:0:0:0:0:0:0:0/0 0 : 65535 0 : 65535 0x00/0x00 0 - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" --scalar Send one TCP ipv4 packet with source ip address 200.10.0.1, destination ip address 100.10.0.1, @@ -363,7 +370,7 @@ Add two ACL rule as below in rule_ipv6.db:: Start l3fwd-acl:: - ./examples/l3fwd-acl/build/l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" + .//examples/dpdk-l3fwd-acl -c ff -n 3 -- -p 0x3 --config="(0,0,2),(1,0,3)" --rule_ipv4="/root/rule_ipv4.db" --rule_ipv6="/root/rule_ipv6.db" The l3fwdacl will not set up because of ivalid ACL rule. diff --git a/test_plans/large_vf_test_plan.rst b/test_plans/large_vf_test_plan.rst index 71e66bf9..4e2d0555 100644 --- a/test_plans/large_vf_test_plan.rst +++ b/test_plans/large_vf_test_plan.rst @@ -57,7 +57,7 @@ Prerequisites 6. 
Start testpmd with "--txq=256 --rxq=256" to setup 256 queues:: - ./dpdk-testpmd -c ff -n 4 -- -i --rxq=256 --txq=256 --total-num-mbufs=500000 + .//app/dpdk-testpmd -c ff -n 4 -- -i --rxq=256 --txq=256 --total-num-mbufs=500000 Note:: @@ -325,10 +325,10 @@ Subcase 6: negative: fail to test exceed 256 queues --------------------------------------------------- Start testpmd on VF0 with 512 queues:: - ./dpdk-testpmd -c f -n 4 -- -i --txq=512 --rxq=512 + .//app/dpdk-testpmd -c f -n 4 -- -i --txq=512 --rxq=512 or:: - ./dpdk-testpmd -c f -n 4 -- -i --txq=256 --rxq=256 + .//app/dpdk-testpmd -c f -n 4 -- -i --txq=256 --rxq=256 testpmd> port stop all testpmd> port config all rxq 512 testpmd> port config all txq 512 @@ -408,11 +408,11 @@ Bind all VFs to vfio-pci, only have 32 ports, reached maximum number of ethernet Start testpmd with queue exceed 4 queues:: - ./dpdk-testpmd -c f -n 4 -- -i --txq=8 --rxq=8 + .//app/dpdk-testpmd -c f -n 4 -- -i --txq=8 --rxq=8 or:: - ./dpdktestpmd -c f -n 4 -- -i --txq=4 --rxq=4 + .//app/dpdk-testpmd -c f -n 4 -- -i --txq=4 --rxq=4 testpmd> port stop all testpmd> port config all rxq testpmd> port config all rxq 8 diff --git a/test_plans/link_flowctrl_test_plan.rst b/test_plans/link_flowctrl_test_plan.rst index d3bd8af8..373cd39a 100644 --- a/test_plans/link_flowctrl_test_plan.rst +++ b/test_plans/link_flowctrl_test_plan.rst @@ -91,7 +91,7 @@ Prerequisites Assuming that ports ``0`` and ``2`` are connected to a traffic generator, launch the ``testpmd`` with the following arguments:: - ./build/app/testpmd -cffffff -n 3 -- -i --burst=1 --txpt=32 \ + ./build/app/dpdk-testpmd -cffffff -n 3 -- -i --burst=1 --txpt=32 \ --txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x5 The -n command is used to select the number of memory channels. diff --git a/test_plans/link_status_interrupt_test_plan.rst b/test_plans/link_status_interrupt_test_plan.rst index 32dea9a4..fe210916 100644 --- a/test_plans/link_status_interrupt_test_plan.rst +++ b/test_plans/link_status_interrupt_test_plan.rst @@ -73,11 +73,18 @@ to the device under test:: The test app need add a cmdline, ``--vfio-intr=int_x``. +Build dpdk and examples=link_status_interrupt: + CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static + ninja -C + + meson configure -Dexamples=link_status_interrupt + ninja -C + Assume port 0 and 1 are connected to the remote ports, e.g. packet generator. To run the test application in linuxapp environment with 4 lcores, 2 ports and 2 RX queues per lcore:: - $ ./link_status_interrupt -c f -- -q 2 -p 0x3 + $ .//examples/dpdk-link_status_interrupt -c f -- -q 2 -p 0x3 Also, if the ports need to be tested are different, the port mask should be changed. The lcore used to run the test application and the number of queues diff --git a/test_plans/loopback_multi_paths_port_restart_test_plan.rst b/test_plans/loopback_multi_paths_port_restart_test_plan.rst index 8418996b..ba765caf 100644 --- a/test_plans/loopback_multi_paths_port_restart_test_plan.rst +++ b/test_plans/loopback_multi_paths_port_restart_test_plan.rst @@ -45,13 +45,13 @@ Test Case 1: loopback test with packed ring mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + .//app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -86,13 +86,13 @@ Test Case 2: loopback test with packed ring non-mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + .//app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -127,13 +127,13 @@ Test Case 3: loopback test with packed ring inorder mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + .//app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -168,13 +168,13 @@ Test Case 4: loopback test with packed ring inorder non-mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + .//app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -209,13 +209,13 @@ Test Case 5: loopback test with split ring inorder mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + .//app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -250,13 +250,13 @@ Test Case 6: loopback test with split ring inorder non-mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + .//app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -291,13 +291,13 @@ Test Case 7: loopback test with split ring mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + .//app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \ -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -332,13 +332,13 @@ Test Case 8: loopback test with split ring non-mergeable path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + .//app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \ -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024 @@ -373,13 +373,13 @@ Test Case 9: loopback test with split ring vector_rx path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + .//app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 @@ -414,13 +414,13 @@ Test Case 10: loopback test with packed ring vectorized path 1. Launch vhost by below command:: rm -rf vhost-net* - ./testpmd -n 4 -l 2-4 --no-pci \ + .//app/dpdk-testpmd -n 4 -l 2-4 --no-pci \ --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ + .//app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=0,mrg_rxbuf=0,vectorized=1 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 >set fwd mac diff --git a/test_plans/loopback_multi_queues_test_plan.rst b/test_plans/loopback_multi_queues_test_plan.rst index 3d2851b8..fae367c6 100644 --- a/test_plans/loopback_multi_queues_test_plan.rst +++ b/test_plans/loopback_multi_queues_test_plan.rst @@ -45,14 +45,14 @@ Test Case 1: loopback with virtio 1.1 mergeable path using 1 queue and 8 queues 1. 
Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-2 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 @@ -76,14 +76,14 @@ Test Case 1: loopback with virtio 1.1 mergeable path using 1 queue and 8 queues 6. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-9 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-9 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac 7. Launch virtio-user by below command:: - ./testpmd -n 4 -l 10-18 \ + .//app/dpdk-testpmd -n 4 -l 10-18 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=0 \ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 @@ -105,14 +105,14 @@ Test Case 2: loopback with virtio 1.1 non-mergeable path using 1 queue and 8 que 1. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-2 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 @@ -136,14 +136,14 @@ Test Case 2: loopback with virtio 1.1 non-mergeable path using 1 queue and 8 que 6. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-9 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-9 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac 7. Launch virtio-user by below command:: - ./testpmd -n 4 -l 10-18 \ + .//app/dpdk-testpmd -n 4 -l 10-18 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=0 \ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 @@ -165,14 +165,14 @@ Test Case 3: loopback with virtio 1.0 inorder mergeable path using 1 queue and 8 1. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-2 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=1,in_order=1 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 @@ -196,14 +196,14 @@ Test Case 3: loopback with virtio 1.0 inorder mergeable path using 1 queue and 8 6. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-9 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-9 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac 7. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 10-18 \ + .//app/dpdk-testpmd -n 4 -l 10-18 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=1,in_order=1 \ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 @@ -225,14 +225,14 @@ Test Case 4: loopback with virtio 1.0 inorder non-mergeable path using 1 queue a 1. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-2 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=1 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 @@ -256,14 +256,14 @@ Test Case 4: loopback with virtio 1.0 inorder non-mergeable path using 1 queue a 6. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-9 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-9 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac 7. Launch virtio-user by below command:: - ./testpmd -n 4 -l 10-18 \ + .//app/dpdk-testpmd -n 4 -l 10-18 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=1 \ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 @@ -285,14 +285,14 @@ Test Case 5: loopback with virtio 1.0 mergeable path using 1 queue and 8 queues 1. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-2 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=1,in_order=0 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 @@ -316,14 +316,14 @@ Test Case 5: loopback with virtio 1.0 mergeable path using 1 queue and 8 queues 6. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-9 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-9 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac 7. Launch virtio-user by below command:: - ./testpmd -n 4 -l 10-18 \ + .//app/dpdk-testpmd -n 4 -l 10-18 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=1,in_order=0 \ -- -i --enable-hw-vlan-strip --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 @@ -345,14 +345,14 @@ Test Case 6: loopback with virtio 1.0 non-mergeable path using 1 queue and 8 que 1. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-2 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 @@ -376,14 +376,14 @@ Test Case 6: loopback with virtio 1.0 non-mergeable path using 1 queue and 8 que 6. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-9 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-9 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac 7. Launch virtio-user by below command:: - ./testpmd -n 4 -l 10-18 \ + .//app/dpdk-testpmd -n 4 -l 10-18 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --enable-hw-vlan-strip --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 @@ -405,14 +405,14 @@ Test Case 7: loopback with virtio 1.0 vector_rx path using 1 queue and 8 queues 1. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-2 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 @@ -436,14 +436,14 @@ Test Case 7: loopback with virtio 1.0 vector_rx path using 1 queue and 8 queues 6. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-9 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-9 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac 7. Launch virtio-user by below command:: - ./testpmd -n 4 -l 10-18 \ + .//app/dpdk-testpmd -n 4 -l 10-18 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1 \ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 @@ -465,14 +465,14 @@ Test Case 8: loopback with virtio 1.1 inorder mergeable path using 1 queue and 8 1. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-2 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 @@ -496,14 +496,14 @@ Test Case 8: loopback with virtio 1.1 inorder mergeable path using 1 queue and 8 6. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-9 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-9 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac 7. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 10-18 \ + .//app/dpdk-testpmd -n 4 -l 10-18 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=1 \ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 @@ -525,13 +525,13 @@ Test Case 9: loopback with virtio 1.1 inorder non-mergeable path using 1 queue a 1. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-2 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ + .//app/dpdk-testpmd -l 1-2 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --rx-offloads=0x10 --nb-cores=1 --txd=1024 --rxd=1024 @@ -555,13 +555,13 @@ Test Case 9: loopback with virtio 1.1 inorder non-mergeable path using 1 queue a 6. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-9 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \ + .//app/dpdk-testpmd -l 1-9 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac 7. Launch virtio-user by below command:: - ./testpmd -n 4 -l 10-18 \ + .//app/dpdk-testpmd -n 4 -l 10-18 \ --no-pci --file-prefix=virtio \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --rx-offloads=0x10 --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 @@ -583,14 +583,14 @@ Test Case 10: loopback with virtio 1.1 vectorized path using 1 queue and 8 queue 1. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-2 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-2 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \ -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac 2. Launch virtio-user by below command:: - ./testpmd -n 4 -l 5-6 \ + .//app/dpdk-testpmd -n 4 -l 5-6 \ --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --nb-cores=1 --txd=1024 --rxd=1024 @@ -614,14 +614,14 @@ Test Case 10: loopback with virtio 1.1 vectorized path using 1 queue and 8 queue 6. Launch testpmd by below command:: rm -rf vhost-net* - ./testpmd -l 1-9 -n 4 --no-pci \ + .//app/dpdk-testpmd -l 1-9 -n 4 --no-pci \ --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \ -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 testpmd>set fwd mac 7. 
Launch virtio-user by below command:: - ./testpmd -n 4 -l 10-18 \ + .//app/dpdk-testpmd -n 4 -l 10-18 \ --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \ --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \ -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024 diff --git a/test_plans/mac_filter_test_plan.rst b/test_plans/mac_filter_test_plan.rst index a9695cfc..f40ed8b1 100644 --- a/test_plans/mac_filter_test_plan.rst +++ b/test_plans/mac_filter_test_plan.rst @@ -48,7 +48,7 @@ Prerequisites Assuming that at least a port is connected to a traffic generator, launch the ``testpmd`` with the following arguments:: - ./x86_64-default-linuxapp-gcc/build/app/test-pmd/testpmd -c 0xc3 -n 3 -- -i \ + .//app/dpdk-testpmd -c 0xc3 -n 3 -- -i \ --burst=1 --rxpt=0 --rxht=0 --rxwt=0 --txpt=36 --txht=0 --txwt=0 \ --txfreet=32 --rxfreet=64 --mbcache=250 --portmask=0x3 diff --git a/test_plans/macsec_for_ixgbe_test_plan.rst b/test_plans/macsec_for_ixgbe_test_plan.rst index 660c2fd1..68c2c2c8 100644 --- a/test_plans/macsec_for_ixgbe_test_plan.rst +++ b/test_plans/macsec_for_ixgbe_test_plan.rst @@ -113,7 +113,7 @@ Test Case 1: MACsec packets send and receive 1. Start the testpmd of rx port:: - ./testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -a 0000:07:00.1 \ + .//app/dpdk-testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -a 0000:07:00.1 \ -- -i --port-topology=chained 2. Set MACsec offload on:: @@ -150,7 +150,7 @@ Test Case 1: MACsec packets send and receive 1. Start the testpmd of tx port:: - ./testpmd -c 0xf0 --socket-mem 1024,0 --file-prefix=tx -a 0000:07:00.0 \ + .//app/dpdk-testpmd -c 0xf0 --socket-mem 1024,0 --file-prefix=tx -a 0000:07:00.0 \ -- -i --port-topology=chained 2. Set MACsec offload on:: @@ -403,7 +403,7 @@ Test Case 7: performance test of MACsec offload packets Port0 connected to IXIA port5, port1 connected to IXIA port6, set port0 MACsec offload on, set fwd mac:: - ./testpmd -c 0xf --socket-mem 1024,0 -- -i \ + .//app/dpdk-testpmd -c 0xf --socket-mem 1024,0 -- -i \ --port-topology=chained testpmd> set macsec offload 0 on encrypt on replay-protect on testpmd> set fwd mac @@ -422,7 +422,7 @@ Test Case 7: performance test of MACsec offload packets with cable, connect 05:00.0 to IXIA. Bind the three ports to dpdk driver. Start two testpmd:: - ./testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -a 0000:07:00.1 \ + .//app/dpdk-testpmd -c 0xf --socket-mem 1024,0 --file-prefix=rx -a 0000:07:00.1 \ -- -i --port-topology=chained testpmd> set macsec offload 0 on encrypt on replay-protect on @@ -432,7 +432,7 @@ Test Case 7: performance test of MACsec offload packets testpmd> set macsec sa tx 0 0 0 0 00112200000000000000000000000000 testpmd> set fwd rxonly - ./testpmd -c 0xf0 --socket-mem 1024,0 --file-prefix=tx -b 0000:07:00.1 \ + .//app/dpdk-testpmd -c 0xf0 --socket-mem 1024,0 --file-prefix=tx -b 0000:07:00.1 \ -- -i --port-topology=chained testpmd> set macsec offload 1 on encrypt on replay-protect on diff --git a/test_plans/malicious_driver_event_indication_test_plan.rst b/test_plans/malicious_driver_event_indication_test_plan.rst index 1c9d244f..c97555ba 100644 --- a/test_plans/malicious_driver_event_indication_test_plan.rst +++ b/test_plans/malicious_driver_event_indication_test_plan.rst @@ -62,10 +62,10 @@ Test Case1: Check log output when malicious driver events is detected echo 1 > /sys/bus/pci/devices/0000\:18\:00.1/max_vfs 2. 
Launch PF by testpmd - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=test1 -a [pci of PF] -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 --file-prefix=test1 -a [pci of PF] -- -i 3. Launch VF by testpmd - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=lei1 -a [pci of VF] -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 --file-prefix=lei1 -a [pci of VF] -- -i > set fwd txonly > start @@ -83,11 +83,11 @@ Test Case2: Check the event counter number for malicious driver events echo 1 > /sys/bus/pci/devices/0000\:18\:00.1/max_vfs 2. Launch PF by testpmd - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=test1 -a [pci of PF] -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 --file-prefix=test1 -a [pci of PF] -- -i 3. launch VF by testpmd and start txonly mode 3 times: repeat following step 3 times - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x03 -n 4 --file-prefix=lei1 -a [pci of VF] -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x03 -n 4 --file-prefix=lei1 -a [pci of VF] -- -i > set fwd txonly > start > quit diff --git a/test_plans/metering_and_policing_test_plan.rst b/test_plans/metering_and_policing_test_plan.rst index e3fb308b..11142395 100644 --- a/test_plans/metering_and_policing_test_plan.rst +++ b/test_plans/metering_and_policing_test_plan.rst @@ -144,7 +144,7 @@ Bind them to dpdk igb_uio driver, :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -s 0x10 -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -s 0x10 -n 4 \ --vdev 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli' \ -- -i --portmask=0x10 --disable-rss testpmd> start @@ -153,7 +153,7 @@ Bind them to dpdk igb_uio driver, :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -s 0x10 -n 4 \ + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -s 0x10 -n 4 \ --vdev 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli' \ -- -i --portmask=0x10 --disable-rss testpmd> set port tm hierarchy default 1 @@ -173,7 +173,7 @@ Test Case 1: ipv4 ACL table RFC2698 GYR :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss Add rules to table, set CBS to 400 bytes, PBS to 500 bytes @@ -226,7 +226,7 @@ Test Case 2: ipv4 ACL table RFC2698 GYD :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss Add rules to table, set CBS to 400 bytes, PBS to 500 bytes :: @@ -275,7 +275,7 @@ Test Case 3: ipv4 ACL table RFC2698 GDR :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 
'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss Add rules to table, set CBS to 400 bytes, PBS to 500 bytes :: @@ -327,7 +327,7 @@ Test Case 4: ipv4 ACL table RFC2698 DYR :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss Add rules to table, set CBS to 400 bytes, PBS to 500 bytes :: @@ -378,7 +378,7 @@ Test Case 5: ipv4 ACL table RFC2698 DDD :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss Add rules to table, set CBS to 400 bytes, PBS to 500 bytes :: @@ -426,7 +426,7 @@ Test Case 6: ipv4 with same CBS and PBS GDR :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss Add rules to table, set CBS to 400 bytes, PBS to 500 bytes :: @@ -467,7 +467,7 @@ Test Case 7: ipv4 HASH table RFC2698 :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss Add rules to table, :: @@ -507,7 +507,7 @@ Test Case 8: ipv6 ACL table RFC2698 :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss Add rules to table, :: @@ -561,7 +561,7 @@ Test Case 9: multiple meter and profile :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -s 0x10 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=4 --txq=4 --portmask=0x10 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -s 0x10 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=4 --txq=4 --portmask=0x10 --disable-rss Add rules to table, set CBS to 400 bytes, PBS to 500 bytes :: @@ -664,7 +664,7 @@ Test Case 10: ipv4 RFC2698 pre-colored red by DSCP table :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 
'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss Add rules to table, set CBS to 400 bytes, PBS to 500 bytes :: @@ -755,7 +755,7 @@ Test Case 11: ipv4 RFC2698 pre-colored yellow by DSCP table :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss Add rules to table, set CBS to 400 bytes, PBS to 500 bytes :: @@ -848,7 +848,7 @@ Test Case 12: ipv4 RFC2698 pre-colored green by DSCP table :: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=/root/dpdk/drivers/net/softnic/meter_and_policing_firmware.cli' -- -i --rxq=2 --txq=2 --portmask=0x4 --disable-rss Add rules to table, set CBS to 400 bytes, PBS to 500 bytes :: diff --git a/test_plans/mtu_update_test_plan.rst b/test_plans/mtu_update_test_plan.rst index b62ec15a..5a60746a 100644 --- a/test_plans/mtu_update_test_plan.rst +++ b/test_plans/mtu_update_test_plan.rst @@ -59,7 +59,7 @@ Assuming that ports ``0`` and ``1`` of the test target are directly connected to the traffic generator, launch the ``testpmd`` application with the following arguments:: - ./build/app/testpmd -c ffffff -n 6 -- -i --portmask=0x3 --max-pkt-len=9600 \ + ./build/app/dpdk-testpmd -c ffffff -n 6 -- -i --portmask=0x3 --max-pkt-len=9600 \ --tx-offloads=0x00008000 The -n command is used to select the number of memory channels. It should match the number of memory channels on that setup. 
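With testpmd launched as above (``--max-pkt-len=9600`` only sets the startup upper bound), the MTU itself is changed from the interactive prompt at runtime. A typical sequence, with an illustrative 9000-byte value::

    testpmd> port stop all
    testpmd> port config mtu 0 9000
    testpmd> port config mtu 1 9000
    testpmd> port start all
    testpmd> show port info 0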
diff --git a/test_plans/multiple_pthread_test_plan.rst b/test_plans/multiple_pthread_test_plan.rst index 8dad22d4..9603c494 100644 --- a/test_plans/multiple_pthread_test_plan.rst +++ b/test_plans/multiple_pthread_test_plan.rst @@ -81,7 +81,7 @@ Test Case 1: Basic operation To run the application, start the testpmd with the lcores all running with threads and also the unique core assigned, command as follows:: - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='0@8,(4-5)@9' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='0@8,(4-5)@9' -n 4 -- -i Using the command to make sure the lcore are init on the correct cpu:: @@ -90,11 +90,11 @@ Using the command to make sure the lcore are init on the correct cpu:: Result as follows:: PID TID %CPU PSR COMMAND - 31038 31038 22.5 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i - 31038 31039 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i - 31038 31040 0.0 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i - 31038 31041 0.0 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i - 31038 31042 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i + 31038 31038 22.5 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i + 31038 31039 0.0 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i + 31038 31040 0.0 9 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i + 31038 31041 0.0 9 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i + 31038 31042 0.0 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i Their TIDs are for these threads as below:: @@ -134,11 +134,11 @@ Check forward configuration:: Send packets continuous:: PID TID %CPU PSR COMMAND - 31038 31038 0.6 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i - 31038 31039 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i - 31038 31040 1.5 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i - 31038 31041 1.5 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i - 31038 31042 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i + 31038 31038 0.6 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i + 31038 31039 0.0 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i + 31038 31040 1.5 9 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i + 31038 31041 1.5 9 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i + 31038 31042 0.0 8 ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i You can see TID 31040(Lcore 4), 31041(Lore 5) are running. @@ -150,7 +150,7 @@ Give examples, suppose DUT have 128 cpu core. 
Case 1:: - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='0@8,(4-5)@(8-11)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='0@8,(4-5)@(8-11)' -n 4 -- -i It means start 3 EAL thread:: @@ -159,7 +159,7 @@ It means start 3 EAL thread:: Case 2:: - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='1,2@(0-4,6),(3-4,6)@5,(7,8)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='1,2@(0-4,6),(3-4,6)@5,(7,8)' -n 4 -- -i It means start 7 EAL thread:: @@ -171,7 +171,7 @@ It means start 7 EAL thread:: Case 3:: - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,CONFIG_RTE_MAX_LCORE-1)@(4,5)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,CONFIG_RTE_MAX_LCORE-1)@(4,5)' -n 4 -- -i (default CONFIG_RTE_MAX_LCORE=128). It means start 2 EAL thread:: @@ -180,7 +180,7 @@ It means start 2 EAL thread:: Case 4:: - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,64-66)@(4,5)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,64-66)@(4,5)' -n 4 -- -i It means start 4 EAL thread:: @@ -188,7 +188,7 @@ It means start 4 EAL thread:: Case 5:: - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2-5,6,7-9' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='2-5,6,7-9' -n 4 -- -i It means start 8 EAL thread:: @@ -203,7 +203,7 @@ It means start 8 EAL thread:: Case 6:: - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,(3-5)@3' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='2,(3-5)@3' -n 4 -- -i It means start 4 EAL thread:: @@ -212,7 +212,7 @@ It means start 4 EAL thread:: Case 7:: - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,7-4)@(4,5)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,7-4)@(4,5)' -n 4 -- -i It means start 5 EAL thread:: @@ -224,19 +224,19 @@ Test Case 3: Negative Test -------------------------- Input invalid commands to make sure the commands can't work:: - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0-,4-7)@(4,5)' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(-1,4-7)@(4,5)' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7-9)@(4,5)' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,abcd)@(4,5)' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(1-,5)' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(-1,5)' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(4,5-8-9)' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(abc,5)' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(4,xyz)' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)=(8,9)' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,3 at 4,(0-1,,4))' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='[0-,4-7]@(4,5)' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0-,4-7)@[4,5]' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='3-4 at 3,2 at 5-6' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,,3''2--3' -n 4 -- -i - ./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,,,3''2--3' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0-,4-7)@(4,5)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(-1,4-7)@(4,5)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7-9)@(4,5)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd 
--lcores='(0,abcd)@(4,5)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)@(1-,5)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)@(-1,5)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)@(4,5-8-9)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)@(abc,5)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)@(4,xyz)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0,4-7)=(8,9)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='2,3 at 4,(0-1,,4))' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='[0-,4-7]@(4,5)' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='(0-,4-7)@[4,5]' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='3-4 at 3,2 at 5-6' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='2,,3''2--3' -n 4 -- -i + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --lcores='2,,,3''2--3' -n 4 -- -i diff --git a/test_plans/ptpclient_test_plan.rst b/test_plans/ptpclient_test_plan.rst index 7781bffc..31ba2b15 100644 --- a/test_plans/ptpclient_test_plan.rst +++ b/test_plans/ptpclient_test_plan.rst @@ -45,7 +45,11 @@ Assume one port is connected to the tester and "linuxptp.x86_64" has been installed on the tester. Case Config:: - For support IEEE1588, need to set "CONFIG_RTE_LIBRTE_IEEE1588=y" in ./config/common_base and re-build DPDK. + + Meson: For support IEEE1588, need to execute "sed -i '$a\#define RTE_LIBRTE_IEEE1588 1' config/rte_config.h", + and re-build DPDK. + $ CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static + $ ninja -C The sample should be validated on Forville, Niantic and i350 Nics. @@ -57,7 +61,7 @@ Start ptp server on tester with IEEE 802.3 network transport:: Start ptp client on DUT and wait few seconds:: - ./examples/ptpclient/build/ptpclient -c f -n 3 -- -T 0 -p 0x1 + .//examples/dpdk-ptpclient -c f -n 3 -- -T 0 -p 0x1 Check that output message contained T1,T2,T3,T4 clock and time difference between master and slave time is about 10us in niantic, 20us in Fortville, @@ -79,7 +83,7 @@ Start ptp server on tester with IEEE 802.3 network transport:: Start ptp client on DUT and wait few seconds:: - ./examples/ptpclient/build/ptpclient -c f -n 3 -- -T 1 -p 0x1 + .//examples/dpdk-ptpclient -c f -n 3 -- -T 1 -p 0x1 Make sure DUT system time has been changed to same as tester. Check that output message contained T1,T2,T3,T4 clock and time difference diff --git a/test_plans/ptype_mapping_test_plan.rst b/test_plans/ptype_mapping_test_plan.rst index d157b670..fdabd191 100644 --- a/test_plans/ptype_mapping_test_plan.rst +++ b/test_plans/ptype_mapping_test_plan.rst @@ -61,7 +61,7 @@ Add print info to testpmd for case:: Start testpmd, enable rxonly and verbose mode:: - ./testpmd -c f -n 4 -- -i --port-topology=chained + .//app/dpdk-testpmd -c f -n 4 -- -i --port-topology=chained Test Case 1: Get ptype mapping ============================== diff --git a/test_plans/qinq_filter_test_plan.rst b/test_plans/qinq_filter_test_plan.rst index 7b0a8d14..488596ed 100644 --- a/test_plans/qinq_filter_test_plan.rst +++ b/test_plans/qinq_filter_test_plan.rst @@ -58,7 +58,7 @@ Testpmd configuration - 4 RX/TX queues per port #. 
set up testpmd with fortville NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss #. enable qinq:: @@ -91,7 +91,7 @@ Testpmd configuration - 4 RX/TX queues per port #. set up testpmd with fortville NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --disable-rss #. enable qinq:: @@ -134,7 +134,7 @@ Test Case 3: qinq packet filter to VF queues #. set up testpmd with fortville PF NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4 #. enable qinq:: @@ -160,7 +160,7 @@ Test Case 3: qinq packet filter to VF queues #. set up testpmd with fortville VF0 NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4 #. PMD fwd only receive the packets:: @@ -176,7 +176,7 @@ Test Case 3: qinq packet filter to VF queues #. set up testpmd with fortville VF1 NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4 #. PMD fwd only receive the packets:: @@ -211,7 +211,7 @@ Test Case 4: qinq packet filter with different tpid #. set up testpmd with fortville PF NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -a 81:00.0 -- -i --rxq=4 --txq=4 #. enable qinq:: @@ -241,7 +241,7 @@ Test Case 4: qinq packet filter with different tpid #. set up testpmd with fortville VF0 NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -a 81:02.0 -- -i --rxq=4 --txq=4 #. PMD fwd only receive the packets:: @@ -257,7 +257,7 @@ Test Case 4: qinq packet filter with different tpid #. set up testpmd with fortville VF1 NICs:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -a 81:02.1 -- -i --rxq=4 --txq=4 #. PMD fwd only receive the packets:: diff --git a/test_plans/qos_api_test_plan.rst b/test_plans/qos_api_test_plan.rst index f8a77d4c..9102907e 100644 --- a/test_plans/qos_api_test_plan.rst +++ b/test_plans/qos_api_test_plan.rst @@ -90,7 +90,7 @@ Test Case: dcb 4 tc queue mapping ================================= 1. 
Start testpmd and set DCB:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip testpmd> port stop all testpmd> port config 0 dcb vt off 4 pfc off testpmd> port config 1 dcb vt off 4 pfc off @@ -115,7 +115,7 @@ Test Case: dcb 8 tc queue mapping ================================= 1. Start testpmd and set DCB:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 23-31 -n 4 --master-lcore=23 -- -i --nb-cores=8 --rxq=8 --txq=8 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 23-31 -n 4 --master-lcore=23 -- -i --nb-cores=8 --rxq=8 --txq=8 --rss-ip testpmd> port stop all testpmd> port config 0 dcb vt off 8 pfc off testpmd> port config 1 dcb vt off 8 pfc off @@ -148,7 +148,7 @@ Test Case: shaping 1 port 4 tc ============================== 1. Start testpmd and set DCB:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip testpmd> port stop all testpmd> port config 0 dcb vt off 4 pfc off testpmd> port config 1 dcb vt off 4 pfc off @@ -191,7 +191,7 @@ Test Case: shaping 1 port 8 tc =============================== 1. Start testpmd and set DCB:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 23-31 -n 4 --master-lcore=23 -- -i --nb-cores=8 --rxq=8 --txq=8 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 23-31 -n 4 --master-lcore=23 -- -i --nb-cores=8 --rxq=8 --txq=8 --rss-ip testpmd> port stop all testpmd> port config 0 dcb vt off 8 pfc off testpmd> port config 1 dcb vt off 8 pfc off @@ -246,7 +246,7 @@ Test Case: shaping for port =========================== 1. Start testpmd:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 23-27 -n 4 --master-lcore=23 -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip testpmd> port stop 1 1. Add private shaper 0:: @@ -273,7 +273,7 @@ Test Case: dcb 4 tc queue mapping ================================= 1. Start testpmd and set DCB:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 3-7 -n 4 --master-lcore=3 -- -i --nb-cores=4 --rxq=4 --txq=4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-7 -n 4 --master-lcore=3 -- -i --nb-cores=4 --rxq=4 --txq=4 --disable-rss testpmd> vlan set filter off 0 testpmd> vlan set filter off 1 testpmd> port stop all @@ -300,7 +300,7 @@ Test Case: dcb 8 tc queue mapping ================================= 1. Start testpmd and set DCB:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 3-11 -n 4 --master-lcore=3 -- -i --nb-cores=8 --rxq=8 --txq=8 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-11 -n 4 --master-lcore=3 -- -i --nb-cores=8 --rxq=8 --txq=8 --disable-rss testpmd> vlan set filter off 0 testpmd> vlan set filter off 1 testpmd> port stop all @@ -335,7 +335,7 @@ Test Case: shaping for queue with 4 tc ====================================== 1. 
Start testpmd and set DCB:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 3-7 -n 4 --master-lcore=3 -- -i --nb-cores=4 --rxq=4 --txq=4 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-7 -n 4 --master-lcore=3 -- -i --nb-cores=4 --rxq=4 --txq=4 --disable-rss testpmd> vlan set filter off 0 testpmd> vlan set filter off 1 testpmd> port stop all @@ -381,7 +381,7 @@ Test Case: shaping for queue with 8 tc ====================================== 1. Start testpmd and set DCB:: - ./x86_64-native-linuxapp-gcc/app/testpmd -l 3-11 -n 4 --master-lcore=3 -- -i --nb-cores=8 --rxq=8 --txq=8 --disable-rss + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-11 -n 4 --master-lcore=3 -- -i --nb-cores=8 --rxq=8 --txq=8 --disable-rss testpmd> vlan set filter off 0 testpmd> vlan set filter off 1 testpmd> port stop all diff --git a/test_plans/queue_region_test_plan.rst b/test_plans/queue_region_test_plan.rst index 1db71094..7e4c9ca6 100644 --- a/test_plans/queue_region_test_plan.rst +++ b/test_plans/queue_region_test_plan.rst @@ -79,7 +79,7 @@ Prerequisites 4. start the testpmd:: - ./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16 testpmd> port config all rss all testpmd> set fwd rxonly testpmd> set verbose 1
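
As a quick sanity check of the rxonly/verbose setup above (a sketch only, not a step from the original plan), one packet can be sent from the tester and the receiving queue read back from testpmd's verbose output. The tester interface name and the DUT port 0 MAC address used here are placeholders::

    python3 -c "from scapy.all import Ether, IP, UDP, sendp; sendp(Ether(dst='00:00:00:00:01:00')/IP(src='10.0.0.1', dst='10.0.0.2')/UDP(sport=1024, dport=1025), iface='tester_iface0')"

With "set verbose 1" active, testpmd prints a line such as "port 0/queue N: received 1 packets" for the packet, which shows the RSS queue selected before any queue region rules are configured.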
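
All of the commands in these updated plans point at meson-built binaries (app/dpdk-testpmd, examples/dpdk-ptpclient). For reference, a minimal build sketch is shown below; the build directory name x86_64-native-linuxapp-gcc is assumed only so that it matches the paths used above, and -Dexamples=ptpclient is added here as an assumption in order to build the example used by the ptpclient plan::

    # configure and build DPDK with meson/ninja; the build directory name is
    # chosen to match the ./x86_64-native-linuxapp-gcc/... paths in the plans
    CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static \
        -Dexamples=ptpclient x86_64-native-linuxapp-gcc
    ninja -C x86_64-native-linuxapp-gcc
    # binaries referenced by the plans:
    #   x86_64-native-linuxapp-gcc/app/dpdk-testpmd
    #   x86_64-native-linuxapp-gcc/examples/dpdk-ptpclient

For the ptpclient plan, RTE_LIBRTE_IEEE1588 additionally has to be defined in config/rte_config.h (see the sed command in that plan) before the build is run.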