From patchwork Thu Jun 8 18:27:41 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Huang, ZhiminX"
X-Patchwork-Id: 128420
From: Zhimin Huang
To: dts@dpdk.org
Cc: Zhimin Huang
Subject: [dts][PATCH V1 5/6] test_plans/ice_kernelpf_dcf_test_plan: add new plan to cover most of the DCF PMD functions
Date: Thu, 8 Jun 2023 18:27:41 +0000
Message-Id: <20230608182742.360594-6-zhiminx.huang@intel.com>
In-Reply-To: <20230608182742.360594-1-zhiminx.huang@intel.com>
References: <20230608182742.360594-1-zhiminx.huang@intel.com>
List-Id: test suite reviews and discussions

Add a new test plan for the ice_kernelpf_dcf test suite.

Signed-off-by: Zhimin Huang
---
 test_plans/ice_kernelpf_dcf_test_plan.rst | 596 ++++++++++++++++++++++
 1 file changed, 596 insertions(+)
 create mode 100644 test_plans/ice_kernelpf_dcf_test_plan.rst

diff --git a/test_plans/ice_kernelpf_dcf_test_plan.rst b/test_plans/ice_kernelpf_dcf_test_plan.rst
new file mode 100644
index 00000000..734c092a
--- /dev/null
+++ b/test_plans/ice_kernelpf_dcf_test_plan.rst
@@ -0,0 +1,596 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2023 Intel Corporation
+
+=========================
+Kernel PF + DCF test plan
+=========================
+
+Since DPDK 22.07, DCF supports the full PMD feature set.
+This document provides the plan for testing the DCF PMD functions of
+Intel® Ethernet 800 Series NICs.
+
+Requirement
+===========
+1. Hardware:
+
+   Intel® Ethernet 800 Series: E810-XXVDA4/E810-CQ, etc.
+
+2. Software:
+
+   dpdk: http://dpdk.org/git/dpdk
+   scapy: http://www.secdev.org/projects/scapy/
+
+3. Create 1 VF from each of 2 PFs on the DUT, set a MAC address for the first VF,
+   and turn trust mode on so the VFs can act as DCFs::
+
+    echo 1 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs
+    echo 1 > /sys/bus/pci/devices/0000\:18\:00.1/sriov_numvfs
+    ip link set enp24s0f0 vf 0 mac 00:11:22:33:44:55
+    ip link set enp24s0f0 vf 0 trust on
+    ip link set enp24s0f1 vf 0 trust on
+
+4. Bind the VFs to vfio-pci::
+
+    modprobe vfio-pci
+    usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:18:01.0 0000:18:09.0
+
+5. Run the DCF cases on the host or in QEMU, and launch testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:18:01.0,cap=dcf -a 0000:18:09.0,cap=dcf -- -i --rxq=16 --txq=16
+    testpmd> set fwd rxonly
+    testpmd> set verbose 1
+    testpmd> start
+
+.. note::
+
+   1. The kernel driver enables MAC and VLAN anti-spoofing for VFs by default.
+      It can be toggled per VF with: ip link set <pf> vf <id> spoofchk {off|on}
+
+   2. The vf-vlan-pruning option in ethtool --set-priv-flags must be enabled for a
+      VF to receive only packets carrying a matching VLAN tag.
+
+Test case: DCF basic RX/TX
+==========================
+Set rxonly forward mode and start testpmd.
+
+Send 100 random packets from the tester, and check that the packets are received.
+
+Set txonly forward mode and start testpmd.
+
+Check that the tester receives the packets generated by the application.
+
+Test case: DCF promisc
+======================
+Ensure kernel trust mode is enabled::
+
+    ip link set $PF_INTF vf 0 trust on
+
+Start testpmd, set mac forward mode and enable verbose output.
+
+Use scapy to send random packets with the current VF0 MAC, and verify that the
+packets are received and forwarded by the DCF.
+
+Use scapy to send random packets with a wrong MAC to VF0, and verify that the
+packets are received and forwarded by the DCF.
+
+Disable promisc mode::
+
+    testpmd> set promisc all off
+
+Use scapy to send random packets with the current VF0 MAC, and verify that the
+packets are received and forwarded by the DCF.
+
+Use scapy to send random packets with a wrong MAC to VF0, and verify that the
+packets are not received or forwarded by the DCF.
+
+Enable promisc mode::
+
+    testpmd> set promisc all on
+
+Use scapy to send random packets with the current VF0 MAC, and verify that the
+packets are received and forwarded by the DCF.
+
+Use scapy to send random packets with a wrong MAC to VF0, and verify that the
+packets are received and forwarded by the DCF.
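The "matched MAC" and "wrong MAC" sends in the promisc case can be scripted. The sketch below is illustrative (the wrong MAC and source MAC are made-up values, not part of the DTS suite) and builds the two batches of frames as raw bytes; in practice they would be sent with scapy's ``sendp`` on the tester port:

```python
import random
import struct

def mac_bytes(mac: str) -> bytes:
    """Convert 'aa:bb:cc:dd:ee:ff' notation into 6 raw bytes."""
    return bytes(int(b, 16) for b in mac.split(":"))

def build_frame(dst_mac: str, src_mac: str, payload_len: int = 60) -> bytes:
    """Build a minimal Ethernet II frame with a random payload."""
    payload = bytes(random.randrange(256) for _ in range(payload_len))
    # dst MAC (6) + src MAC (6) + EtherType 0x0800 (2) + payload
    return mac_bytes(dst_mac) + mac_bytes(src_mac) + struct.pack("!H", 0x0800) + payload

VF0_MAC = "00:11:22:33:44:55"    # MAC set on VF 0 in the requirements section
WRONG_MAC = "00:11:22:33:44:66"  # any MAC the VF does not own (illustrative)

matched = [build_frame(VF0_MAC, "02:00:00:00:00:01") for _ in range(100)]
unmatched = [build_frame(WRONG_MAC, "02:00:00:00:00:01") for _ in range(100)]
# With promisc on, both batches should be forwarded by the DCF;
# with promisc off, only the `matched` batch should be.
```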
+
+Test case: DCF multicast
+========================
+
+Start testpmd.
+
+Disable promisc and multicast mode::
+
+    testpmd> set promisc all off
+    testpmd> set allmulti all off
+    testpmd> start
+
+Send a packet with the current VF0 MAC, and check that the DCF receives the packet.
+
+Send a packet with the multicast MAC 01:80:C2:00:00:08, and check that the DCF does
+not receive the packet.
+
+Enable multicast mode::
+
+    testpmd> set allmulti all on
+
+Configure the multicast address::
+
+    testpmd> mcast_addr add 0 01:80:C2:00:00:08
+
+Send a packet with the current VF0 MAC, and check that the DCF receives the packet.
+
+Send a packet with the multicast MAC 01:80:C2:00:00:08, and check that the DCF
+receives the packet.
+
+Test case: DCF broadcast
+========================
+Disable promisc mode::
+
+    testpmd> set promisc all off
+    testpmd> start
+
+Send a packet with the broadcast address ff:ff:ff:ff:ff:ff, and check that the DCF
+receives the packet.
+
+Test case: DCF unicast
+======================
+Disable promisc and multicast mode::
+
+    testpmd> set promisc all off
+    testpmd> set allmulti all off
+    testpmd> start
+
+Send a packet with the unicast address, and check that the DCF receives the packet.
+
+Test case: DCF mac add filter
+=============================
+Disable promisc, enable CRC strip and add a MAC address::
+
+    testpmd> port stop all
+    testpmd> port config all crc-strip on
+    testpmd> port start all
+    testpmd> set promisc all off
+    testpmd> mac_addr add 0 00:11:22:33:44:55
+
+Use scapy to send 100 random packets with the current VF0 MAC, and verify that the
+packets are received by one DCF and forwarded to the other DCF correctly.
+
+Use scapy to send 100 random packets with the newly added VF0 MAC, and verify that
+the packets are received by one DCF and forwarded to the other DCF correctly.
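The expected receive behaviour across the multicast, broadcast, unicast and MAC-filter cases can be summarized as one small predicate. This is a sketch for reasoning about the test matrix, not DCF or kernel-driver code, and the filter contents below are illustrative:

```python
def should_receive(dst_mac: str, filters: set, promisc: bool = False,
                   allmulti: bool = False) -> bool:
    """Model of the per-VF receive decision exercised by these cases."""
    if promisc:
        return True                      # promisc accepts everything
    if dst_mac == "ff:ff:ff:ff:ff:ff":
        return True                      # broadcast is always accepted
    first_octet = int(dst_mac.split(":")[0], 16)
    if first_octet & 1:                  # I/G bit set -> multicast
        return allmulti or dst_mac in filters
    return dst_mac in filters            # unicast: exact filter match

# Illustrative filter set: the VF0 MAC plus the address added with mac_addr add.
filters = {"00:11:22:33:44:55"}
```

For example, the multicast case above corresponds to `should_receive("01:80:c2:00:00:08", filters)` flipping from False to True once `allmulti` is enabled or the address is added with `mcast_addr add`.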
+
+Remove the added MAC address::
+
+    testpmd> mac_addr remove 0 00:11:22:33:44:55
+
+Use scapy to send 100 random packets with the removed MAC to VF0, and verify that
+the packets are neither received by one DCF nor forwarded to the other DCF.
+
+Use scapy to send 100 random packets with a wrong MAC to VF0, and verify that the
+packets are neither received by one DCF nor forwarded to the other DCF.
+
+Test case: DCF vlan insertion
+=============================
+
+Disable vlan strip::
+
+    testpmd> vlan set strip off 0
+
+Set vlan id 20 for tx_vlan::
+
+    testpmd> port stop all
+    testpmd> tx_vlan set 0 20
+    testpmd> vlan set filter on 0
+    testpmd> rx_vlan add 20 0
+    testpmd> port start all
+    testpmd> set fwd mac
+    testpmd> start
+
+Send a normal packet::
+
+    p=Ether(dst="00:01:23:45:67:89")/IP()/Raw(load='X'*30)
+
+Verify that the packet sent out from the DCF contains the vlan tag 20.
+
+Test case: DCF vlan strip
+=========================
+
+Enable vlan strip::
+
+    testpmd> vlan set filter on 0
+    testpmd> rx_vlan add 20 0
+    testpmd> vlan set strip on 0
+    testpmd> set fwd mac
+    testpmd> set verbose 1
+    testpmd> start
+
+Send packets with a vlan tag::
+
+    p=Ether(dst="00:01:23:45:67:89")/Dot1Q(vlan=20)/IP()/Raw(load='X'*30)
+
+Check that the packet sent out from the DCF does not contain the vlan tag.
+
+Disable vlan strip::
+
+    testpmd> vlan set strip off 0
+
+Send packets with a vlan tag::
+
+    Ether(dst="00:01:23:45:67:89")/Dot1Q(vlan=20)/IP()/Raw(load='X'*30)
+
+Check that the packet sent out from the DCF contains the vlan tag.
+
+Test case: DCF vlan filter
+==========================
+
+Enable vlan filter::
+
+    testpmd> vlan set filter on 0
+    testpmd> rx_vlan add 20 0
+    testpmd> vlan set strip off 0
+    testpmd> set promisc all off
+    testpmd> set fwd mac
+    testpmd> set verbose 1
+    testpmd> start
+
+Send packets with a matched vlan tag::
+
+    p=Ether(dst="00:01:23:45:67:89")/Dot1Q(vlan=20)/IP()/Raw(load='X'*30)
+
+Check that the packets are received and forwarded with the vlan tag.
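The insertion/strip behaviour verified in the vlan cases above can be illustrated with plain byte manipulation (an assumed 802.1Q frame layout, not PMD code): inserting a tag splices TPID 0x8100 plus the 2-byte TCI after the MAC addresses, and stripping removes those 4 bytes again:

```python
import struct
from typing import Optional

TPID = 0x8100  # 802.1Q tag protocol identifier

def insert_vlan(frame: bytes, vid: int, prio: int = 0) -> bytes:
    """Insert an 802.1Q tag after the dst/src MAC addresses (first 12 bytes)."""
    tci = (prio << 13) | (vid & 0x0FFF)
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

def strip_vlan(frame: bytes) -> bytes:
    """Remove the 802.1Q tag if present, as 'vlan set strip on' would."""
    if struct.unpack("!H", frame[12:14])[0] == TPID:
        return frame[:12] + frame[16:]
    return frame

def vlan_id(frame: bytes) -> Optional[int]:
    """Return the vlan id carried by the frame, or None if untagged."""
    if struct.unpack("!H", frame[12:14])[0] == TPID:
        return struct.unpack("!H", frame[14:16])[0] & 0x0FFF
    return None

# Untagged IPv4 frame with a 30-byte payload, mirroring the scapy packets above.
frame = bytes(12) + struct.pack("!H", 0x0800) + b"X" * 30
tagged = insert_vlan(frame, 20)
```

In the insertion case the tester expects `vlan_id` of the forwarded frame to be 20; in the strip case it expects the forwarded frame to equal the untagged original.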
+
+Send packets with an unmatched vlan tag::
+
+    p=Ether(dst="00:01:23:45:67:89")/Dot1Q(vlan=30)/IP()/Raw(load='X'*30)
+
+Check that the packets are not received or forwarded.
+
+Disable vlan filter::
+
+    testpmd> vlan set filter off 0
+
+Send packets with a vlan tag::
+
+    Ether(dst="00:01:23:45:67:89")/Dot1Q(vlan=20)/IP()/Raw(load='X'*30)
+
+If vf-vlan-pruning is on, check that the packets are received and forwarded with
+the vlan tag.
+
+If vf-vlan-pruning is off or the driver does not have this option, check that the
+packets are not received or forwarded.
+
+Test case: DCF vlan promisc
+===========================
+
+Enable promisc and disable vlan filter::
+
+    testpmd> port stop all
+    testpmd> set promisc all on
+    testpmd> set verbose 1
+    testpmd> vlan set filter off 0
+    testpmd> vlan set strip off 0
+    testpmd> set fwd mac
+    testpmd> port start all
+    testpmd> start
+
+Send 10 random packets with a vlan tag::
+
+    Ether(dst="00:01:23:45:67:89",type=0x8100)/Dot1Q(vlan=100,type=0x0800)/IP(src="196.222.232.1")/("X"*480)
+    ...
+
+Check that the packets are received and forwarded.
+
+Send 10 random packets without a vlan tag::
+
+    Ether(dst="00:01:23:45:67:89")/IP(src="196.222.232.1")/("X"*480)
+    ...
+
+Check that the packets are received and forwarded.
+
+Test case: DCF add pvid base rx
+===============================
+
+Add a pvid on the DCF from the PF device::
+
+    ip link set $PF_INTF vf 0 vlan 2
+
+Check that the PF device shows the correct pvid setting::
+
+    ip link show ens259f0
+    ...
+    vf 0 MAC 00:00:00:00:00:00, vlan 2, spoof checking on, link-state auto
+
+Start testpmd.
+
+Send a packet with the same vlan id, and check that the DCF receives it.
+
+Send a packet without a vlan tag, and check that the DCF does not receive it.
+
+Send a packet with a wrong vlan id, and check that the DCF does not receive it.
+
+Remove the added vlan from the PF device::
+
+    ip link set $PF_INTF vf 0 vlan 0
+
+Restart testpmd, send a packet without a vlan tag, and check that the DCF
+receives it.
+
+Send a packet with vlan id 0, and check that the DCF receives it.
+
+Send packets with random vlan ids in 1-4095.
+
+If vf-vlan-pruning is on, check that the packets are received and forwarded.
+
+If vf-vlan-pruning is off or the driver does not have this option, check that the
+packets are not received or forwarded.
+
+Send packets with vlan id 0 and without a vlan tag, and check that both are still
+received.
+
+Test case: DCF add pvid base tx
+===============================
+Add a pvid on the DCF from the PF device::
+
+    ip link set $PF_INTF vf 0 vlan 2
+
+Start testpmd with mac forward mode::
+
+    testpmd> set fwd mac
+    testpmd> start
+
+Send a packet from tester port1, and check that tester port0 receives the packet
+with the configured vlan 2.
+
+Test case: DCF vlan rx combination
+==================================
+Start testpmd with rxonly mode::
+
+    testpmd> set fwd rxonly
+    testpmd> set verbose 1
+    testpmd> start
+
+Send a packet without a vlan tag, and check that it is received.
+
+Send a packet with vlan 0, and check that it is received.
+
+Add a vlan on the DCF from the VF driver::
+
+    testpmd> rx_vlan add 1 0
+
+Send packets with vlan 0/1, and check that they are received.
+
+Rerun the previous two steps with a random vlan id and with the max vlan id 4095.
+
+Remove the vlan on the DCF::
+
+    testpmd> rx_vlan rm 1 0
+
+Send a packet with vlan 0, and check that it is received.
+
+Send a packet without a vlan tag, and check that it is received.
+
+Send a packet with vlan 1, and check that it is received.
+
+Test case: DCF RSS
+==================
+
+Start testpmd with multiple queues, for example::
+
+    .//app/dpdk-testpmd -c f -n 4 -- -i --txq=4 --rxq=4
+
+Show the RSS RETA configuration::
+
+    testpmd> show port 0 rss reta 64 (0xffffffffffffffff)
+
+    RSS RETA configuration: hash index=0, queue=0
+    RSS RETA configuration: hash index=1, queue=1
+    RSS RETA configuration: hash index=2, queue=2
+    RSS RETA configuration: hash index=3, queue=3
+    ...
+    RSS RETA configuration: hash index=60, queue=0
+    RSS RETA configuration: hash index=61, queue=1
+    RSS RETA configuration: hash index=62, queue=2
+    RSS RETA configuration: hash index=63, queue=3
+
+Config the hash reta table::
+
+    testpmd> port config 0 rss reta (0,3)
+    testpmd> port config 0 rss reta (1,2)
+    testpmd> port config 0 rss reta (2,1)
+    testpmd> port config 0 rss reta (3,0)
+
+Check that the RSS RETA configuration has changed::
+
+    testpmd> show port 0 rss reta 64 (0xffffffffffffffff)
+
+    RSS RETA configuration: hash index=0, queue=3
+    RSS RETA configuration: hash index=1, queue=2
+    RSS RETA configuration: hash index=2, queue=1
+    RSS RETA configuration: hash index=3, queue=0
+
+Enable IP/TCP/UDP RSS::
+
+    testpmd> port config all rss (all|ip|tcp|udp|sctp|ether|port|vxlan|geneve|nvgre|none)
+
+Send IP/TCP/UDP packets of different flow types to the DCF port, and check that the
+packets are received on the queues configured above.
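The queue a flow lands on follows from the RSS hash printed in verbose output and the RETA: under the usual model, queue = reta[hash % reta_size]. A small sketch of that check (the example hash value is made up for illustration):

```python
# Default RETA for 4 queues: index i -> queue i % 4, matching the output of
# "show port 0 rss reta" before reconfiguration.
RETA_SIZE = 64
reta = [i % 4 for i in range(RETA_SIZE)]

# Apply the "port config 0 rss reta (idx,queue)" updates from the test.
for idx, queue in [(0, 3), (1, 2), (2, 1), (3, 0)]:
    reta[idx] = queue

def expected_queue(rss_hash: int) -> int:
    """Queue a packet should arrive on, given its reported RSS hash."""
    return reta[rss_hash % RETA_SIZE]

# e.g. a packet whose reported hash is 0x1a2b3c40 maps to RETA index
# 0x1a2b3c40 % 64 = 0, which the test redirected to queue 3.
```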
+
+Test case: DCF RSS hash key
+===========================
+
+Start testpmd with multiple queues, for example::
+
+    .//app/dpdk-testpmd -c f -n 4 -- -i --txq=4 --rxq=4
+
+Show the port RSS hash key::
+
+    testpmd> show port 0 rss-hash key
+
+Set rxonly forward mode, enable verbose output and start testpmd::
+
+    testpmd> set fwd rxonly
+    testpmd> set verbose 1
+    testpmd> start
+
+Send ipv4 packets and note the RSS hash value::
+
+    p=Ether(dst="56:0A:EC:50:A4:28")/IP(src="1.2.3.4")/Raw(load='X'*30)
+
+Update the ipv4 hash key with a different key::
+
+    testpmd> port config 0 rss-hash-key ipv4 1b9d58a4b961d9cd1c56ad1621c3ad51632c16a5d16c21c3513d132c135d132c13ad1531c23a51d6ac49879c499d798a7d949c8a
+
+Show the port RSS hash key, and check that the key is the same as the configured
+key::
+
+    testpmd> show port 0 rss-hash key
+    RSS functions:
+     all ipv4 ipv6 ip
+    RSS key:
+    1B9D58A4B961D9CD1C56AD1621C3AD51632C16A5D16C21C3513D132C135D132C13AD1531C23A51D6AC49879C499D798A7D949C8A
+
+Send the same ipv4 packets, and check that the RSS hash value is different::
+
+    p=Ether(dst="56:0A:EC:50:A4:28")/IP(src="1.2.3.4")/Raw(load='X'*30)
+
+Test case: DCF rxq txq number inconsistent
+==========================================
+
+Start testpmd with rxq not equal to txq::
+
+    .//app/dpdk-testpmd -l 1-9 -n 2 -- -i --rxq=4 --txq=8
+
+Set rxonly forward mode, enable verbose output and start testpmd::
+
+    testpmd> set fwd rxonly
+    testpmd> set verbose 1
+    testpmd> start
+
+Send packets of different hash types with different keywords, then check that the
+rx port receives the packets on different queues::
+
+    sendp([Ether(dst="00:01:23:45:67:89")/IP(src="192.168.0.4", dst=RandIP())], iface="eth3")
+
+Check that the total Rx packets over all the RxQs equal the total HW Rx packets::
+
+    testpmd> show fwd stats all
+
+    ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
+    RX-packets: 252            TX-packets: 0              TX-dropped: 0
+
+    ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1 -------
+    RX-packets: 257            TX-packets: 0              TX-dropped: 0
+
+    ------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 0/Queue= 2 -------
+    RX-packets: 259            TX-packets: 0              TX-dropped: 0
+
+    ------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 0/Queue= 3 -------
+    RX-packets: 256            TX-packets: 0              TX-dropped: 0
+
+    ---------------------- Forward statistics for port 0 ----------------------
+    RX-packets: 1024           RX-dropped: 0             RX-total: 1024
+    TX-packets: 0              TX-dropped: 0             TX-total: 0
+    ----------------------------------------------------------------------------
+
+    +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
+    RX-packets: 1024           RX-dropped: 0             RX-total: 1024
+    TX-packets: 0              TX-dropped: 0             TX-total: 0
+    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Test case: DCF port stop/start
+==============================
+
+Stop the port::
+
+    testpmd> port stop all
+
+Start the port::
+
+    testpmd> port start all
+
+Repeat the stop and start above 10 times.
+
+Send packets from the tester.
+
+Check that the DCF receives the packets.
+
+Test case: DCF statistics reset
+===============================
+
+Check the port stats::
+
+    testpmd> show port stats all
+
+Clear the port stats::
+
+    testpmd> clear port stats all
+
+Check the DCF port stats: RX-packets and TX-packets are 0.
+
+Set mac forward mode and enable verbose output.
+
+Send 100 packets from the tester.
+
+Check the DCF port stats: RX-packets and TX-packets are 100.
+
+Clear the DCF port stats.
+
+Check the DCF port stats: RX-packets and TX-packets are 0.
+
+Test case: DCF information
+==========================
+
+Start testpmd.
+
+Show the DCF port information, and check that link status, speed and other
+information are correct::
+
+    testpmd> show port info all
+
+Set mac forward mode and enable verbose output.
+
+Send 100 packets from the tester.
+
+Check the DCF port stats: RX-packets and TX-packets are 100.
+
+Test case: DCF xstats check
+===========================
+
+Launch testpmd and enable rss::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 -a 0000:18:01.0,cap=dcf -a 0000:18:09.0,cap=dcf -- -i --rxq=4 --txq=4 --max-pkt-len=9000
+    testpmd> port config all rss all
+    testpmd> set fwd mac
+
+Show the xstats before packet forwarding; all the values are 0.
+
+Start forwarding, send 100 packets with random src IP addresses, then stop
+forwarding.
+
+Check stats and xstats::
+
+    testpmd> stop
+
+    testpmd> show port stats all
+
+    testpmd> show port xstats all
+
+Verify that rx_good_packets and RX-packets of port 0, and tx_good_packets and
+TX-packets of port 1, are all 100.
+rx_good_bytes and RX-bytes of port 0, and tx_good_bytes and TX-bytes of port 1,
+are the same.
+If the hardware does not support per-queue stats, rx_qx_packets and rx_qx_bytes
+are both 0, and tx_qx_packets and tx_qx_bytes are both 0 too.
+
+Clear stats::
+
+    testpmd> clear port stats all
+
+Check stats and xstats, and verify that rx_good_packets and RX-packets of port 0,
+and tx_good_packets and TX-packets of port 1, are all 0.
+
+Repeat the forwarding and check steps above.
+
+Clear xstats::
+
+    testpmd> clear port xstats all
+
+Check stats and xstats, and verify that rx_good_packets and RX-packets of port 0,
+and tx_good_packets and TX-packets of port 1, are all 0.
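The stats/xstats cross-checks in this case are easy to automate. The sketch below parses "name: value" style output (the sample text is illustrative, not captured from a real run) and compares the counters the test names:

```python
def parse_xstats(output: str) -> dict:
    """Parse 'name: value' lines from testpmd-style xstats output into a dict."""
    stats = {}
    for line in output.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            value = value.strip()
            if value.isdigit():           # keep only numeric counters
                stats[name.strip()] = int(value)
    return stats

# Illustrative output after forwarding 100 packets of 64 bytes each.
sample = """\
rx_good_packets: 100
rx_good_bytes: 6400
tx_good_packets: 100
tx_good_bytes: 6400
rx_q0_packets: 0
"""

xstats = parse_xstats(sample)
# The test expects the rx counters on the ingress port to match the tx
# counters on the egress port, both in packets and in bytes.
assert xstats["rx_good_packets"] == xstats["tx_good_packets"]
assert xstats["rx_good_bytes"] == xstats["tx_good_bytes"]
```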