From patchwork Thu Jan 5 11:07:51 2023
X-Patchwork-Submitter: "Jiale, SongX"
X-Patchwork-Id: 121602
From: Song Jiale
To: dts@dpdk.org
Cc: Song Jiale
Subject: [dts] [PATCH V1 6/7] test_plans/vf_pmd_stacked_bonded: add cases to test vf bonding
Date: Thu, 5 Jan 2023 11:07:51 +0000
Message-Id: <20230105110752.235201-7-songx.jiale@intel.com>
In-Reply-To: <20230105110752.235201-1-songx.jiale@intel.com>
References: <20230105110752.235201-1-songx.jiale@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: test suite reviews and discussions

add test plan to test vf bonding.

Signed-off-by: Song Jiale
---
 .../vf_pmd_stacked_bonded_test_plan.rst       | 406 ++++++++++++++++++
 1 file changed, 406 insertions(+)
 create mode 100644 test_plans/vf_pmd_stacked_bonded_test_plan.rst

diff --git a/test_plans/vf_pmd_stacked_bonded_test_plan.rst b/test_plans/vf_pmd_stacked_bonded_test_plan.rst
new file mode 100644
index 00000000..8c6c6851
--- /dev/null
+++ b/test_plans/vf_pmd_stacked_bonded_test_plan.rst
@@ -0,0 +1,406 @@

.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2023 Intel Corporation

=====================
VF PMD Stacked Bonded
=====================

The stacked bonded mechanism allows a bonded port to be added to another
bonded port.

The demand arises from a discussion with a prospective customer for a 100G NIC
based on RRC. The customer already uses Mellanox 100G NICs. Mellanox 100G NICs
support a proper x16 PCIe interface, so the host sees a single netdev and that
netdev corresponds directly to the 100G Ethernet port. They indicated that in
their current system they bond multiple 100G NICs together using the DPDK
bonding API in their application. They are interested in an alternative source
for the 100G NIC and are in conversation with Silicom, who are shipping a 100G
RRC-based NIC (something like Boulder Rapids). The issue they have with the
RRC NIC is that it presents as two PCIe interfaces (netdevs) instead of one.
If DPDK bonding could operate at the first level on the two RRC netdevs to
present a single netdev, the application could then bond multiple of these
bonded interfaces to implement NIC bonding.

Prerequisites
=============

hardware configuration
----------------------

All link ports of the tester/DUT should run at the same data rate and support
full-duplex. Slave-down test cases need at least four ports; the other test
cases can run with two ports.

NIC/DUT/TESTER port requirements:

- Tester: 2 NIC ports
- DUT: 2 NIC ports

enable ``link-down-on-close`` on the tester::

    ethtool --set-priv-flags {tport_iface0} link-down-on-close on
    ethtool --set-priv-flags {tport_iface1} link-down-on-close on

create 2 VFs on each of the two DUT ports::

    echo 2 > /sys/bus/pci/devices/0000\:31\:00.0/sriov_numvfs
    echo 2 > /sys/bus/pci/devices/0000\:31\:00.1/sriov_numvfs
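
The VF creation step can also be scripted when the suite is automated. The
following is a minimal sketch, not part of the required steps, assuming the
two DUT PF addresses shown above and a PF driver with SR-IOV support; it
writes ``sriov_numvfs`` through sysfs and lists the PCI addresses of the VFs
that appear::

    import os
    from pathlib import Path

    # DUT PF addresses from the prerequisite step above; adjust to the real setup.
    PFS = ["0000:31:00.0", "0000:31:00.1"]

    def create_vfs(pf_addr, num_vfs=2):
        """Create num_vfs VFs on one PF via sysfs and return their PCI addresses."""
        dev = Path("/sys/bus/pci/devices") / pf_addr
        # Reset to 0 first; the kernel rejects changing a non-zero VF count directly.
        (dev / "sriov_numvfs").write_text("0")
        (dev / "sriov_numvfs").write_text(str(num_vfs))
        # Each created VF shows up as a 'virtfnN' symlink under the PF device node.
        return sorted(os.path.basename(os.readlink(dev / v))
                      for v in os.listdir(dev) if v.startswith("virtfn"))

    if __name__ == "__main__":
        for pf in PFS:
            print(pf, "->", create_vfs(pf))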

port topology diagram (2 peer links)::

    TESTER                                 DUT
          physical link                        logical link
    .---------.       .--------------------------------------------------.
    | portA 0 | <---> | portB pf0vf0 <---> .--------.                     |
    |         |       |                    | bond 0 | <-----> .-------.   |
    | portA 1 | <---> | portB pf1vf0 <---> '--------'         |       |   |
    |         |       |                                       | bond2 |   |
    | portA 0 | <---> | portB pf0vf1 <---> .--------.         |       |   |
    |         |       |                    | bond 1 | <-----> '-------'   |
    | portA 1 | <---> | portB pf1vf1 <---> '--------'                     |
    '---------'       '--------------------------------------------------'

Test cases
==========
The ``tx-offloads`` value is set based on the NIC type. All test cases use two
DUT ports with two VFs each, so four VF ports are bound to DPDK; the
slave-down test cases additionally bring the link of one tester port down.

Test Case: basic behavior
=========================
Adding a bonded port to another bonded port is supported by the following
modes::

    balance-rr    0
    active-backup 1
    balance-xor   2
    broadcast     3
    balance-tlb   5
    balance-alb   6

#. 802.3ad mode is not supported if one or more of the slaves is a bond device.
#. add the same device twice to check that the exception handling is correct.
#. the queue configuration of the master bonded port and of each slave is the
   same.

steps
-----

#. bind the four VF ports::

    ./usertools/dpdk-devbind.py --bind=vfio-pci

#. boot up testpmd, stop all ports::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
    testpmd> port stop all

#. create the first bonded port and add two slaves, check the bond 4 config
   status::

    testpmd> create bonded device 0 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 2 4
    testpmd> show bonding config 4

#. create the second bonded port and add two slaves, check the bond 5 config
   status::

    testpmd> create bonded device 0 0
    testpmd> add bonding slave 1 5
    testpmd> add bonding slave 3 5
    testpmd> show bonding config 5

#. create the third bonded port and add the first/second bonded ports as its
   slaves, then check that the slaves were added successfully. Stacked bonding
   is forbidden in mode 4: mode 4 will fail to add a bonded port as a slave::

    testpmd> create bonded device 0 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6

#. check that the queue configurations of the master bonded port and of the
   slave ports are the same::

    testpmd> show bonding config 0
    testpmd> show bonding config 1
    testpmd> show bonding config 2
    testpmd> show bonding config 3
    testpmd> show bonding config 4
    testpmd> show bonding config 5
    testpmd> show bonding config 6

#. start the top level bond port to check the port start action::

    testpmd> port start 6
    testpmd> start

#. close testpmd::

    testpmd> stop
    testpmd> quit

#. repeat the steps above with the following mode numbers (a scripted sketch
   of this sequence follows the list)::

    balance-rr    0
    active-backup 1
    balance-xor   2
    broadcast     3
    802.3ad       4
    balance-tlb   5
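
These steps repeat unchanged for every mode, so they are easy to drive from a
script. The following is a minimal sketch, assuming the testpmd invocation
shown above, a ``testpmd> `` prompt, and the third-party ``pexpect`` module;
port IDs follow this plan (VFs 0-3, bonds 4/5/6)::

    import pexpect

    # testpmd invocation from the steps above; adjust path/EAL options as needed.
    TESTPMD = "./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i"

    def run(child, cmd):
        """Send one testpmd command and return the output before the next prompt."""
        child.sendline(cmd)
        child.expect("testpmd> ")
        return child.before.decode()

    def build_stacked_bond(mode):
        child = pexpect.spawn(TESTPMD, timeout=60)
        child.expect("testpmd> ")
        run(child, "port stop all")
        # First-level bonds: port 4 over VFs 0/2, port 5 over VFs 1/3.
        for bond, slaves in ((4, (0, 2)), (5, (1, 3))):
            run(child, "create bonded device %d 0" % mode)
            for slave in slaves:
                run(child, "add bonding slave %d %d" % (slave, bond))
        # Second-level bond 6 over bonds 4 and 5; mode 4 (802.3ad) is expected
        # to reject bonded slaves, so just record the output for inspection.
        run(child, "create bonded device %d 0" % mode)
        for slave in (4, 5):
            print(run(child, "add bonding slave %d 6" % slave))
        print(run(child, "show bonding config 6"))
        child.sendline("quit")
        child.expect(pexpect.EOF)

    for mode in (0, 1, 2, 3, 4, 5):
        build_stacked_bond(mode)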

Test Case: active-backup stacked bonded rx traffic
==================================================
Set up the DUT/testpmd stacked bonded ports, send TCP packets with scapy and
check the testpmd packet statistics.

steps
-----

#. bind the four VF ports::

    ./usertools/dpdk-devbind.py --bind=vfio-pci

#. boot up testpmd, stop all ports::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
    testpmd> port stop all

#. create the first bonded port and add two ports as slaves::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 2 4

#. create the second bonded port and add two ports as slaves::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 1 5
    testpmd> add bonding slave 3 5

#. create the third bonded port and add the first/second bonded ports as its
   slaves, then check that the slaves were added successfully::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6

#. start the top level bond port::

    testpmd> port start 6
    testpmd> start

#. send 100 TCP packets to each VF MAC through portA 0 and portA 1 (a
   standalone scapy sketch follows this test case)::

    sendp([Ether(dst={pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)

#. the first/second bonded ports should receive 400 packets and the third
   bonded port should receive 800 packets::

    testpmd> show port stats all

#. close testpmd::

    testpmd> stop
    testpmd> quit
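
The send step leaves the tester interface names and the VF MAC addresses to
the test framework. Below is a minimal standalone sketch of the same traffic;
the interface names and MAC addresses are hypothetical placeholders, and each
VF MAC receives 100 TCP packets as the step requires::

    from scapy.all import Ether, IP, TCP, Raw, sendp

    # Hypothetical tester interfaces and VF MACs; substitute the real values.
    # portA 0 faces PF0's VFs, portA 1 faces PF1's VFs (see the topology diagram).
    PORT_A0, PORT_A1 = "enp1s0f0", "enp1s0f1"
    TARGETS = [
        ("pf0_vf0", "00:11:22:33:44:00", PORT_A0),
        ("pf0_vf1", "00:11:22:33:44:01", PORT_A0),
        ("pf1_vf0", "00:11:22:33:44:10", PORT_A1),
        ("pf1_vf1", "00:11:22:33:44:11", PORT_A1),
    ]

    def send_burst(dst_mac, iface, count=100):
        """Send `count` TCP packets to one VF MAC out of one tester port."""
        pkt = Ether(dst=dst_mac) / IP() / TCP() / Raw(b"\x00" * 60)
        sendp([pkt], iface=iface, count=count, verbose=False)

    for name, mac, iface in TARGETS:
        send_burst(mac, iface)
        print("sent 100 packets to %s (%s) via %s" % (name, mac, iface))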

Test Case: active-backup stacked bonded rx traffic with slave down
==================================================================
Set up the DUT/testpmd stacked bonded ports, bring one slave of each first
level bonded port down, send TCP packets with scapy and check the testpmd
packet statistics.

steps
-----

#. bind the four VF ports::

    ./usertools/dpdk-devbind.py --bind=vfio-pci

#. boot up testpmd, stop all ports::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
    testpmd> port stop all

#. create the first bonded port and add two ports as slaves::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 2 4

#. set portB pf0vf0 and pf0vf1 down::

    ethtool --set-priv-flags {portA 0} link-down-on-close on
    ifconfig {portA 0} down

   .. note::

      The VF port link status cannot be changed directly. Change the link
      status of the peer tester port to bring the VF port link down.

#. create the second bonded port and add two ports as slaves::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 1 5
    testpmd> add bonding slave 3 5

#. create the third bonded port and add the first/second bonded ports as its
   slaves, then check that the slaves were added successfully::

    testpmd> create bonded device 1 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6

#. start the top level bond port::

    testpmd> port start 6
    testpmd> start

#. send 100 packets to portB pf0vf0/portB pf0vf1/portB pf1vf0/portB pf1vf1
   separately::

    sendp([Ether(dst={pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)

#. check that the first/second bonded ports receive 100 packets each and that
   the third bonded device receives 200 packets (a parsing sketch follows this
   test case)::

    testpmd> show port stats all

#. close testpmd::

    testpmd> stop
    testpmd> quit
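
The expected counters above come from ``show port stats all``. The following
is a minimal sketch of how the check could be automated, assuming the testpmd
session is driven with ``pexpect`` as in the earlier sketch and that the
statistics output prints a ``NIC statistics for port N`` header followed by an
``RX-packets:`` counter for each port; the expected values are the ones stated
in this test case::

    import re

    def rx_packets(stats_output):
        """Map port id -> RX-packets parsed from 'show port stats all' output."""
        counts, port = {}, None
        for line in stats_output.splitlines():
            header = re.search(r"statistics for port\s+(\d+)", line)
            if header:
                port = int(header.group(1))
            rx = re.search(r"RX-packets:\s*(\d+)", line)
            if rx and port is not None:
                counts[port] = int(rx.group(1))
        return counts

    def check_slave_down_counts(stats_output):
        """Expectations of this case: bonds 4/5 see 100 packets, bond 6 sees 200."""
        rx = rx_packets(stats_output)
        assert rx.get(4) == 100 and rx.get(5) == 100, rx
        assert rx.get(6) == 200, rx

    # Example: check_slave_down_counts(run(child, "show port stats all"))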

Test Case: balance-xor stacked bonded rx traffic
================================================
Set up the DUT/testpmd stacked bonded ports, send TCP packets with scapy and
check the packet statistics.

steps
-----

#. bind the four VF ports::

    ./usertools/dpdk-devbind.py --bind=vfio-pci

#. boot up testpmd, stop all ports::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
    testpmd> port stop all

#. create the first bonded port and add two ports as slaves::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 2 4

#. create the second bonded port and add two ports as slaves::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 1 5
    testpmd> add bonding slave 3 5

#. create the third bonded port and add the first/second bonded ports as its
   slaves, then check that the slaves were added successfully::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6

#. start the top level bond port::

    testpmd> port start 6
    testpmd> start

#. send 100 packets to each VF MAC through portA 0 and portA 1::

    sendp([Ether(dst={pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)

#. check that the first/second bonded ports receive 200 packets each and that
   the third bonded device receives 400 packets::

    testpmd> show port stats all

#. close testpmd::

    testpmd> stop
    testpmd> quit

Test Case: balance-xor stacked bonded rx traffic with slave down
================================================================
Set up the DUT/testpmd stacked bonded ports, bring one slave of each first
level bonded device down, send TCP packets with scapy and check the packet
statistics.

steps
-----

#. bind the four VF ports::

    ./usertools/dpdk-devbind.py --bind=vfio-pci

#. boot up testpmd, stop all ports::

    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x6 -n 4 -- -i
    testpmd> port stop all

#. create the first bonded port and add two ports as slaves::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 0 4
    testpmd> add bonding slave 2 4

#. create the second bonded port and add two ports as slaves::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 1 5
    testpmd> add bonding slave 3 5

#. set portB pf0vf0 and pf0vf1 down (a scripted sketch of this step follows
   the test case)::

    ethtool --set-priv-flags {portA 0} link-down-on-close on
    ifconfig {portA 0} down

   .. note::

      The VF port link status cannot be changed directly. Change the link
      status of the peer tester port to bring the VF port link down.

#. create the third bonded port and add the first/second bonded ports as its
   slaves, then check that the slaves were added successfully::

    testpmd> create bonded device 2 0
    testpmd> add bonding slave 4 6
    testpmd> add bonding slave 5 6
    testpmd> show bonding config 6

#. start the top level bond port::

    testpmd> port start 6
    testpmd> start

#. send 100 packets to portB pf0vf0/portB pf0vf1/portB pf1vf0/portB pf1vf1
   separately::

    sendp([Ether(dst={pf0_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf0_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf1_vf0_mac})/IP()/TCP()/Raw('\0'*60)], iface=)
    sendp([Ether(dst={pf1_vf1_mac})/IP()/TCP()/Raw('\0'*60)], iface=)

#. check that the first/second bonded ports receive 100 packets each and that
   the third bonded device receives 200 packets::

    testpmd> show port stats all

#. close testpmd::

    testpmd> stop
    testpmd> quit
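
All of the slave-down cases rely on the ``link-down-on-close`` trick shown in
the steps above. The following is a minimal sketch of that step as a reusable
helper, assuming a hypothetical tester interface name and that ``ethtool`` and
``ifconfig`` are available on the tester, as in the prerequisites::

    import subprocess

    # Hypothetical tester interface facing DUT PF0; substitute the real name.
    PORT_A0 = "enp1s0f0"

    def set_peer_link(iface, up):
        """Force the tester port down/up so the connected VF links follow it."""
        if not up:
            # Without link-down-on-close the NIC may keep the link electrically up.
            subprocess.run(["ethtool", "--set-priv-flags", iface,
                            "link-down-on-close", "on"], check=True)
        subprocess.run(["ifconfig", iface, "up" if up else "down"], check=True)

    # Bring the peer of PF0 down so pf0vf0 and pf0vf1 report link down,
    # run the traffic and statistics checks, then restore the link.
    set_peer_link(PORT_A0, up=False)
    # ... send bursts and check stats as in the earlier sketches ...
    set_peer_link(PORT_A0, up=True)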