From patchwork Fri Aug 26 14:38:23 2022
X-Patchwork-Submitter: "Jiale, SongX"
X-Patchwork-Id: 115458
From: Jiale Song
To: dts@dpdk.org
Cc: Jiale Song
Subject: [dts] [PATCH V1 1/2] test_plans/ice_flow_priority: move the case of rteflow_priority to ice_flow_priority and remove rteflow_priority
Date: Fri, 26 Aug 2022 14:38:23 +0000
Message-Id: <20220826143824.12813-1-songx.jiale@intel.com>
X-Mailer: git-send-email 2.17.1

1. move the case not covered by ice_flow_priority from rteflow_priority to
   ice_flow_priority.
2. remove the test suite rteflow_priority.

Signed-off-by: Jiale Song
---
 test_plans/ice_flow_priority_test_plan.rst | 209 ++++++++++++++
 test_plans/index.rst                       |   1 -
 test_plans/rteflow_priority_test_plan.rst  | 309 ---------------------
 3 files changed, 209 insertions(+), 310 deletions(-)
 delete mode 100644 test_plans/rteflow_priority_test_plan.rst

diff --git a/test_plans/ice_flow_priority_test_plan.rst b/test_plans/ice_flow_priority_test_plan.rst
index 14b7a6f8..73095014 100644
--- a/test_plans/ice_flow_priority_test_plan.rst
+++ b/test_plans/ice_flow_priority_test_plan.rst
@@ -727,3 +727,212 @@ Subcase 3: some rules overlap
 8. destroy rule 4, repeat step 7 and check the pkts can be received by queue 4::
 
     flow destroy 0 rule 3
+
+Test case 23: Create Flow Rules Only Supported by Fdir Filter with Priority 0
+------------------------------------------------------------------------------
+
+Creating a rule that is only supported by the fdir filter with priority 0 is not acceptable.
+
+Patterns in this case:
+    MAC_IPV6_SCTP
+    MAC_IPV4_SCTP
+
+#. Start the ``testpmd`` application as follows::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
+    set fwd rxonly
+    set verbose 1
+
+#. Create the rules and check that the flows can not be created::
+
+    testpmd> flow create 0 priority 0 ingress pattern eth / ipv6 src is 1111:2222:3333:4444:5555:6666:7777:8888 dst is 1111:2222:3333:4444:5555:6666:7777:9999 / sctp src is 25 dst is 23 / end actions queue index 1 / end
+    ice_flow_create(): Failed to create flow
+    Caught error type 2 (flow rule (handle)): Invalid input pattern: Invalid argument
+
+    testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 ttl is 20 / sctp src is 25 dst is 23 / end actions queue index 1 / end
+    ice_flow_create(): Failed to create flow
+    Caught error type 2 (flow rule (handle)): Invalid input pattern: Invalid argument
+
+
+Test case 24: Create flow rules only supported by switch filter with priority 1
+--------------------------------------------------------------------------------
+
+Creating a rule that is only supported by the switch filter with priority 1 is not acceptable.
+
+Patterns in this case:
+    MAC_IPV4_NVGRE_MAC_IPV4
+    MAC_IPV4_NVGRE_MAC_IPV4_UDP
+
+#. Start the ``testpmd`` application as follows::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
+    set fwd rxonly
+    set verbose 1
+
+#. Create the rules and check that the flows can not be created::
+
+    testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 / nvgre / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 3 / end
+    ice_flow_create(): Failed to create flow
+    Caught error type 13 (specific pattern item): cause: 0x7fffe65b8128, Unsupported pattern: Invalid argument
+
+    testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 / nvgre / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / udp src is 25 dst is 23 / end actions queue index 3 / end
+    ice_flow_create(): Failed to create flow
+    Caught error type 13 (specific pattern item): cause: 0x7fffe65b8128, Unsupported pattern: Invalid argument
+
+Test case 25: Create Flow Rules with Priority in Pipeline Mode
+--------------------------------------------------------------
+
+Priority is active in pipeline mode.
+Creating flow rules with priority 0/1 maps them to the switch/fdir filter respectively.
+
+Patterns in this case:
+    MAC_IPV4_TCP
+    MAC_IPV4_VXLAN_IPV4_UDP_PAY
+
+#. Start the ``testpmd`` application as follows::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
+    set fwd rxonly
+    set verbose 1
+    rx_vxlan_port add 4789 0
+
+#. Create switch filter rules::
+
+    flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 1 / end
+
+    flow create 0 priority 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / udp src is 25 dst is 23 / end actions queue index 2 / end
+
+#. Create fdir filter rules::
+
+    flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.4 dst is 192.168.0.7 tos is 4 ttl is 20 / tcp src is 25 dst is 23 / end actions queue index 3 / end
+
+    flow create 0 priority 1 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.4 dst is 192.168.0.7 / udp src is 25 dst is 23 / end actions queue index 4 / end
+
+#. Check the flow list with the command "flow list 0" and verify that all flows are created correctly::
+
+    +-----+--------+--------+--------+-----------------------+
+    | ID  | Group  | Prio   | Attr   | Rule                  |
+    +=====+========+========+========+=======================+
+    | 0   | 0      | 0      | i-     | ETH IPV4 TCP => QUEUE |
+    +-----+--------+--------+--------+-----------------------+
+    | 1   | ...                                              |
+    +-----+--------+--------+--------+-----------------------+
+    | 2   | ...                                              |
+    +-----+--------+--------+--------+-----------------------+
+    | 3   | ...                                              |
+    +-----+--------+--------+--------+-----------------------+
+
+#. Send packets matching the created rules from the tester::
+
+    sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
+    sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP()/UDP()/VXLAN()/Ether()/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/UDP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
+    sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.4",dst="192.168.0.7",tos=4,ttl=20)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
+    sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP()/UDP()/VXLAN()/Ether()/IP(src="192.168.0.4",dst="192.168.0.7")/UDP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
+
+#. Check that the packets are received in the right queues by the DUT::
+
+    testpmd> port 0/queue 1: received 1 packets
+    src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0x96803f93 - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1
+    ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
+    ......
+
+#. Create rules without priority and check that only the pattern supported by the switch filter can be
+   created, because the default priority is 0. So the first flow can be created and the second flow
+   can not be created::
+
+    testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.3 dst is 192.168.0.1 tos is 5 / tcp src is 25 dst is 23 / end actions queue index 1 / end
+    ice_flow_create(): Succeeded to create (2) flow
+    Flow rule #1 created
+    testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 ttl is 20 / sctp src is 25 dst is 23 / end actions queue index 1 / end
+    ice_flow_create(): Failed to create flow
+    Caught error type 2 (flow rule (handle)): Invalid input pattern: Invalid argument
+
+Test case 26: Create flow rules with same parameter but different actions
+-------------------------------------------------------------------------
+
+It is acceptable to create the same rule with different filters in pipeline mode.
+When the fdir filter and the switch filter have rules with the same parameters, the flow maps to the switch filter first, then to fdir.
+
+Patterns in this case:
+    MAC_IPV4_TCP
+
+#. Start the ``testpmd`` application as follows::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
+    set fwd rxonly
+    set verbose 1
+
+#. Create a switch rule and then an fdir rule with the same parameters, and check that both flows can be created::
+
+    testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 1 / end
+    ice_flow_create(): Succeeded to create (2) flow
+    Flow rule #0 created
+
+    testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 3 / end
+    ice_interrupt_handler(): OICR: MDD event
+    ice_flow_create(): Succeeded to create (1) flow
+    Flow rule #1 created
+
+#. Send a packet from the tester to the DUT::
+
+    sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
+
+#. Check that the packet is received by the DUT in queue 1::
+
+    testpmd> port 0/queue 1: received 1 packets
+    src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0xf12811f1 - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1
+    ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
+
+#. Remove the switch rule::
+
+    testpmd> flow destroy 0 rule 0
+
+#. Send the same packet from the tester to the DUT::
+
+    sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0")
+
+#. Check that the packet is received in queue 3::
+
+    testpmd> port 0/queue 3: received 1 packets
+    src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0xf12811f1 - RSS queue=0x3 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x3
+    ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
+
+#. Restart the ``testpmd`` application as follows::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8
+    set fwd rxonly
+    set verbose 1
+
+#.
Create fdir rule then switch rule with the same parameter, check two flows can be created:: + + testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 3 / end + ice_interrupt_handler(): OICR: MDD event + ice_flow_create(): Succeeded to create (1) flow + Flow rule #0 created + + testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 1 / end + ice_flow_create(): Succeeded to create (2) flow + Flow rule #1 created + +#. Tester send a pkt to dut:: + + sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0") + +#. Check the packets are recieved by dut in queue 1:: + + testpmd> port 0/queue 1: received 1 packets + src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0xf12811f1 - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1 + ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN + +#. Remove the switch rule:: + + testpmd>flow destroy 0 rule 1 + +#. Tester send a pkt to dut:: + + sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0") + +#. Check the packets are recieved in queue 3:: + + testpmd> port 0/queue 3: received 1 packets + src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0xf12811f1 - RSS queue=0x3 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x3 + ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN \ No newline at end of file diff --git a/test_plans/index.rst b/test_plans/index.rst index a78dd0f5..30c52335 100644 --- a/test_plans/index.rst +++ b/test_plans/index.rst @@ -130,7 +130,6 @@ The following are the test plans for the DPDK DTS automated test system. rss_to_rte_flow_test_plan rss_key_update_test_plan rxtx_offload_test_plan - rteflow_priority_test_plan rte_flow_test_plan runtime_vf_queue_number_kernel_test_plan runtime_vf_queue_number_maxinum_test_plan diff --git a/test_plans/rteflow_priority_test_plan.rst b/test_plans/rteflow_priority_test_plan.rst deleted file mode 100644 index 111383ba..00000000 --- a/test_plans/rteflow_priority_test_plan.rst +++ /dev/null @@ -1,309 +0,0 @@ -.. SPDX-License-Identifier: BSD-3-Clause - Copyright(c) 2019 Intel Corporation - -======================= -Rte_flow Priority Tests -======================= - - -Description -=========== - -This document provides the plan for testing the Rte_flow Priority feature. -this feature uses devargs as a hint to active flow priority or not. - -This test plan is based on Intel E810 series ethernet cards. -when priority is not active, flows are created on fdir then switch/ACL. -when priority is active, flows are identified into 2 category: -High priority as permission stage that maps to switch/ACL, -Low priority as distribution stage that maps to fdir, -a no destination high priority rule is not acceptable, since it may be overwritten -by a low priority rule due to IntelĀ® Ethernet 800 Series FXP behavior. 
- -Note: Since these tests are focus on priority, the patterns in tests are examples. - - -Prerequisites -============= - -Bind the pf to dpdk driver:: - - ./usertools/dpdk-devbind.py -b vfio-pci af:00.0 - -Note: The kernel must be >= 3.6+ and VT-d must be enabled in bios. - -Test Case: Setting Priority in Non-pipeline Mode -================================================ - -Priority is not active in non-pipeline mode. The default value of priority is 0 but it will be ignored. - -Patterns in this case: - MAC_IPV4 - -#. Start the ``testpmd`` application as follows:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0 --log-level="ice,7" -- -i --txq=8 --rxq=8 - set fwd rxonly - set verbose 1 - -#. Create a rule with priority 0, Check the flow can be created but it will map to fdir filter:: - - testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 2 / mark / end - ice_interrupt_handler(): OICR: MDD event - ice_flow_create(): Succeeded to create (1) flow - Flow rule #0 created - -#. Create a rule with priority 1, check the flow can not be created for the vallue of priority is 0 in non-pipeline mode:: - - testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 2 / mark / end - ice_flow_create(): Failed to create flow - Caught error type 4 (priority field): cause: 0x7ffe24e65738, Not support priority.: Invalid argument - -#. Start the ``testpmd`` application as follows:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=0 --log-level="ice,7" -- -i --txq=8 --rxq=8 - set fwd rxonly - set verbose 1 - -#. Create a rule with priority 0, Check the flow can be created but it will map to fdir filter:: - - testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 2 / end - ice_interrupt_handler(): OICR: MDD event - ice_flow_create(): Succeeded to create (1) flow - Flow rule #0 created - -#. Create a rule with priority 1, check the flow can not be created for the vallue of priority is 0 in non-pipeline mode:: - - testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 2 / end - ice_flow_create(): Failed to create flow - Caught error type 4 (priority field): cause: 0x7ffe24e65738, Not support priority.: Invalid argument - -Test Case: Create Flow Rules with Priority in Pipeline Mode -============================================================ - -Priority is active in pipeline mode. -Creating flow rules and setting priority 0/1 will map switch/fdir filter separately. - -Patterns in this case: - MAC_IPV4_TCP - MAC_IPV4_VXLAN_IPV4_UDP_PAY - -#. Start the ``testpmd`` application as follows:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8 - set fwd rxonly - set verbose 1 - rx_vxlan_port add 4789 0 - -#. Create switch filter rules:: - - flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 1 / end - - flow create 0 priority 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / udp src is 25 dst is 23 / end actions queue index 2 / end - -#. 
Create fdir filter rules:: - - flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.4 dst is 192.168.0.7 tos is 4 ttl is 20 / tcp src is 25 dst is 23 / end actions queue index 3 / end - - flow create 0 priority 1 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.4 dst is 192.168.0.7 / udp src is 25 dst is 23 / end actions queue index 4 / end - -#. Check flow list with commands "flow list 0", all flows are created correctly:: - - +-----+--------+--------+--------+-----------------------+ - |ID | Group | Prio | Attr | Rul | - +=====+========+========+========+=======================+ - | 0 | 0 | 0 | i- | ETH IPV4 TCP => QUEUE | - +-----+--------+--------+--------+-----------------------+ - | 1 ... | - +-----+--------+--------+--------+-----------------------+ - | 2 ... | - +-----+--------+--------+--------+-----------------------+ - | 3 ... | - +-----+--------+--------+--------+-----------------------+ - -#. Send packets according to the created rules in tester:: - - sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0") - sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP()/UDP()/VXLAN()/Ether()/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/UDP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0") - sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.4",dst="192.168.0.7",tos=4,ttl=20)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0") - sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP()/UDP()/VXLAN()/Ether()/IP(src="192.168.0.4 ",dst="192.168.0.7")/UDP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0") - -#. Check the packets are recieved in right queues by dut:: - - testpmd> port 0/queue 1: received 1 packets - src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0x96803f93 - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1 - ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN - ...... - -#. Create rules without priority, Check only patterns supported by switch can be created for the default priorty is 0. -So the first flow can be created and the second flow can not be created:: - - testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.3 dst is 192.168.0.1 tos is 5 / tcp src is 25 dst is 23 / end actions queue index 1 / end - ice_flow_create(): Succeeded to create (2) flow - Flow rule #1 created - testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 ttl is 20 / sctp src is 25 dst is 23 / end actions queue index 1 / end - ice_flow_create(): Failed to create flow - Caught error type 2 (flow rule (handle)): Invalid input pattern: Invalid argument - -Test case: Create No Destination High Priority Flow Rule -======================================================== - -A no destination high priority rule is not acceptable. Destination here means exact actions. - -Patterns in this case: - MAC_IPV4_TCP - -#. Start the ``testpmd`` application as follows:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8 - set fwd rxonly - set verbose 1 - -#. 
Create a rule without exact actions, check the flows can not be created:: - - testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions / end - Bad arguments - testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end - Bad arguments - -Test case: Create Flow Rules Only Supported by Fdir Filter with Priority 0 -=========================================================================== - -Creating a rule only supported by fdir filter with priority 0, it is not acceptable. - -Patterns in this case: - MAC_IPV6_SCTP - MAC_IPV4_SCTP - -#. Start the ``testpmd`` application as follows:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8 - set fwd rxonly - set verbose 1 - -#. Create rules, check the flows can not be created:: - - testpmd> flow create 0 priority 0 ingress pattern eth / ipv6 src is 1111:2222:3333:4444:5555:6666:7777:8888 dst is 1111:2222:3333:4444:5555:6666:7777:9999 / sctp src is 25 dst is 23 / end actions queue index 1 / end - ice_flow_create(): Failed to create flow - Caught error type 2 (flow rule (handle)): Invalid input pattern: Invalid argument - - testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 ttl is 20 / sctp src is 25 dst is 23 / end actions queue index 1 / end - ice_flow_create(): Failed to create flow - Caught error type 2 (flow rule (handle)): Invalid input pattern: Invalid argument - - -Test case: Create flow rules only supported by switch filter with priority 1 -============================================================================= - -Create a rule only supported by fdir switch with priority 1, it is not acceptable. - -Patterns in this case: - MAC_IPV4_NVGRE_MAC_IPV4 - MAC_IPV4_NVGRE_MAC_IPV4_UDP - -#. Start the ``testpmd`` application as follows:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8 - set fwd rxonly - set verbose 1 - -#. Create rules, check the flows can not be created:: - - testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 / nvgre / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 3 / end - ice_flow_create(): Failed to create flow - Caught error type 13 (specific pattern item): cause: 0x7fffe65b8128, Unsupported pattern: Invalid argument - - testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 / nvgre / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / udp src is 25 dst is 23 / end actions queue index 3 / end - ice_flow_create(): Failed to create flow - Caught error type 13 (specific pattern item): cause: 0x7fffe65b8128, Unsupported pattern: Invalid argument - -Test case: Create flow rules with same parameter but differenet actions -========================================================================== - -It is acceptable to create same rules with differenet filter in pipeline mode. -When fdir filter and switch filter has the same parameter rules, the flow will map to switch then fdir. - -Patterns in this case: - MAC_IPV4_TCP - -#. Start the ``testpmd`` application as follows:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0,pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8 - set fwd rxonly - set verbose 1 - -#. 
Create switch rule then fdir rule with the same parameter, check two flows can be created:: - - testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 1 / end - ice_flow_create(): Succeeded to create (2) flow - Flow rule #0 created - - testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 3 / end - ice_interrupt_handler(): OICR: MDD event - ice_flow_create(): Succeeded to create (1) flow - Flow rule #1 created - -#. Tester send a pkt to dut:: - - sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0") - -#. Check the packets are recieved by dut in queue 1:: - - testpmd> port 0/queue 1: received 1 packets - src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0xf12811f1 - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1 - ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN - -#. Remove the switch rule:: - - testpmd>flow destroy 0 rule 0 - -#. Tester send a pkt to dut:: - - sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0") - -#. Check the packets are recieved in queue 3:: - - testpmd> port 0/queue 3: received 1 packets - src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0xf12811f1 - RSS queue=0x3 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x3 - ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN - -#. Restart the ``testpmd`` application as follows:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xf -n 4 -a 0000:af:00.0, pipeline-mode-support=1 --log-level="ice,7" -- -i --txq=8 --rxq=8 - set fwd rxonly - set verbose 1 - -#. Create fdir rule then switch rule with the same parameter, check two flows can be created:: - - testpmd> flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 3 / end - ice_interrupt_handler(): OICR: MDD event - ice_flow_create(): Succeeded to create (1) flow - Flow rule #0 created - - testpmd> flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 1 / end - ice_flow_create(): Succeeded to create (2) flow - Flow rule #1 created - -#. Tester send a pkt to dut:: - - sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0") - -#. 
Check the packets are recieved by dut in queue 1:: - - testpmd> port 0/queue 1: received 1 packets - src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0xf12811f1 - RSS queue=0x1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x1 - ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN - -#. Remove the switch rule:: - - testpmd>flow destroy 0 rule 1 - -#. Tester send a pkt to dut:: - - sendp([Ether(dst="00:00:00:00:11:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw('x'*80)],iface="enp134s0f0") - -#. Check the packets are recieved in queue 3:: - - testpmd> port 0/queue 3: received 1 packets - src=11:22:33:44:55:66 - dst=00:00:00:00:11:00 - type=0x0800 - length=134 - nb_segs=1 - RSS hash=0xf12811f1 - RSS queue=0x3 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_TCP - sw ptype: L2_ETHER L3_IPV4 L4_TCP - l2_len=14 - l3_len=20 - l4_len=20 - Receive queue=0x3 - ol_flags: PKT_RX_RSS_HASH PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN From patchwork Fri Aug 26 14:38:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jiale, SongX" X-Patchwork-Id: 115459 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 34BA4A0552; Fri, 26 Aug 2022 08:22:19 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2FC1140696; Fri, 26 Aug 2022 08:22:19 +0200 (CEST) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 2CC9A40143 for ; Fri, 26 Aug 2022 08:22:17 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1661494937; x=1693030937; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=awYG+SgkvxvEs/zf2DvGfYlvAWT6c9rG6AVDU3mHYLk=; b=B6f7FWpgTIjPbjKmrt1a6vbPfGWyNn5+IEiOxvc5cVDfwf5x5TOnCKQN LRWsKguh6wreNnuVdFU8ME1JnLEYQ8L+9t/OBIv9iqg5fx8hf3n452GBw Ug1oxYfiHboikjyW7pPzNRqwD5KycMNzsXWKrJ+SgIDSsE72VfME2LUjJ 05DOiESVtHFeaJQMm7rsabNrvdazcQl5OYH6wq4nw154lbRNCSsfvnvR5 up4biu2VZawx2ebfAvTLxTMETLlJLSyUqMd4Sk+XH6C9Yx0cYahM/mCXK eX9u5Pqmlu52yP78hNbmC8q+NYhyVL1GhNugqzuUfr9gidWhoMYoJxwR+ g==; X-IronPort-AV: E=McAfee;i="6500,9779,10450"; a="380728083" X-IronPort-AV: E=Sophos;i="5.93,264,1654585200"; d="scan'208";a="380728083" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Aug 2022 23:22:16 -0700 X-IronPort-AV: E=Sophos;i="5.93,264,1654585200"; d="scan'208";a="938640622" Received: from unknown (HELO cvl_tetser_105.icx.intel.com) ([10.239.252.94]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Aug 2022 23:22:15 -0700 From: Jiale Song To: dts@dpdk.org Cc: Jiale Song Subject: [dts] [PATCH V1 2/2] tests/ice_flow_priority: move the case of rteflow_priority to ice_flow_priority and remove rteflow_priority Date: Fri, 26 Aug 2022 14:38:24 +0000 Message-Id: <20220826143824.12813-2-songx.jiale@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220826143824.12813-1-songx.jiale@intel.com> References: <20220826143824.12813-1-songx.jiale@intel.com> X-BeenThere: dts@dpdk.org 
X-Mailman-Version: 2.1.29 Precedence: list List-Id: test suite reviews and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dts-bounces@dpdk.org 1. move the case not covered by ice_flow_priority from rteflow_priority to ice_flow_priority. 2. remove the test suite rteflow_priority. Signed-off-by: Jiale Song Signed-off-by: Jiale Song Acked-by: Lijuan Tu --- tests/TestSuite_ice_flow_priority.py | 172 ++++++++- tests/TestSuite_rteflow_priority.py | 529 --------------------------- 2 files changed, 169 insertions(+), 532 deletions(-) delete mode 100644 tests/TestSuite_rteflow_priority.py diff --git a/tests/TestSuite_ice_flow_priority.py b/tests/TestSuite_ice_flow_priority.py index 4f63f06d..ddc6a2af 100644 --- a/tests/TestSuite_ice_flow_priority.py +++ b/tests/TestSuite_ice_flow_priority.py @@ -440,11 +440,22 @@ class ICEPFFlowPriorityTest(TestCase): cores="1S/4C/1T", ports=[self.pf_pci], eal_param="--log-level=ice,8", - param="--rxq=16 --txq=16", + param="--rxq={0} --txq={0}".format(self.rxq), + ) + # start testpmd in pipeline mode + elif "pipeline" in self._suite_result.test_case: + self.pmdout.start_testpmd( + cores="1S/4C/1T", + ports=[self.pf_pci], + port_options={self.pf_pci: "pipeline-mode-support=1"}, + eal_param="--log-level=ice,7", + param="--rxq={0} --txq={0}".format(self.rxq), ) else: self.pmdout.start_testpmd( - cores="1S/4C/1T", ports=[self.pf_pci], param="--rxq=16 --txq=16" + cores="1S/4C/1T", + ports=[self.pf_pci], + param="--rxq={0} --txq={0}".format(self.rxq), ) self.pmdout.execute_cmd("set fwd rxonly") self.pmdout.execute_cmd("set verbose 1") @@ -652,11 +663,166 @@ class ICEPFFlowPriorityTest(TestCase): queue = re.findall(p, out) self.verify(len(queue) == 1 and int(queue[0]) == 5, "drop pkt failed") + def test_create_fdir_rule_with_priority_0_pipeline(self): + """ + Create Flow Rules Only Supported by Fdir Filter with Priority 0 + """ + # create rules only supported by fdir with priority 0, check the rules can not be created. + out = self.dut.send_expect( + "flow create 0 priority 0 ingress pattern eth / ipv6 src is 1111:2222:3333:4444:5555:6666:7777:8888 dst is 1111:2222:3333:4444:5555:6666:7777:9999 / sctp src is 25 dst is 23 / end actions queue index 1 / end", + "testpmd> ", + ) + self.verify("Failed" in out, "failed: priority is not work") + out = self.dut.send_expect( + "flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 ttl is 20 / sctp src is 25 dst is 23 / end actions queue index 1 / end", + "testpmd> ", + ) + self.verify("Failed" in out, "failed: priority is not work") + self.dut.send_expect("quit", "# ") + + def test_create_switch_rule_with_priority_1_pipeline(self): + """ + Create flow rules only supported by switch filter with priority 1 + """ + # create rules only supported by switch with priority 1, check the rules can not be created. 
+ out = self.dut.send_expect( + "flow create 0 priority 1 ingress pattern eth / ipv4 / nvgre / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 3 / end", + "testpmd> ", + ) + self.verify("Failed" in out, "failed: priority is not work") + out = self.dut.send_expect( + "flow create 0 priority 1 ingress pattern eth / ipv4 / nvgre / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / udp src is 25 dst is 23 / end actions queue index 3 / end", + "testpmd> ", + ) + self.verify("Failed" in out, "failed: priority is not work") + self.dut.send_expect("quit", "# ") + + def test_priority_in_pipeline_mode(self): + """ + Create Flow Rules with Priority in Pipeline Mode + """ + rules = [ + "flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 1 / end", + "flow create 0 priority 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / udp src is 25 dst is 23 / end actions queue index 2 / end", + "flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.4 dst is 192.168.0.7 tos is 4 ttl is 20 / tcp src is 25 dst is 23 / end actions queue index 3 / end", + "flow create 0 priority 1 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.4 dst is 192.168.0.7 / udp src is 25 dst is 23 / end actions queue index 4 / end", + "flow create 0 ingress pattern eth / ipv4 src is 192.168.1.2 dst is 192.168.0.3 tos is 5 / tcp src is 25 dst is 23 / end actions queue index 1 / end", + "flow create 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 ttl is 20 / sctp src is 25 dst is 23 / end actions queue index 1 / end", + ] + pkts = [ + 'Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw("x"*80)', + 'Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:66")/IP()/UDP()/VXLAN()/Ether()/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/UDP(sport=25,dport=23)/Raw("x"*80)', + 'Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:66")/IP(src="192.168.0.4",dst="192.168.0.7",tos=4,ttl=20)/TCP(sport=25,dport=23)/Raw("x"*80)', + 'Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:66")/IP()/UDP()/VXLAN()/Ether()/IP(src="192.168.0.4",dst="192.168.0.7")/UDP(sport=25,dport=23)/Raw("x"*80)', + ] + # create fdir and switch rules with different priority + rule_list1 = self.process.create_rule( + rules[0:2], check_stats=True, msg="Succeeded to create (2) flow" + ) + rule_list2 = self.process.create_rule( + rules[2:4], check_stats=True, msg="Succeeded to create (1) flow" + ) + rule_list = rule_list1 + rule_list2 + self.process.check_rule(port_id=0, stats=True, rule_list=rule_list) + # send pkts and check the rules are written to different filter tables and the rules can work + self.pmdout.wait_link_status_up(self.dut_ports[0]) + out = self.process.send_pkt_get_out(pkts[0], port_id=0) + self.process.check_rx_packets( + out, check_param={"queue": 1}, expect_pkt=1, stats=True + ) + self.logger.info("pass: queue id is 1") + out = self.process.send_pkt_get_out(pkts[1], port_id=0) + self.process.check_rx_packets( + out, check_param={"queue": 2}, expect_pkt=1, stats=True + ) + self.logger.info("pass: queue id is 2") + out = self.process.send_pkt_get_out(pkts[2], port_id=0) + self.process.check_rx_packets( + out, check_param={"queue": 3}, expect_pkt=1, stats=True + ) + self.logger.info("pass: queue id is 3") + out = 
self.process.send_pkt_get_out(pkts[3], port_id=0) + self.process.check_rx_packets( + out, check_param={"queue": 4}, expect_pkt=1, stats=True + ) + self.logger.info("pass: queue id is 4") + # create rules without priority, only the pattern supported by switch can be created. + self.process.create_rule( + rules[4], check_stats=True, msg="Succeeded to create (2) flow" + ) + self.process.create_rule(rules[5], check_stats=False) + self.pmdout.execute_cmd("flow flush 0") + + def test_rules_with_same_parameters_different_action_pipeline(self): + """ + Create flow rules with same parameter but different actions + """ + rules = [ + "flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 1 / end", + "flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 3 / end", + ] + pkts = [ + 'Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw("x"*80)', + ] + + # create rules with same parameters but different action + rule_list1 = self.process.create_rule( + rules[0], check_stats=True, msg="Succeeded to create (2) flow" + ) + rule_list2 = self.process.create_rule( + rules[1], check_stats=True, msg="Succeeded to create (1) flow" + ) + rule_list = rule_list1 + rule_list2 + self.process.check_rule(port_id=0, stats=True, rule_list=rule_list) + # send a pkt to check the switch rule is work for its high priority + self.pmdout.wait_link_status_up(self.dut_ports[0]) + out = self.process.send_pkt_get_out(pkts[0], port_id=0) + self.process.check_rx_packets( + out, check_param={"queue": 1}, expect_pkt=1, stats=True + ) + self.logger.info("pass: queue id is 1") + # remove the switch rule and check the fdir rule is work + self.process.destroy_rule(port_id=0, rule_id=rule_list1) + out = self.process.send_pkt_get_out(pkts[0], port_id=0) + self.process.check_rx_packets( + out, check_param={"queue": 3}, expect_pkt=1, stats=True + ) + self.logger.info("pass: queue id is 3") + self.pmdout.execute_cmd("flow flush 0") + self.pmdout.quit() + # restart testpmd in pipeline mode + self.launch_testpmd() + # create rules with same parameters but different action + rule_list1 = self.process.create_rule( + rules[1], check_stats=True, msg="Succeeded to create (1) flow" + ) + rule_list2 = self.process.create_rule( + rules[0], check_stats=True, msg="Succeeded to create (2) flow" + ) + rule_list = rule_list1 + rule_list2 + self.process.check_rule(port_id=0, stats=True, rule_list=rule_list) + # send a pkt to check the switch rule is work for its high priority + self.pmdout.wait_link_status_up(self.dut_ports[0]) + self.pmdout.wait_link_status_up(self.dut_ports[0]) + out = self.process.send_pkt_get_out(pkts[0], port_id=0) + self.process.check_rx_packets( + out, check_param={"queue": 1}, expect_pkt=1, stats=True + ) + self.logger.info("pass: queue id is 1") + # remove the switch rule and check the fdir rule is work + self.process.destroy_rule(port_id=0, rule_id=rule_list2) + out = self.process.send_pkt_get_out(pkts[0], port_id=0) + self.process.check_rx_packets( + out, check_param={"queue": 3}, expect_pkt=1, stats=True + ) + self.logger.info("pass: queue id is 3") + self.pmdout.execute_cmd("flow flush 0") + def tear_down(self): """ Run after each test case. 
""" - self.pmdout.execute_cmd("quit", "# ") + self.pmdout.quit() def tear_down_all(self): """ diff --git a/tests/TestSuite_rteflow_priority.py b/tests/TestSuite_rteflow_priority.py deleted file mode 100644 index 813df85c..00000000 --- a/tests/TestSuite_rteflow_priority.py +++ /dev/null @@ -1,529 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2019 Intel Corporation -# - -""" -DPDK Test suite. - -Test rte_flow priority -""" - -import imp -import re -import string -import sys -import time -from time import sleep - -from scapy.utils import PcapWriter, socket, struct - -import framework.utils as utils -from framework.pktgen import PacketGeneratorHelper -from framework.pmd_output import PmdOutput -from framework.settings import HEADER_SIZE -from framework.test_case import TestCase - -imp.reload(sys) - - -class TestRteflowPriority(TestCase): - def set_up_all(self): - """ - Run at the start of each test suite. - - PMD prerequisites. - """ - self.dut_ports = self.dut.get_ports(self.nic) - localPort = self.tester.get_local_port(self.dut_ports[0]) - self.__tx_iface = self.tester.get_interface(localPort) - cores = self.dut.get_core_list("1S/5C/1T") - self.coreMask = utils.create_mask(cores) - self.portMask = utils.create_mask([self.dut_ports[0]]) - self.path = self.dut.apps_name["test-pmd"] - - def set_up(self): - """ - Run before each test case. - """ - pass - - # - # Utility methods and other non-test code. - # - ########################################################################### - scapyCmds = [] - - def check_link(self): - # check status in test case, dut and tester both should be up. - self.pmd_output = PmdOutput(self.dut) - res = self.pmd_output.wait_link_status_up("all", timeout=30) - if res is True: - for i in range(15): - out = self.tester.get_port_status(self.dut_ports[0]) - if out == "up": - break - else: - time.sleep(1) - return True - else: - return False - - def send_pkt(self, cmd): - """ - Send packages and verify behavior. - """ - self.tester.scapyCmds.append(cmd) - self.tester.scapy_execute() - - def get_packet_number(self, out, match_string): - """ - get the rx packets number. - """ - - out_lines = out.splitlines() - pkt_num = 0 - for i in range(len(out_lines)): - if match_string in out_lines[i]: - result_scanner = r"RX-packets:\s?(\d+)" - scanner = re.compile(result_scanner, re.DOTALL) - m = scanner.search(out_lines[i + 1]) - pkt_num = int(m.group(1)) - break - return pkt_num - - def check_queue_rx_packets_number(self, out, queue_id): - """ - check the queue rx packets number. - """ - match_string = "------- Forward Stats for RX Port= 0/Queue= %d" % queue_id - pkt_num = self.get_packet_number(out, match_string) - return pkt_num - - # - # test cases. - # - ########################################################################### - - def test_priority_in_pipeline_mode(self): - """ - priority is active in pipeline mode. 
- """ - # start testpmd in pipeline mode - # genarate eal - command = ( - self.path - + '-c %s -n 4 -a %s,pipeline-mode-support=1 --log-level="ice,7" -- -i --portmask=%s --rxq=10 --txq=10' - % ( - self.coreMask, - self.dut.ports_info[0]["pci"], - utils.create_mask([self.dut_ports[0]]), - ) - ) - out = self.dut.send_expect(command, "testpmd> ", 120) - self.logger.debug(out) - - self.dut.send_expect("set fwd rxonly", "testpmd> ", 15) - self.dut.send_expect("set verbose 1", "testpmd> ", 15) - self.dut.send_expect("rx_vxlan_port add 4789 0", "testpmd> ", 15) - - # create fdir and switch rules with different priority - out = self.dut.send_expect( - "flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 1 / end ", - "testpmd>", - 15, - ) - self.verify("Successed" and "(2)" in out, "failed: rule map to wrong filter") - out = self.dut.send_expect( - "flow create 0 priority 0 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / udp src is 25 dst is 23 / end actions queue index 2 / end ", - "testpmd>", - 15, - ) - self.verify("Successed" and "(2)" in out, "failed: rule map to wrong filter") - out = self.dut.send_expect( - "flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.4 dst is 192.168.0.7 tos is 4 ttl is 20 / tcp src is 25 dst is 23 / end actions queue index 3 / end ", - "testpmd>", - 15, - ) - self.verify("Successed" and "(1)" in out, "failed: rule map to wrong filter") - out = self.dut.send_expect( - "flow create 0 priority 1 ingress pattern eth / ipv4 / udp / vxlan / eth / ipv4 src is 192.168.0.4 dst is 192.168.0.7 / udp src is 25 dst is 23 / end actions queue index 4 / end ", - "testpmd>", - 15, - ) - self.verify("Successed" and "(1)" in out, "failed: rule map to wrong filter") - out = self.dut.send_expect("flow list 0", "testpmd> ", 15) - self.logger.info(out) - self.verify("3" in out, "failed: flow rules created error") - - # send pkts and check the rules are written to different filter tables and the rules can work - self.dut.send_expect("start", "testpmd>", 20) - a = self.check_link() - self.verify(a, "failed: link can not up") - self.send_pkt( - 'sendp([Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:66")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw("x"*80)],iface="%s")' - % (self.__tx_iface) - ) - out = self.dut.send_expect("stop", "testpmd>", 20) - pkt_num = self.check_queue_rx_packets_number(out, 1) - self.verify(pkt_num == 1, "failed: the flow rule can not work") - self.logger.info("pass: queue id is 1") - - self.dut.send_expect("start", "testpmd>", 20) - self.send_pkt( - 'sendp([Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:66")/IP()/UDP()/VXLAN()/Ether()/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/UDP(sport=25,dport=23)/Raw("x"*80)],iface="%s")' - % (self.__tx_iface) - ) - out = self.dut.send_expect("stop", "testpmd>", 20) - pkt_num = self.check_queue_rx_packets_number(out, 2) - self.verify(pkt_num == 1, "failed: the flow rule can not work") - self.logger.info("pass: queue id is 2") - - self.dut.send_expect("start", "testpmd>", 20) - self.send_pkt( - 'sendp([Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:66")/IP(src="192.168.0.4",dst="192.168.0.7",tos=4,ttl=20)/TCP(sport=25,dport=23)/Raw("x"*80)],iface="%s")' - % (self.__tx_iface) - ) - out = self.dut.send_expect("stop", "testpmd>", 20) - pkt_num = self.check_queue_rx_packets_number(out, 3) - self.verify(pkt_num == 1, "failed: 
the flow rule can not work") - self.logger.info("pass: queue id is 3") - - self.dut.send_expect("start", "testpmd>", 20) - self.send_pkt( - 'sendp([Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:66")/IP()/UDP()/VXLAN()/Ether()/IP(src="192.168.0.4",dst="192.168.0.7")/UDP(sport=25,dport=23)/Raw("x"*80)],iface="%s")' - % (self.__tx_iface) - ) - out = self.dut.send_expect("stop", "testpmd>", 20) - pkt_num = self.check_queue_rx_packets_number(out, 4) - self.verify(pkt_num == 1, "failed: the flow rule can not work") - self.logger.info("pass: queue id is 4") - - # create rules without priority, only the pattern supported by switch can be created. - out = self.dut.send_expect( - "flow create 0 ingress pattern eth / ipv4 src is 192.168.1.2 dst is 192.168.0.3 tos is 5 / tcp src is 25 dst is 23 / end actions queue index 1 / end ", - "testpmd>", - 15, - ) - self.verify("Failed" not in out, "failed: default priority 0 is not work") - out = self.dut.send_expect( - "flow create 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 ttl is 20 / sctp src is 25 dst is 23 / end actions queue index 1 / end ", - "testpmd>", - 15, - ) - self.verify( - "Failed" in out, - "failed: pattern only support by fdir can not be created in default priority", - ) - - self.dut.send_expect("flow flush 0", "testpmd>", 20) - self.dut.send_expect("quit", "#", 50) - - def test_priority_in_nonpipeline_mode(self): - """ - priority is not active in pipeline mode. - """ - - # start testpmd without pipeline-mode-support parameter, check the testpmd is launch in non-pipeline mode - command = ( - self.path - + '-c %s -n 4 -a %s --log-level="ice,7" -- -i --portmask=%s --rxq=10 --txq=10' - % ( - self.coreMask, - self.dut.ports_info[0]["pci"], - utils.create_mask([self.dut_ports[0]]), - ) - ) - out = self.dut.send_expect(command, "testpmd> ", 120) - self.logger.debug(out) - - out = self.dut.send_expect( - "flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 2 / mark / end", - "testpmd>", - 15, - ) - self.verify( - "Successed" and "(1)" in out, "failed: rule can't be created to fdir" - ) - out = self.dut.send_expect( - "flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 2 / mark / end", - "testpmd>", - 15, - ) - self.verify( - "Failed" in out, - "failed: default value of priority is 0 in non-pipeline mode", - ) - self.dut.send_expect("flow flush 0", "testpmd>", 20) - self.dut.send_expect("quit", "#", 50) - - # restart testpmd with pipeline-mode-support=0, check the testpmd is launch in non-pipeline mode - command = ( - self.path - + '-c %s -n 4 -a %s,pipeline-mode-support=0 --log-level="ice,7" -- -i --portmask=%s --rxq=10 --txq=10' - % ( - self.coreMask, - self.dut.ports_info[0]["pci"], - utils.create_mask([self.dut_ports[0]]), - ) - ) - out = self.dut.send_expect(command, "testpmd> ", 120) - self.logger.debug(out) - - out = self.dut.send_expect( - "flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 2 / mark / end", - "testpmd>", - 15, - ) - self.verify( - "Successed" and "(1)" in out, "failed: rule can't be created to fdir" - ) - out = self.dut.send_expect( - "flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 2 / mark / end", - "testpmd>", - 15, - ) - self.verify( - "Failed" in out, - "failed: default value of priority is 0 
in non-pipeline mode", - ) - self.dut.send_expect("flow flush 0", "testpmd>", 20) - self.dut.send_expect("quit", "#", 50) - - def test_no_destination_high_prority(self): - """ - no destination high priority rule is not acceptable. - """ - - # start testpmd in pipeline mode - command = ( - self.path - + '-c %s -n 4 -a %s,pipeline-mode-support=1 --log-level="ice,7" -- -i --portmask=%s --rxq=10 --txq=10' - % ( - self.coreMask, - self.dut.ports_info[0]["pci"], - utils.create_mask([self.dut_ports[0]]), - ) - ) - out = self.dut.send_expect(command, "testpmd> ", 120) - self.logger.debug(out) - - # create no destination high priority rules, check the rules can not be created. - out = self.dut.send_expect( - "flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions / end", - "testpmd>", - 15, - ) - self.verify( - "Bad argument" in out, - "failed: no destination high priority rule is not acceptable", - ) - out = self.dut.send_expect( - "flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end", - "testpmd>", - 15, - ) - self.verify( - "Bad argument" in out, - "failed: no destination high priority rule is not acceptable", - ) - self.dut.send_expect("quit", "#", 50) - - def test_create_fdir_rule_with_priority_0(self): - """ - create a rule only supported by fdir filter with priority 0 is not acceptable. - """ - - # start testpmd in pipeline mode - command = ( - self.path - + '-c %s -n 4 -a %s,pipeline-mode-support=1 --log-level="ice,7" -- -i --portmask=%s --rxq=10 --txq=10' - % ( - self.coreMask, - self.dut.ports_info[0]["pci"], - utils.create_mask([self.dut_ports[0]]), - ) - ) - out = self.dut.send_expect(command, "testpmd> ", 120) - self.logger.debug(out) - - # create rules only supported by fdir with priority 0, check the rules can not be created. - out = self.dut.send_expect( - "flow create 0 priority 0 ingress pattern eth / ipv6 src is 1111:2222:3333:4444:5555:6666:7777:8888 dst is 1111:2222:3333:4444:5555:6666:7777:9999 / sctp src is 25 dst is 23 / end actions queue index 1 / end", - "testpmd>", - 15, - ) - self.verify("Failed" in out, "failed: priority is not work") - out = self.dut.send_expect( - "flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 ttl is 20 / sctp src is 25 dst is 23 / end actions queue index 1 / end", - "testpmd>", - 15, - ) - self.verify("Failed" in out, "failed: priority is not work") - self.dut.send_expect("quit", "#", 50) - - def test_create_switch_rule_with_priority_1(self): - """ - create a rule only supported by switch filter with priority 1 is not acceptable. - """ - - # start testpmd in pipeline mode - command = ( - self.path - + '-c %s -n 4 -a %s,pipeline-mode-support=1 --log-level="ice,7" -- -i --portmask=%s --rxq=10 --txq=10' - % ( - self.coreMask, - self.dut.ports_info[0]["pci"], - utils.create_mask([self.dut_ports[0]]), - ) - ) - out = self.dut.send_expect(command, "testpmd> ", 120) - self.logger.debug(out) - - # create rules only supported by switch with priority 1, check the rules can not be created. 
- out = self.dut.send_expect( - "flow create 0 priority 1 ingress pattern eth / ipv4 / nvgre / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / end actions queue index 3 / end", - "testpmd>", - 15, - ) - self.verify("Failed" in out, "failed: priority is not work") - out = self.dut.send_expect( - "flow create 0 priority 1 ingress pattern eth / ipv4 / nvgre / eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / udp src is 25 dst is 23 / end actions queue index 3 / end", - "testpmd>", - 15, - ) - self.verify("Failed" in out, "failed: priority is not work") - self.dut.send_expect("quit", "#", 50) - - def test_rules_with_same_parameters_different_action(self): - """ - it's acceptable to create same rules with different filter in pipeline mode. - """ - - # start testpmd in pipeline mode - command = ( - self.path - + '-c %s -n 4 -a %s,pipeline-mode-support=1 --log-level="ice,7" -- -i --portmask=%s --rxq=10 --txq=10' - % ( - self.coreMask, - self.dut.ports_info[0]["pci"], - utils.create_mask([self.dut_ports[0]]), - ) - ) - out = self.dut.send_expect(command, "testpmd> ", 120) - - self.dut.send_expect("set fwd rxonly", "testpmd> ", 15) - self.dut.send_expect("set verbose 1", "testpmd> ", 15) - - # create rules with same parameters but different action - out = self.dut.send_expect( - "flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 1 / end", - "testpmd>", - 15, - ) - self.verify( - "Successed" and "(2)" in out, "failed: switch rule can't be created" - ) - out = self.dut.send_expect( - "flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 3 / end", - "testpmd>", - 15, - ) - self.verify("Successed" and "(1)" in out, "failed: fdir rule can't be created") - - # send a pkt to check the switch rule is work for its high priority - self.dut.send_expect("start", "testpmd>", 20) - a = self.check_link() - self.verify(a, "failed: link can not up") - - self.send_pkt( - 'sendp([Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:01")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw("x"*80)],iface="%s")' - % (self.__tx_iface) - ) - out = self.dut.send_expect("stop", "testpmd>", 20) - pkt_num = self.check_queue_rx_packets_number(out, 1) - self.verify(pkt_num == 1, "failed: the flow rule can not work") - self.logger.info("pass: queue id is 1") - - # remove the switch rule and check the fdir rule is work - self.dut.send_expect("flow destroy 0 rule 0", "testpmd>", 15) - self.dut.send_expect("start", "testpmd>", 20) - self.send_pkt( - 'sendp([Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:02")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw("x"*80)],iface="%s")' - % (self.__tx_iface) - ) - out = self.dut.send_expect("stop", "testpmd>", 20) - pkt_num = self.check_queue_rx_packets_number(out, 3) - self.verify(pkt_num == 1, "failed: the flow rule can not work") - self.logger.info("pass: queue id is 3") - - self.dut.send_expect("flow flush 0", "testpmd>", 15) - self.dut.send_expect("quit", "#", 50) - - # restart testpmd in pipeline mode - command = ( - self.path - + '-c %s -n 4 -a %s,pipeline-mode-support=1 --log-level="ice,7" -- -i --portmask=%s --rxq=10 --txq=10' - % ( - self.coreMask, - self.dut.ports_info[0]["pci"], - utils.create_mask([self.dut_ports[0]]), - ) - ) - out = self.dut.send_expect(command, "testpmd> ", 120) - self.logger.debug(out) - - 
self.dut.send_expect("set fwd rxonly", "testpmd> ", 15) - self.dut.send_expect("set verbose 1", "testpmd> ", 15) - - # create rules with same parameters but different action - out = self.dut.send_expect( - "flow create 0 priority 1 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 3 / end", - "testpmd>", - 15, - ) - self.verify("Successed" and "(1)" in out, "failed: fdir rule can't be created") - out = self.dut.send_expect( - "flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 tos is 4 / tcp src is 25 dst is 23 / end actions queue index 1 / end", - "testpmd>", - 15, - ) - self.verify( - "Successed" and "(2)" in out, "failed: switch rule can't be created" - ) - - # send a pkt to check the switch rule is work for its high priority - self.dut.send_expect("start", "testpmd>", 20) - a = self.check_link() - self.verify(a, "failed: link can not up") - self.send_pkt( - 'sendp([Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:01")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw("x"*80)],iface="%s")' - % (self.__tx_iface) - ) - out = self.dut.send_expect("stop", "testpmd>", 20) - pkt_num = self.check_queue_rx_packets_number(out, 1) - self.verify(pkt_num == 1, "failed: the flow rule can not work") - self.logger.info("pass: queue id is 1") - - # remove the switch rule and check the fdir rule is work - self.dut.send_expect("flow destroy 0 rule 1", "testpmd>", 15) - self.dut.send_expect("start", "testpmd>", 20) - self.send_pkt( - 'sendp([Ether(dst="00:00:00:00:01:00",src="11:22:33:44:55:02")/IP(src="192.168.0.2",dst="192.168.0.3",tos=4)/TCP(sport=25,dport=23)/Raw("x"*80)],iface="%s")' - % (self.__tx_iface) - ) - out = self.dut.send_expect("stop", "testpmd>", 20) - pkt_num = self.check_queue_rx_packets_number(out, 3) - self.verify(pkt_num == 1, "failed: the flow rule can not work") - self.logger.info("pass: queue id is 3") - - self.dut.send_expect("flow flush 0", "testpmd>", 20) - self.dut.send_expect("quit", "#", 50) - - # - ########################################################################### - - def tear_down_all(self): - pass - - def tear_down(self): - self.dut.kill_all()