From patchwork Thu Feb 10 09:30:59 2022
X-Patchwork-Submitter: Weiyuan Li
X-Patchwork-Id: 107086
From: Weiyuan Li
To: dts@dpdk.org, lijuan.tu@intel.com
Cc: Weiyuan Li
Subject: [dts][PATCH V2 2/2] test_plans/generic_flow_api_test_plan: remove dpdk code modification
Date: Thu, 10 Feb 2022 17:30:59 +0800
Message-Id: <20220210093059.31049-2-weiyuanx.li@intel.com>
In-Reply-To: <20220210093059.31049-1-weiyuanx.li@intel.com>
References: <20220210093059.31049-1-weiyuanx.li@intel.com>

Sync the test plan: remove the DPDK code modification.

Signed-off-by: Weiyuan Li
---
v2:
- Modify the number of dpdk-testpmd cores in the test plan.

 test_plans/generic_flow_api_test_plan.rst | 28 +++++++++--------------
 1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/test_plans/generic_flow_api_test_plan.rst b/test_plans/generic_flow_api_test_plan.rst
index c4c0301e..98baec62 100644
--- a/test_plans/generic_flow_api_test_plan.rst
+++ b/test_plans/generic_flow_api_test_plan.rst
@@ -2213,20 +2213,14 @@ the packet are not received on the queue 2::
 
     testpmd> stop
 
-Test Case: 128 queues
+Test Case: 64 queues
 ========================
 
-This case is designed for NIC(niantic). Since NIC(niantic) has 128 transmit
-queues, it should be supports 128 kinds of filter if Hardware have enough
-cores.
-DPDK enable 64 queues in ixgbe driver by default. Enlarge queue number to 128
-for 128 queues test::
-
-    sed -i -e 's/#define IXGBE_NONE_MODE_TX_NB_QUEUES 64$/#define IXGBE_NONE_MODE_TX_NB_QUEUES 128/' drivers/net/ixgbe/ixgbe_ethdev.h
+This case is designed for NIC(niantic).
+64 queues are used by default for this test.
 
 Launch the app ``testpmd`` with the following arguments::
 
-    ./dpdk-testpmd -l 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53 -n 4 -- -i --disable-rss --rxq=128 --txq=128 --portmask=0x3 --nb-cores=4 --total-num-mbufs=263168
+    ./dpdk-testpmd -l 1,2,3,4,5 -n 4 -- -i --disable-rss --rxq=64 --txq=64 --portmask=0x3 --nb-cores=4 --total-num-mbufs=263168
 
     testpmd>set stat_qmap rx 0 0 0
     testpmd>set stat_qmap rx 1 0 0
@@ -2235,17 +2229,17 @@ Launch the app ``testpmd`` with the following arguments::
     testpmd>vlan set filter off 0
     testpmd>vlan set filter off 1
 
-Create the 5-tuple Filters with different queues (64,127) on port 0 for
+Create the 5-tuple Filters with different queues (32,63) on port 0 for
 niantic::
 
-    testpmd> set stat_qmap rx 0 64 1
-    testpmd> flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 / tcp dst is 1 src is 1 / end actions queue index 64 / end
-    testpmd> set stat_qmap rx 0 127 2
-    testpmd> flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 / tcp dst is 2 src is 1 / end actions queue index 127 / end
+    testpmd> set stat_qmap rx 0 32 1
+    testpmd> flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 / tcp dst is 1 src is 1 / end actions queue index 32 / end
+    testpmd> set stat_qmap rx 0 63 2
+    testpmd> flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.5 src is 2.2.2.4 / tcp dst is 2 src is 1 / end actions queue index 63 / end
 
 Send packets(`dst_ip` = 2.2.2.5 `src_ip` = 2.2.2.4 `dst_port` = 1
 `src_port` = 1 `protocol` = tcp) and (`dst_ip` = 2.2.2.5 `src_ip` = 2.2.2.4
 `dst_port` = 2 `src_port` = 1 `protocol` = tcp ). Then reading the stats
 for port 0 after
-sending packets. packets are received on the queue 64 and queue 127 When
-setting 5-tuple Filter with queue(128), it will display failure because the
-number of queues no more than 128.
+sending packets. Packets are received on queue 32 and queue 63. When
+setting a 5-tuple filter with queue(64), it will fail because the queue
+index must be less than 64.
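For reference, the queue assignment the updated test plan expects from the two flow rules can be modeled with a short Python sketch. The rule table and the `match_queue` helper are illustrative only (not part of DTS or testpmd); the out-of-range check mirrors the queue(64) failure described in the plan:

```python
# Model of the two 5-tuple flow rules created in the test plan:
# a packet matching (dst_ip, src_ip, dst_port, src_port, proto) lands on
# the listed queue; anything else falls back to queue 0 (RSS is disabled).
RULES = {
    ("2.2.2.5", "2.2.2.4", 1, 1, "tcp"): 32,
    ("2.2.2.5", "2.2.2.4", 2, 1, "tcp"): 63,
}

def match_queue(dst_ip, src_ip, dst_port, src_port, proto, n_queues=64):
    """Return the RX queue for a packet's 5-tuple, raising if a rule
    targets a queue outside 0..n_queues-1 (the queue(64) failure case)."""
    queue = RULES.get((dst_ip, src_ip, dst_port, src_port, proto), 0)
    if queue >= n_queues:
        raise ValueError(f"queue index {queue} not in 0..{n_queues - 1}")
    return queue

print(match_queue("2.2.2.5", "2.2.2.4", 1, 1, "tcp"))  # 32
print(match_queue("2.2.2.5", "2.2.2.4", 2, 1, "tcp"))  # 63
```

A rule created with `actions queue index 64` on a 64-queue port would fail the range check, which is the behavior the last step of the test case verifies.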