From patchwork Mon Jan 30 17:00:39 2023
X-Patchwork-Submitter: Jiawei Wang
X-Patchwork-Id: 122696
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawei Wang
To: "Aman Singh", Yuying Zhang, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v2 1/2] ethdev: add PHY affinity match item
Date: Mon, 30 Jan 2023 19:00:39 +0200
Message-ID: <20230130170041.1360-2-jiaweiw@nvidia.com>
In-Reply-To: <20230130170041.1360-1-jiaweiw@nvidia.com>
References: <20230130170041.1360-1-jiaweiw@nvidia.com>
List-Id: DPDK patches and discussions
When multiple hardware ports are connected to a single DPDK port (mhpsdp),
there is currently no information to indicate which hardware port a given
packet was received on.

This patch introduces a new phy affinity item in the rte_flow API; the phy
affinity value reflects the physical port on which a packet was received.
When a flow matches on phy affinity and the same phy_affinity value is set
on a Tx queue, packets can be sent back through the same hardware port they
were received on.

This patch also adds the testpmd command line to match the new item:

  flow create 0 ingress group 0 pattern phy_affinity affinity is 1 / end actions queue index 0 / end

The command above creates a flow rule on a single DPDK port, matches
packets coming from the first physical port (assuming phy affinity 1
stands for the first port), and redirects them to RxQ 0.

Signed-off-by: Jiawei Wang
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 | 29 +++++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst          |  8 ++++++
 doc/guides/rel_notes/release_23_03.rst      |  5 ++++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  4 +++
 lib/ethdev/rte_flow.c                       |  1 +
 lib/ethdev/rte_flow.h                       | 28 ++++++++++++++++++++
 6 files changed, 75 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 88108498e0..a6d4615038 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -465,6 +465,8 @@ enum index {
 	ITEM_METER,
 	ITEM_METER_COLOR,
 	ITEM_METER_COLOR_NAME,
+	ITEM_PHY_AFFINITY,
+	ITEM_PHY_AFFINITY_VALUE,
 
 	/* Validate/create actions. */
 	ACTIONS,
@@ -1355,6 +1357,7 @@ static const enum index next_item[] = {
 	ITEM_L2TPV2,
 	ITEM_PPP,
 	ITEM_METER,
+	ITEM_PHY_AFFINITY,
 	END_SET,
 	ZERO,
 };
@@ -1821,6 +1824,12 @@ static const enum index item_meter[] = {
 	ZERO,
 };
 
+static const enum index item_phy_affinity[] = {
+	ITEM_PHY_AFFINITY_VALUE,
+	ITEM_NEXT,
+	ZERO,
+};
+
 static const enum index next_action[] = {
 	ACTION_END,
 	ACTION_VOID,
@@ -6443,6 +6452,23 @@ static const struct token token_list[] = {
 			     ARGS_ENTRY(struct buffer, port)),
 		.call = parse_mp,
 	},
+	[ITEM_PHY_AFFINITY] = {
+		.name = "phy_affinity",
+		.help = "match on the physical affinity of the"
+			" received packet.",
+		.priv = PRIV_ITEM(PHY_AFFINITY,
+				  sizeof(struct rte_flow_item_phy_affinity)),
+		.next = NEXT(item_phy_affinity),
+		.call = parse_vc,
+	},
+	[ITEM_PHY_AFFINITY_VALUE] = {
+		.name = "affinity",
+		.help = "physical affinity value",
+		.next = NEXT(item_phy_affinity, NEXT_ENTRY(COMMON_UNSIGNED),
+			     item_param),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_item_phy_affinity,
+					affinity)),
+	},
 };
 
 /** Remove and return last entry from argument stack. */
@@ -10981,6 +11007,9 @@ flow_item_default_mask(const struct rte_flow_item *item)
 	case RTE_FLOW_ITEM_TYPE_METER_COLOR:
 		mask = &rte_flow_item_meter_color_mask;
 		break;
+	case RTE_FLOW_ITEM_TYPE_PHY_AFFINITY:
+		mask = &rte_flow_item_phy_affinity_mask;
+		break;
 	default:
 		break;
 	}
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803d..3b4e8923dc 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1544,6 +1544,14 @@ Matches Color Marker set by a Meter.
 
 - ``color``: Metering color marker.
 
+Item: ``PHY_AFFINITY``
+^^^^^^^^^^^^^^^^^^^^^^
+
+Matches on the physical affinity of the received packet, i.e. the physical
+port in the group of physical ports connected to a single DPDK port.
+
+- ``affinity``: Physical affinity.
+
 Actions
 ~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index c15f6fbb9f..a1abd67771 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -69,6 +69,11 @@ New Features
   ``rte_event_dev_config::nb_single_link_event_port_queues`` parameter
   required for eth_rx, eth_tx, crypto and timer eventdev adapters.
 
+* **Added rte_flow support for matching PHY affinity fields.**
+
+  For multiple hardware ports connected to a single DPDK port (mhpsdp),
+  added a ``phy_affinity`` item in rte_flow to support matching on the
+  physical affinity of the packets.
 
 Removed Items
 -------------
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0037506a79..1853030e93 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3712,6 +3712,10 @@ This section lists supported pattern items and their attributes, if any.
 
   - ``color {value}``: meter color value (green/yellow/red).
 
+- ``phy_affinity``: match physical affinity.
+
+  - ``affinity {value}``: physical affinity value.
+
 - ``send_to_kernel``: send packets to kernel.
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7d0c24366c..0c2d3b679b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -157,6 +157,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	MK_FLOW_ITEM(L2TPV2, sizeof(struct rte_flow_item_l2tpv2)),
 	MK_FLOW_ITEM(PPP, sizeof(struct rte_flow_item_ppp)),
 	MK_FLOW_ITEM(METER_COLOR, sizeof(struct rte_flow_item_meter_color)),
+	MK_FLOW_ITEM(PHY_AFFINITY, sizeof(struct rte_flow_item_phy_affinity)),
 };
 
 /** Generate flow_action[] entry. */
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b..56c04ea37c 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -624,6 +624,13 @@ enum rte_flow_item_type {
 	 * See struct rte_flow_item_meter_color.
 	 */
 	RTE_FLOW_ITEM_TYPE_METER_COLOR,
+
+	/**
+	 * Matches on the physical affinity of the received packet.
+	 *
+	 * @see struct rte_flow_item_phy_affinity.
+	 */
+	RTE_FLOW_ITEM_TYPE_PHY_AFFINITY,
 };
 
 /**
@@ -2103,6 +2110,27 @@ static const struct rte_flow_item_meter_color rte_flow_item_meter_color_mask = {
 };
 #endif
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * RTE_FLOW_ITEM_TYPE_PHY_AFFINITY
+ *
+ * For multiple hardware ports connected to a single DPDK port (mhpsdp),
+ * use this item to match the physical affinity of the packets.
+ */
+struct rte_flow_item_phy_affinity {
+	uint8_t affinity; /**< Physical affinity value. */
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_PHY_AFFINITY. */
+#ifndef __cplusplus
+static const struct rte_flow_item_phy_affinity
+rte_flow_item_phy_affinity_mask = {
+	.affinity = 0xff,
+};
+#endif
+
 /**
  * Action types.
  *

From patchwork Mon Jan 30 17:00:40 2023
X-Patchwork-Submitter: Jiawei Wang
X-Patchwork-Id: 122695
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawei Wang
To: "Aman Singh", Yuying Zhang, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API
Date: Mon, 30 Jan 2023 19:00:40 +0200
Message-ID: <20230130170041.1360-3-jiaweiw@nvidia.com>
In-Reply-To: <20230130170041.1360-1-jiaweiw@nvidia.com>
References: <20230130170041.1360-1-jiaweiw@nvidia.com>
When multiple hardware ports are connected to a single DPDK port (mhpsdp),
the previous patch introduced a new rte_flow item to match the phy affinity
of the received packets.

This patch adds a tx_phy_affinity setting to the Tx queue API; the affinity
value determines which hardware port packets are sent through. Value 0
means no affinity: traffic is routed between the different physical ports.
If 0 is disabled, trying to match on phy_affinity 0 will result in an error.

The new tx_phy_affinity field is added into the padding hole of the
rte_eth_txconf structure, so the size of rte_eth_txconf stays the same.
A suppression rule for the structure change is added to the ABI check file.
This patch adds the testpmd command line:

  testpmd> port config (port_id) txq (queue_id) phy_affinity (value)

For example, assume two hardware ports 0 and 1 are connected to a single
DPDK port (port id 0), with phy_affinity 1 standing for hardware port 0
and phy_affinity 2 for hardware port 1. The commands below configure the
Tx phy affinity per Tx queue:

  port config 0 txq 0 phy_affinity 1
  port config 0 txq 1 phy_affinity 1
  port config 0 txq 2 phy_affinity 2
  port config 0 txq 3 phy_affinity 2

These commands assign phy affinity 1 to TxQ 0 and TxQ 1, so packets sent
on either of those queues leave through hardware port 0; likewise, packets
sent on TxQ 2 or TxQ 3 leave through hardware port 1.

Signed-off-by: Jiawei Wang
---
 app/test-pmd/cmdline.c                      | 84 +++++++++++++++++++++
 devtools/libabigail.abignore                |  5 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 13 ++++
 lib/ethdev/rte_ethdev.h                     |  7 ++
 4 files changed, 109 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b32dc8bfd4..768f35cb02 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -764,6 +764,10 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port cleanup (port_id) txq (queue_id) (free_cnt)\n"
 			"    Cleanup txq mbufs for a specific Tx queue\n\n"
+
+			"port config (port_id) txq (queue_id) phy_affinity (value)\n"
+			"    Set the physical affinity value "
+			"on a specific Tx queue\n\n"
 		);
 }
@@ -12621,6 +12625,85 @@ static cmdline_parse_inst_t cmd_show_port_flow_transfer_proxy = {
 	}
 };
 
+/* *** configure port txq phy_affinity value *** */
+struct cmd_config_tx_phy_affinity {
+	cmdline_fixed_string_t port;
+	cmdline_fixed_string_t config;
+	portid_t portid;
+	cmdline_fixed_string_t txq;
+	uint16_t qid;
+	cmdline_fixed_string_t phy_affinity;
+	uint16_t value;
+};
+
+static void
+cmd_config_tx_phy_affinity_parsed(void *parsed_result,
+				  __rte_unused struct cmdline *cl,
+				  __rte_unused void *data)
+{
+	struct cmd_config_tx_phy_affinity *res = parsed_result;
+	struct rte_port *port;
+
+	if (port_id_is_invalid(res->portid, ENABLED_WARN))
+		return;
+
+	if (res->portid == (portid_t)RTE_PORT_ALL) {
+		printf("Invalid port id\n");
+		return;
+	}
+
+	port = &ports[res->portid];
+
+	if (strcmp(res->txq, "txq")) {
+		printf("Unknown parameter\n");
+		return;
+	}
+	if (tx_queue_id_is_invalid(res->qid))
+		return;
+
+	port->txq[res->qid].conf.tx_phy_affinity = res->value;
+
+	cmd_reconfig_device_queue(res->portid, 0, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 port, "port");
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_config =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 config, "config");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_portid =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+			      portid, RTE_UINT16);
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_txq =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 txq, "txq");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_qid =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+			      qid, RTE_UINT16);
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_hwport =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 phy_affinity, "phy_affinity");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+			      value, RTE_UINT16);
+
+static cmdline_parse_inst_t cmd_config_tx_phy_affinity = {
+	.f = cmd_config_tx_phy_affinity_parsed,
+	.data = (void *)0,
+	.help_str = "port config <port_id> txq <queue_id> phy_affinity <value>",
+	.tokens = {
+		(void *)&cmd_config_tx_phy_affinity_port,
+		(void *)&cmd_config_tx_phy_affinity_config,
+		(void *)&cmd_config_tx_phy_affinity_portid,
+		(void *)&cmd_config_tx_phy_affinity_txq,
+		(void *)&cmd_config_tx_phy_affinity_qid,
+		(void *)&cmd_config_tx_phy_affinity_hwport,
+		(void *)&cmd_config_tx_phy_affinity_value,
+		NULL,
+	},
+};
+
 /* ******************************************************************************** */
 
 /* list of instructions */
@@ -12851,6 +12934,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_show_capability,
 	(cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,
 	(cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,
+	(cmdline_parse_inst_t *)&cmd_config_tx_phy_affinity,
 	NULL,
 };
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 7a93de3ba1..cbbde4ef05 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -20,6 +20,11 @@
 [suppress_file]
         soname_regexp = ^librte_.*mlx.*glue\.
 
+; Ignore fields inserted in middle padding of rte_eth_txconf
+[suppress_type]
+        name = rte_eth_txconf
+        has_data_member_inserted_between = {offset_after(tx_deferred_start), offset_of(offloads)}
+
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 ; Experimental APIs exceptions ;
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 1853030e93..e9f20607a2 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1605,6 +1605,19 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+Config per queue Tx physical affinity
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure a per queue physical affinity value only on a specific Tx queue::
+
+   testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
+
+* ``phy_affinity``: determines which hardware port packets will be sent to.
+  Use it when multiple hardware ports are connected to
+  a single DPDK port (mhpsdp).
+
+This command should be run when the port is stopped, or else it will fail.
+
 Config VXLAN Encap outer layers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index c129ca1eaf..b30467c192 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1138,6 +1138,13 @@ struct rte_eth_txconf {
 				      less free descriptors than this value. */
 	uint8_t tx_deferred_start; /**< Do not start queue with
 				      rte_eth_dev_start(). */
+	/**
+	 * Physical affinity to be set.
+	 * Value 0 means no affinity: traffic could be routed between
+	 * different physical ports. If 0 is disabled, then trying to match
+	 * on phy_affinity 0 will result in an error.
+	 */
+	uint8_t tx_phy_affinity;
 	/**
 	 * Per-queue Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa