From patchwork Wed Jul 21 15:55:47 2021
X-Patchwork-Submitter: Havlík Martin
X-Patchwork-Id: 96174
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Martin Havlik
To: xhavli56@stud.fit.vutbr.cz, Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko,
    Thomas Monjalon, Asaf Penso, Jiawei Wang, Bing Zhao, Xueming Li,
    Tal Shnaiderman, Shun Hao, Ciara Power, Bruce Richardson, Michael Baum,
    Raslan Darawsheh
Cc: dev@dpdk.org, Jan Viktorin
Date: Wed, 21 Jul 2021 17:55:47 +0200
Message-Id: <20210721155550.188663-2-xhavli56@stud.fit.vutbr.cz>
Subject: [dpdk-dev] [PATCH 1/4] doc: clarify RTE flow behaviour on port stop/start

It is now clearly stated that RTE flow rules can be created only after
the port is started.

Signed-off-by: Martin Havlik
---
 doc/guides/nics/mlx5.rst | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index f5b727c1ee..119d537adf 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1790,21 +1790,25 @@ Notes for rte_flow
 ------------------

 Flows are not cached in the driver.
 When stopping a device port, all the flows created on this port from the
 application will be flushed automatically in the background.
 After stopping the device port, all flows on this port become invalid and
 not represented in the system.
 All references to these flows held by the application should be discarded
 directly but neither destroyed nor flushed.

-The application should re-create the flows as required after the port restart.
+The application should re-create the flows as required after the port is
+started again.
+
+Creating flows before port start is not permitted. All flows the application
+wants to create have to be created after the port is started.

 Notes for testpmd
 -----------------

 Compared to librte_net_mlx4 that implements a single RSS configuration per
 port, librte_net_mlx5 supports per-protocol RSS configuration.
 Since ``testpmd`` defaults to IP RSS mode and there is currently no
 command-line parameter to enable additional protocols (UDP and TCP as well
 as IP), the following commands must be entered from its CLI to get the same

From patchwork Wed Jul 21 15:58:14 2021
X-Patchwork-Submitter: Havlík Martin
X-Patchwork-Id: 96175
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Martin Havlik
To: xhavli56@stud.fit.vutbr.cz, Ori Kam, Ajit Khaparde, Thomas Monjalon,
    Andrew Rybchenko, Ferruh Yigit, Dekel Peled, Bing Zhao, Gregory Etelson,
    Eli Britstein, Alexander Kozyrev
Cc: dev@dpdk.org, Jan Viktorin
Date: Wed, 21 Jul 2021 17:58:14 +0200
Message-Id: <20210721155816.188795-3-xhavli56@stud.fit.vutbr.cz>
Subject: [dpdk-dev] [PATCH 2/4] doc: specify RTE flow create behaviour

Whether RTE flow rules can be created before or after the port is started
can and does differ between PMDs. The documentation now reflects that.

Signed-off-by: Martin Havlik
---
 doc/guides/prog_guide/rte_flow.rst | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c..2988e3328a 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3097,6 +3097,10 @@ actually created and a handle returned.
                const struct rte_flow_action *actions[],
                struct rte_flow_error *error);

+The ability to create a flow rule may depend on the status (started/stopped)
+of the port for which the rule is being created. This behaviour is
+PMD specific. Seek relevant PMD documentation for details.
+
 Arguments:

 - ``port_id``: port identifier of Ethernet device.
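The two documentation patches above describe an ordering constraint rather
than an API change. Below is a minimal sketch of what they imply for an
application targeting a PMD with this restriction (such as mlx5): rules are
created only after rte_eth_dev_start() and re-created after every
stop/start cycle. The helper names and the reduced error handling are
illustrative assumptions, not code from the patch series.

#include <rte_ethdev.h>
#include <rte_flow.h>

static struct rte_flow *
install_queue_rule(uint16_t port_id, uint16_t queue)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue_conf = { .index = queue };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/* Only valid once the port is started (PMD specific, see above). */
	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}

static int
restart_port_and_restore_rules(uint16_t port_id, uint16_t queue)
{
	/* Stopping the port flushes the rules on mlx5; old handles are stale. */
	if (rte_eth_dev_stop(port_id) != 0)
		return -1;
	if (rte_eth_dev_start(port_id) != 0)
		return -1;
	/* Re-create every rule the application still needs. */
	return install_queue_rule(port_id, queue) != NULL ? 0 : -1;
}

On PMDs without this restriction the same rule could equally be created
before the port is started; the point of the patches is that an application
cannot rely on that portably.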
From patchwork Wed Jul 21 15:58:15 2021
X-Patchwork-Submitter: Havlík Martin
X-Patchwork-Id: 96176
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Martin Havlik
To: xhavli56@stud.fit.vutbr.cz, Chas Williams, "Min Hu (Connor)", Ciara Power,
    Ajit Khaparde, Rosen Xu, Bruce Richardson
Cc: dev@dpdk.org, Jan Viktorin
Date: Wed, 21 Jul 2021 17:58:15 +0200
Message-Id: <20210721155816.188795-4-xhavli56@stud.fit.vutbr.cz>
In-Reply-To: <20210721155816.188795-3-xhavli56@stud.fit.vutbr.cz>
References: <20210721155816.188795-3-xhavli56@stud.fit.vutbr.cz>
Subject: [dpdk-dev] [PATCH 3/4] doc: update bonding mode 8023ad info

Include information on dedicated queues and add a related note about the
issue on mlx5.

Signed-off-by: Martin Havlik
Acked-by: Min Hu (Connor)
---
 doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 30c56cd375..19c65f314c 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -116,10 +116,18 @@ Currently the Link Bonding PMD library supports following modes of operation:

    #. Calls to ``rte_eth_tx_burst`` must have a buffer size of at least 2xN,
       where N is the number of slaves. This is a space required for LACP
       frames. Additionally LACP packets are included in the statistics, but
       they are not returned to the application.

+      This mode also supports enabling dedicated rx and tx queues for handling
+      LACP frames separately from fast application path, resulting in
+      a potential performance improvement.
+
+.. note::
+   Currently mlx5 doesn't work with enabled dedicated queues due to
+   an issue with RTE flow rule creation prior to port start.
+
 * **Transmit Load Balancing (Mode 5):**
 .. figure:: img/bond-mode-5.*

    Transmit Load Balancing (Mode 5)

From patchwork Wed Jul 21 15:59:18 2021
X-Patchwork-Submitter: Havlík Martin
X-Patchwork-Id: 96177
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Martin Havlik
To: xhavli56@stud.fit.vutbr.cz, Xiaoyun Li, Ferruh Yigit, Andrew Rybchenko,
    Ajit Khaparde, Haiyue Wang, Ori Kam, Haifei Luo, Viacheslav Ovsiienko,
    Andrey Vesnovaty, Bing Zhao, Jiawei Wang, Gregory Etelson, Li Zhang
Cc: dev@dpdk.org, Jan Viktorin
Date: Wed, 21 Jul 2021 17:59:18 +0200
Message-Id: <20210721155918.188867-5-xhavli56@stud.fit.vutbr.cz>
Subject: [dpdk-dev] [PATCH 4/4] doc: note that testpmd on mlx5 has dedicated queues problem

In bonding mode 4 (8023ad), dedicated queues are not working on mlx5 NICs.

Signed-off-by: Martin Havlik
---
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2c43719ad3..8a6edc2bad 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -2603,6 +2603,9 @@ when in mode 4 (link-aggregation-802.3ad)::

    testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)

+.. note::
+   Dedicated queues `do not currently work
+   `__ on mlx5 NICs.
 set bonding agg_mode
 ~~~~~~~~~~~~~~~~~~~~
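As a rough illustration of the dedicated-queues setup the two patches above
refer to, the sketch below shows the order of calls an application would use
on the bonding port. It is an assumption-laden outline, not code from the
patch series: the helper name is invented, per-queue setup and member
configuration are elided, and the "+ 1" queue accounting mirrors what
testpmd does when dedicated queues are enabled.

#include <rte_ethdev.h>
#include <rte_eth_bond_8023ad.h>

static int
bond_mode4_with_dedicated_queues(uint16_t bond_port, uint16_t nb_rxq,
				 uint16_t nb_txq,
				 const struct rte_eth_conf *conf)
{
	int ret;

	/* Must be called while the bonding port is still stopped. */
	ret = rte_eth_bond_8023ad_dedicated_queues_enable(bond_port);
	if (ret != 0)
		return ret;

	/* One extra Rx and Tx queue carries the LACP control traffic. */
	ret = rte_eth_dev_configure(bond_port, nb_rxq + 1, nb_txq + 1, conf);
	if (ret != 0)
		return ret;

	/* ... rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup() for all queues ... */

	/*
	 * On mlx5 members, bring-up currently fails here: the LACP redirection
	 * flow rule is created before the member port is started, which is the
	 * issue the documentation note above describes.
	 */
	return rte_eth_dev_start(bond_port);
}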

From patchwork Fri Mar 24 03:26:15 2023
X-Patchwork-Submitter: Dong Zhou
X-Patchwork-Id: 125500
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Dong Zhou
To: Aman Singh, Yuying Zhang, Ferruh Yigit, Andrew Rybchenko, Olivier Matz
Cc: Dong Zhou
Subject: [RFC PATCH] ethdev: add flow item for RoCE infiniband BTH
Date: Fri, 24 Mar 2023 06:26:15 +0300
Message-ID: <20230324032615.4141031-1-dongzhou@nvidia.com>

IB (InfiniBand) is a networking technology used in high-performance
computing, offering high throughput and low latency. Like Ethernet, IB
defines a layered protocol (physical, link, network and transport layers).
IB provides native support for RDMA (Remote DMA), an extension of DMA that
allows direct access to remote host memory without CPU intervention. An IB
network requires NICs and switches that support the IB protocol.

RoCE (RDMA over Converged Ethernet) is a network protocol that allows RDMA
to run over Ethernet. RoCE encapsulates IB packets on Ethernet and has two
versions, RoCEv1 and RoCEv2. RoCEv1 is an Ethernet link layer protocol: IB
packets are encapsulated directly in Ethernet frames with EtherType 0x8915.
RoCEv2 is an internet layer protocol: IB packets are encapsulated in a UDP
payload with destination port 4791. The format of a RoCEv2 packet is as
follows:

  ETH + IP + UDP(dport 4791) + IB(BTH + ExtHDR + PAYLOAD + CRC)

BTH (Base Transport Header) is the IB transport layer header; both RoCEv1
and RoCEv2 contain it.

This patch introduces a new RTE flow item to match the IB BTH in RoCE
packets. One use of this match is monitoring RoCEv2 CNP (Congestion
Notification Packet) traffic by matching BTH opcode 0x81. The patch also
adds the testpmd command line to match the RoCEv2 BTH. Usage example:

  testpmd> flow create 0 group 1 ingress pattern
           eth / ipv4 / udp dst is 4791 / ib_bth opcode is 0x81
           dst_qp is 0xd3 / end actions queue index 0 / end

Signed-off-by: Dong Zhou
---
 app/test-pmd/cmdline_flow.c                 | 58 ++++++++++++++++++
 doc/guides/nics/features/default.ini        |  1 +
 doc/guides/prog_guide/rte_flow.rst          |  7 +++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  6 ++
 lib/ethdev/rte_flow.c                       |  1 +
 lib/ethdev/rte_flow.h                       | 27 ++++++++
 lib/net/rte_ib.h                            | 68 +++++++++++++++++++++
 7 files changed, 168 insertions(+)
 create mode 100644 lib/net/rte_ib.h

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 5fbc450849..3ff33b281d 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -531,6 +531,11 @@ enum index { ITEM_QUOTA_STATE_NAME, ITEM_PORT_AFFINITY, ITEM_PORT_AFFINITY_VALUE, + ITEM_IB_BTH, + ITEM_IB_BTH_OPCODE, + ITEM_IB_BTH_PKEY, + ITEM_IB_BTH_DST_QPN, + ITEM_IB_BTH_PSN, /* Validate/create actions. 
*/ ACTIONS, @@ -1517,6 +1522,7 @@ static const enum index next_item[] = { ITEM_QUOTA, ITEM_PORT_AFFINITY, ITEM_IPSEC_SYNDROME, + ITEM_IB_BTH, END_SET, ZERO, }; @@ -2025,6 +2031,15 @@ static const enum index item_ipsec_syndrome[] = { ZERO, }; +static const enum index item_ib_bth[] = { + ITEM_IB_BTH_OPCODE, + ITEM_IB_BTH_PKEY, + ITEM_IB_BTH_DST_QPN, + ITEM_IB_BTH_PSN, + ITEM_NEXT, + ZERO, +}; + static const enum index next_action[] = { ACTION_END, ACTION_VOID, @@ -5750,6 +5765,46 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct mlx5_flow_item_ipsec_syndrome, syndrome)), }, + [ITEM_IB_BTH] = { + .name = "ib_bth", + .help = "match ib bth fields", + .priv = PRIV_ITEM(IB_BTH, + sizeof(struct rte_flow_item_ib_bth)), + .next = NEXT(item_ib_bth), + .call = parse_vc, + }, + [ITEM_IB_BTH_OPCODE] = { + .name = "opcode", + .help = "match ib bth opcode", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.opcode)), + }, + [ITEM_IB_BTH_PKEY] = { + .name = "pkey", + .help = "partition key", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.pkey)), + }, + [ITEM_IB_BTH_DST_QPN] = { + .name = "dst_qp", + .help = "destination qp", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.dst_qp)), + }, + [ITEM_IB_BTH_PSN] = { + .name = "psn", + .help = "packet sequence number", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.psn)), + }, /* Validate/create actions. */ [ACTIONS] = { .name = "actions", @@ -12634,6 +12689,9 @@ flow_item_default_mask(const struct rte_flow_item *item) case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT: mask = &ipv6_routing_ext_default_mask; break; + case RTE_FLOW_ITEM_TYPE_IB_BTH: + mask = &rte_flow_item_ib_bth_mask; + break; default: break; } diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index 510cc6679d..54045a29a0 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -103,6 +103,7 @@ gtpc = gtpu = gtp_psc = higig2 = +ib_bth = icmp = icmp6 = icmp6_nd_na = diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 1d81334e96..363bbdd5d8 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -1574,6 +1574,13 @@ Matches ipv6 routing extension header. - ``type``: IPv6 routing extension header type. - ``segments_left``: How many IPv6 destination addresses carries on +Item: ``IB_BTH`` +^^^^^^^^^^^^^^^^ + +Matches an InfiniBand base transport header in RoCE packet. + +- ``hdr``: InfiniBand base transport header definition (``rte_ib.h``). + Actions ~~~~~~~ diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 0c3317ee06..de90958596 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3741,6 +3741,12 @@ This section lists supported pattern items and their attributes, if any. - ``send_to_kernel``: send packets to kernel. +- ``ib_bth``: match InfiniBand BTH(base transport header). + + - ``opcode {unsigned}``: Opcode. + - ``pkey {unsigned}``: Partition key. + - ``dst_qp {unsigned}``: Destination Queue Pair. + - ``psn {unsigned}``: Packet Sequence Number. 
Actions list ^^^^^^^^^^^^ diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 515edb0c01..61066aa615 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -179,6 +179,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = { MK_FLOW_ITEM(PORT_AFFINITY, sizeof(struct rte_flow_item_port_affinity)), MK_FLOW_ITEM_FN(IPV6_ROUTING_EXT, sizeof(struct rte_flow_item_ipv6_routing_ext), rte_flow_item_ipv6_routing_ext_conv), + MK_FLOW_ITEM(IB_BTH, sizeof(struct rte_flow_item_ib_bth)), }; /** Generate flow_action[] entry. */ diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 1590cd0498..d78bcf17cd 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -38,6 +38,7 @@ #include #include #include +#include #ifdef __cplusplus extern "C" { @@ -691,6 +692,13 @@ enum rte_flow_item_type { * See struct rte_flow_item_ipv6_routing_ext. */ RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT, + + /** + * Matches an InfiniBand base transport header in RoCE packet. + * + * See struct rte_flow_item_ib_bth. + */ + RTE_FLOW_ITEM_TYPE_IB_BTH, }; /** @@ -2284,6 +2292,25 @@ rte_flow_item_port_affinity_mask = { }; #endif +/** + * RTE_FLOW_ITEM_TYPE_IB_BTH. + * + * Matches an InfiniBand base transport header in RoCE packet. + */ +struct rte_flow_item_ib_bth { + struct rte_ib_bth hdr; /**< InfiniBand base transport header definition. */ +}; + +/** Default mask for RTE_FLOW_ITEM_TYPE_IB_BTH. */ +#ifndef __cplusplus +static const struct rte_flow_item_ib_bth rte_flow_item_ib_bth_mask = { + .hdr = { + .opcode = 0xff, + .dst_qp = "\xff\xff\xff", + }, +}; +#endif + /** * Action types. * diff --git a/lib/net/rte_ib.h b/lib/net/rte_ib.h new file mode 100644 index 0000000000..c1b2797815 --- /dev/null +++ b/lib/net/rte_ib.h @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 NVIDIA Corporation & Affiliates + */ + +#ifndef RTE_IB_H +#define RTE_IB_H + +/** + * @file + * + * InfiniBand headers definitions + * + * The infiniBand headers are used by RoCE (RDMA over Converged Ethernet). + */ + +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * InfiniBand Base Transport Header according to + * IB Specification Vol 1-Release-1.4. + */ +__extension__ +struct rte_ib_bth { + uint8_t opcode; /**< Opcode. */ +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t tver:4; /**< Transport Header Version. */ + uint8_t padcnt:2; /**< Pad Count. */ + uint8_t m:1; /**< MigReq. */ + uint8_t se:1; /**< Solicited Event. */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t se:1; /**< Solicited Event. */ + uint8_t m:1; /**< MigReq. */ + uint8_t padcnt:2; /**< Pad Count. */ + uint8_t tver:4; /**< Transport Header Version. */ +#endif + rte_be16_t pkey; /**< Partition key. */ +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t rsvd0:6; /**< Reserved. */ + uint8_t b:1; /**< BECN. */ + uint8_t f:1; /**< FECN. */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t f:1; /**< FECN. */ + uint8_t b:1; /**< BECN. */ + uint8_t rsvd0:6; /**< Reserved. */ +#endif + uint8_t dst_qp[3]; /**< Destination QP */ +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t rsvd1:7; /**< Reserved. */ + uint8_t a:1; /**< Acknowledge Request. */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t a:1; /**< Acknowledge Request. */ + uint8_t rsvd1:7; /**< Reserved. */ +#endif + uint8_t psn[3]; /**< Packet Sequence Number */ +} __rte_packed; + +/** RoCEv2 default port. 
*/
+#define RTE_ROCEV2_DEFAULT_PORT 4791
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_IB_H */
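To show how the new item would be used from C, mirroring the testpmd
example in the commit message, here is a hedged sketch that steers RoCEv2
CNP packets (BTH opcode 0x81) to queue 0. It assumes the patch above is
applied, since RTE_FLOW_ITEM_TYPE_IB_BTH, struct rte_flow_item_ib_bth and
RTE_ROCEV2_DEFAULT_PORT only exist with it; the helper name is invented.

#include <rte_byteorder.h>
#include <rte_flow.h>
#include <rte_ib.h>

static struct rte_flow *
create_cnp_rule(uint16_t port_id)
{
	struct rte_flow_attr attr = { .group = 1, .ingress = 1 };
	struct rte_flow_item_udp udp_spec = {
		.hdr.dst_port = RTE_BE16(RTE_ROCEV2_DEFAULT_PORT),
	};
	struct rte_flow_item_udp udp_mask = {
		.hdr.dst_port = RTE_BE16(0xffff),
	};
	/* Match only the BTH opcode field; 0x81 identifies a CNP. */
	struct rte_flow_item_ib_bth bth_spec = { .hdr.opcode = 0x81 };
	struct rte_flow_item_ib_bth bth_mask = { .hdr.opcode = 0xff };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
		  .spec = &udp_spec, .mask = &udp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_IB_BTH,
		  .spec = &bth_spec, .mask = &bth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}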

From patchwork Tue Mar 28 12:27:42 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 125564
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rongwei Liu
Cc: Aman Singh, Yuying Zhang, Olivier Matz
Subject: [PATCH v2] app/testpmd: set srv6 header without any TLV
Date: Tue, 28 Mar 2023 15:27:42 +0300
Message-ID: <20230328122742.738048-1-rongweil@nvidia.com>
In-Reply-To: <10910654.BaYr0rKQ5T@thomas>
References: <10910654.BaYr0rKQ5T@thomas>

When the type field of the IPv6 routing extension is 4, it means segment
routing header (SRH). In this case, set last_entry to segments_left minus 1
if the user doesn't specify the header length explicitly.

Signed-off-by: Rongwei Liu

v2: add macro definition for segment routing header.

Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c | 3 +++
 lib/net/rte_ip.h            | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 5fbc450849..09f417b76e 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -12817,6 +12817,9 @@ cmd_set_raw_parsed(const struct buffer *in)
 			size = sizeof(struct rte_ipv6_routing_ext) +
 				(ext->hdr.segments_left << 4);
 			ext->hdr.hdr_len = ext->hdr.segments_left << 1;
+			/* Srv6 without TLV. */
+			if (ext->hdr.type == RTE_IPV6_SRCRT_TYPE_4)
+				ext->hdr.last_entry = ext->hdr.segments_left - 1;
 		} else {
 			size = sizeof(struct rte_ipv6_routing_ext) +
 				(ext->hdr.hdr_len << 3);

diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index 337fad15d7..cfdbfb86ba 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -540,6 +540,9 @@ struct rte_ipv6_hdr {
 	uint8_t dst_addr[16]; /**< IP address of destination host(s). */
 } __rte_packed;

+/* IPv6 routing extension type definition. */
+#define RTE_IPV6_SRCRT_TYPE_4 4
+
 /**
  * IPv6 Routing Extension Header
  */
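The arithmetic in the hunk above is easier to see outside the testpmd
parser. The helper below is a hypothetical sketch: the function name is
invented, and it assumes struct rte_ipv6_routing_ext exposes the type,
segments_left, last_entry and hdr_len fields the patch touches. It fills
the fixed part of an SRv6 header carrying nb_segments SIDs and no TLVs.

#include <rte_ip.h>

static void
srv6_fill_no_tlv(struct rte_ipv6_routing_ext *srh, uint8_t nb_segments)
{
	srh->type = RTE_IPV6_SRCRT_TYPE_4;   /* segment routing header */
	srh->segments_left = nb_segments;
	srh->hdr_len = nb_segments << 1;     /* 16-byte SIDs in 8-byte units */
	srh->last_entry = nb_segments - 1;   /* no TLVs after the SID list */
}

Each SID is 16 bytes, hdr_len counts 8-byte units beyond the first 8 bytes,
and last_entry is the zero-based index of the last SID, which is exactly
what the patch derives when no explicit header length is given.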

From patchwork Tue Apr 18 17:21:44 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 126248
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Gregory Etelson
Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon,
    Andrew Rybchenko
Subject: [PATCH] ethdev: add indirect list flow action
Date: Tue, 18 Apr 2023 20:21:44 +0300
Message-ID: <20230418172144.24365-1-getelson@nvidia.com>

An indirect flow action provides a handle to a hardware flow action object.
The handle is used in flow rules to share hardware action object state.
The current INDIRECT flow handle can reference a single flow action type.
The new INDIRECT_LIST action extends this functionality: an INDIRECT_LIST
flow handle can reference one or many flow actions.

testpmd example:

  set raw_encap 0 \
      eth src is 11:00:00:00:00:11 dst is aa:00:00:00:00:aa / \
      ipv4 src is 1.1.1.1 dst is 2.2.2.2 ttl is 64 proto is 17 / \
      udp src is 0x1234 dst is 4789 / vxlan vni is 0xabcd / end_set

  set raw_encap 1 \
      eth src is 22:00:00:00:00:22 dst is bb:00:00:00:00:bb / \
      ipv6 src is 2001::1111 dst is 2001::2222 proto is 17 / \
      udp src is 0x1234 dst is 4789 / vxlan vni is 0xabcd / end_set

  set sample_actions 0 \
      raw_encap index 0 / represented_port ethdev_port_id 0 / end

  set sample_actions 1 \
      raw_encap index 1 / represented_port ethdev_port_id 0 / end

  flow indirect_action 0 create transfer list actions \
      sample ratio 1 index 0 / \
      sample ratio 1 index 1 / \
      jump group 0xcaca / end

  flow actions_template 0 create transfer actions_template_id 10 \
      template indirect_list 0 / end mask indirect_list / end

Signed-off-by: Gregory Etelson
---
 app/test-pmd/cmdline_flow.c            |  41 ++++++-
 app/test-pmd/config.c                  | 162 +++++++++++++++++++------
 app/test-pmd/testpmd.h                 |   7 +-
 doc/guides/nics/features/default.ini   |   1 +
 doc/guides/prog_guide/rte_flow.rst     |   6 +
 doc/guides/rel_notes/release_23_07.rst |   4 +
 lib/ethdev/rte_flow.c                  |  92 ++++++++++++++
 lib/ethdev/rte_flow.h                  | 149 +++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  27 ++++-
 lib/ethdev/version.map                 |   4 +
 10 files changed, 452 insertions(+), 41 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 58939ec321..956a39d167 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -145,6 +145,7 @@ enum index { /* Queue indirect action arguments */ QUEUE_INDIRECT_ACTION_CREATE, + QUEUE_INDIRECT_ACTION_LIST_CREATE, QUEUE_INDIRECT_ACTION_UPDATE, QUEUE_INDIRECT_ACTION_DESTROY, QUEUE_INDIRECT_ACTION_QUERY, @@ -157,6 +158,7 @@ enum index { QUEUE_INDIRECT_ACTION_TRANSFER, QUEUE_INDIRECT_ACTION_CREATE_POSTPONE, QUEUE_INDIRECT_ACTION_SPEC, + QUEUE_INDIRECT_ACTION_LIST, /* Queue indirect action update arguments */ QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE, @@ -242,6 +244,7 @@ enum index { /* Indirect action arguments */ INDIRECT_ACTION_CREATE, + INDIRECT_ACTION_LIST_CREATE, INDIRECT_ACTION_UPDATE, INDIRECT_ACTION_DESTROY, INDIRECT_ACTION_QUERY, @@ -253,6 +256,7 @@ enum index { INDIRECT_ACTION_EGRESS, INDIRECT_ACTION_TRANSFER, INDIRECT_ACTION_SPEC, + INDIRECT_ACTION_LIST, /* Indirect action destroy arguments */ INDIRECT_ACTION_DESTROY_ID, @@ -626,6 +630,7 @@ enum index { ACTION_SAMPLE_INDEX, ACTION_SAMPLE_INDEX_VALUE, ACTION_INDIRECT, + ACTION_INDIRECT_LIST, ACTION_SHARED_INDIRECT, INDIRECT_ACTION_PORT, 
INDIRECT_ACTION_ID2PTR, @@ -1266,6 +1271,7 @@ static const enum index next_qia_create_attr[] = { QUEUE_INDIRECT_ACTION_TRANSFER, QUEUE_INDIRECT_ACTION_CREATE_POSTPONE, QUEUE_INDIRECT_ACTION_SPEC, + QUEUE_INDIRECT_ACTION_LIST, ZERO, }; @@ -1294,6 +1300,7 @@ static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_EGRESS, INDIRECT_ACTION_TRANSFER, INDIRECT_ACTION_SPEC, + INDIRECT_ACTION_LIST, ZERO, }; @@ -2013,6 +2020,7 @@ static const enum index next_action[] = { ACTION_AGE_UPDATE, ACTION_SAMPLE, ACTION_INDIRECT, + ACTION_INDIRECT_LIST, ACTION_SHARED_INDIRECT, ACTION_MODIFY_FIELD, ACTION_CONNTRACK, @@ -2289,6 +2297,7 @@ static const enum index next_action_sample[] = { ACTION_RAW_ENCAP, ACTION_VXLAN_ENCAP, ACTION_NVGRE_ENCAP, + ACTION_REPRESENTED_PORT, ACTION_NEXT, ZERO, }; @@ -3426,6 +3435,12 @@ static const struct token token_list[] = { .help = "specify action to create indirect handle", .next = NEXT(next_action), }, + [QUEUE_INDIRECT_ACTION_LIST] = { + .name = "list", + .help = "specify actions for indirect handle list", + .next = NEXT(NEXT_ENTRY(ACTIONS, END)), + .call = parse_qia, + }, /* Top-level command. */ [PUSH] = { .name = "push", @@ -6775,6 +6790,14 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY_ARB(0, sizeof(uint32_t))), .call = parse_vc, }, + [ACTION_INDIRECT_LIST] = { + .name = "indirect_list", + .help = "apply indirect list action by id", + .priv = PRIV_ACTION(INDIRECT_LIST, 0), + .next = NEXT(next_ia), + .args = ARGS(ARGS_ENTRY_ARB(0, sizeof(uint32_t))), + .call = parse_vc, + }, [ACTION_SHARED_INDIRECT] = { .name = "shared_indirect", .help = "apply indirect action by id and port", @@ -6823,6 +6846,12 @@ static const struct token token_list[] = { .help = "specify action to create indirect handle", .next = NEXT(next_action), }, + [INDIRECT_ACTION_LIST] = { + .name = "list", + .help = "specify actions for indirect handle list", + .next = NEXT(NEXT_ENTRY(ACTIONS, END)), + .call = parse_ia, + }, [ACTION_POL_G] = { .name = "g_actions", .help = "submit a list of associated actions for green", @@ -7181,6 +7210,9 @@ parse_ia(struct context *ctx, const struct token *token, return len; case INDIRECT_ACTION_QU_MODE: return len; + case INDIRECT_ACTION_LIST: + out->command = INDIRECT_ACTION_LIST_CREATE; + return len; default: return -1; } @@ -7278,6 +7310,9 @@ parse_qia(struct context *ctx, const struct token *token, return len; case QUEUE_INDIRECT_ACTION_QU_MODE: return len; + case QUEUE_INDIRECT_ACTION_LIST: + out->command = QUEUE_INDIRECT_ACTION_LIST_CREATE; + return len; default: return -1; } @@ -7454,10 +7489,12 @@ parse_vc(struct context *ctx, const struct token *token, return -1; break; case ACTIONS: - out->args.vc.actions = + out->args.vc.actions = out->args.vc.pattern ? 
(void *)RTE_ALIGN_CEIL((uintptr_t) (out->args.vc.pattern + out->args.vc.pattern_n), + sizeof(double)) : + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), sizeof(double)); ctx->object = out->args.vc.actions; ctx->objmask = NULL; @@ -11532,6 +11569,7 @@ cmd_flow_parsed(const struct buffer *in) in->args.aged.destroy); break; case QUEUE_INDIRECT_ACTION_CREATE: + case QUEUE_INDIRECT_ACTION_LIST_CREATE: port_queue_action_handle_create( in->port, in->queue, in->postpone, in->args.vc.attr.group, @@ -11567,6 +11605,7 @@ cmd_flow_parsed(const struct buffer *in) in->args.vc.actions); break; case INDIRECT_ACTION_CREATE: + case INDIRECT_ACTION_LIST_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, &((const struct rte_flow_indir_action_conf) { diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 096c218c12..c220682ff9 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1764,6 +1764,44 @@ port_flow_configure(portid_t port_id, return 0; } +static int +action_handle_create(portid_t port_id, + struct port_indirect_action *pia, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + if (action->type == RTE_FLOW_ACTION_TYPE_AGE) { + struct rte_flow_action_age *age = + (struct rte_flow_action_age *)(uintptr_t)(action->conf); + + pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION; + age->context = &pia->age_type; + } else if (action->type == RTE_FLOW_ACTION_TYPE_CONNTRACK) { + struct rte_flow_action_conntrack *ct = + (struct rte_flow_action_conntrack *)(uintptr_t)(action->conf); + + memcpy(ct, &conntrack_context, sizeof(*ct)); + } + pia->type = action->type; + pia->handle = rte_flow_action_handle_create(port_id, conf, action, + error); + return pia->handle ? 0 : -1; +} + +static int +action_list_handle_create(portid_t port_id, + struct port_indirect_action *pia, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + struct rte_flow_error *error) +{ + pia->type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST; + pia->list_handle = + rte_flow_action_list_handle_create(port_id, conf, + actions, error); + return pia->list_handle ? 0 : -1; +} /** Create indirect action */ int port_action_handle_create(portid_t port_id, uint32_t id, @@ -1773,32 +1811,21 @@ port_action_handle_create(portid_t port_id, uint32_t id, struct port_indirect_action *pia; int ret; struct rte_flow_error error; + bool is_indirect_list = action[1].type != RTE_FLOW_ACTION_TYPE_END; ret = action_alloc(port_id, id, &pia); if (ret) return ret; - if (action->type == RTE_FLOW_ACTION_TYPE_AGE) { - struct rte_flow_action_age *age = - (struct rte_flow_action_age *)(uintptr_t)(action->conf); - - pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION; - age->context = &pia->age_type; - } else if (action->type == RTE_FLOW_ACTION_TYPE_CONNTRACK) { - struct rte_flow_action_conntrack *ct = - (struct rte_flow_action_conntrack *)(uintptr_t)(action->conf); - - memcpy(ct, &conntrack_context, sizeof(*ct)); - } /* Poisoning to make sure PMDs update it in case of error. */ memset(&error, 0x22, sizeof(error)); - pia->handle = rte_flow_action_handle_create(port_id, conf, action, - &error); - if (!pia->handle) { + ret = is_indirect_list ? 
+ action_list_handle_create(port_id, pia, conf, action, &error) : + action_handle_create(port_id, pia, conf, action, &error); + if (ret) { uint32_t destroy_id = pia->id; port_action_handle_destroy(port_id, 1, &destroy_id); return port_flow_complain(&error); } - pia->type = action->type; printf("Indirect action #%u created\n", pia->id); return 0; } @@ -1833,10 +1860,17 @@ port_action_handle_destroy(portid_t port_id, */ memset(&error, 0x33, sizeof(error)); - if (pia->handle && rte_flow_action_handle_destroy( - port_id, pia->handle, &error)) { - ret = port_flow_complain(&error); - continue; + if (pia->handle) { + ret = pia->type == + RTE_FLOW_ACTION_TYPE_INDIRECT_LIST ? + rte_flow_action_list_handle_destroy + (port_id, pia->list_handle, &error) : + rte_flow_action_handle_destroy + (port_id, pia->handle, &error); + if (ret) { + ret = port_flow_complain(&error); + continue; + } } *tmp = pia->next; printf("Indirect action #%u destroyed\n", pia->id); @@ -1867,11 +1901,18 @@ port_action_handle_flush(portid_t port_id) /* Poisoning to make sure PMDs update it in case of error. */ memset(&error, 0x44, sizeof(error)); - if (pia->handle != NULL && - rte_flow_action_handle_destroy - (port_id, pia->handle, &error) != 0) { - printf("Indirect action #%u not destroyed\n", pia->id); - ret = port_flow_complain(&error); + if (pia->handle != NULL) { + ret = pia->type == + RTE_FLOW_ACTION_TYPE_INDIRECT_LIST ? + rte_flow_action_list_handle_destroy + (port_id, pia->list_handle, &error) : + rte_flow_action_handle_destroy + (port_id, pia->handle, &error); + if (ret) { + printf("Indirect action #%u not destroyed\n", + pia->id); + ret = port_flow_complain(&error); + } tmp = &pia->next; } else { *tmp = pia->next; @@ -2822,6 +2863,45 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, return ret; } +static void +queue_action_handle_create(portid_t port_id, uint32_t queue_id, + struct port_indirect_action *pia, + struct queue_job *job, + const struct rte_flow_op_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + if (action->type == RTE_FLOW_ACTION_TYPE_AGE) { + struct rte_flow_action_age *age = + (struct rte_flow_action_age *)(uintptr_t)(action->conf); + + pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION; + age->context = &pia->age_type; + } + /* Poisoning to make sure PMDs update it in case of error. */ + pia->handle = rte_flow_async_action_handle_create(port_id, queue_id, + attr, conf, action, + job, error); + pia->type = action->type; +} + +static void +queue_action_list_handle_create(portid_t port_id, uint32_t queue_id, + struct port_indirect_action *pia, + struct queue_job *job, + const struct rte_flow_op_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + /* Poisoning to make sure PMDs update it in case of error. */ + pia->type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST; + pia->list_handle = rte_flow_async_action_list_handle_create + (port_id, queue_id, attr, conf, action, + job, error); +} + /** Enqueue indirect action create operation. 
*/ int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, @@ -2835,6 +2915,8 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, int ret; struct rte_flow_error error; struct queue_job *job; + bool is_indirect_list = action[1].type != RTE_FLOW_ACTION_TYPE_END; + ret = action_alloc(port_id, id, &pia); if (ret) @@ -2853,17 +2935,16 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, job->type = QUEUE_JOB_TYPE_ACTION_CREATE; job->pia = pia; - if (action->type == RTE_FLOW_ACTION_TYPE_AGE) { - struct rte_flow_action_age *age = - (struct rte_flow_action_age *)(uintptr_t)(action->conf); - - pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION; - age->context = &pia->age_type; - } /* Poisoning to make sure PMDs update it in case of error. */ memset(&error, 0x88, sizeof(error)); - pia->handle = rte_flow_async_action_handle_create(port_id, queue_id, - &attr, conf, action, job, &error); + + if (is_indirect_list) + queue_action_list_handle_create(port_id, queue_id, pia, job, + &attr, conf, action, &error); + else + queue_action_handle_create(port_id, queue_id, pia, job, &attr, + conf, action, &error); + if (!pia->handle) { uint32_t destroy_id = pia->id; port_queue_action_handle_destroy(port_id, queue_id, @@ -2871,7 +2952,6 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, free(job); return port_flow_complain(&error); } - pia->type = action->type; printf("Indirect action #%u creation queued\n", pia->id); return 0; } @@ -2920,9 +3000,15 @@ port_queue_action_handle_destroy(portid_t port_id, } job->type = QUEUE_JOB_TYPE_ACTION_DESTROY; job->pia = pia; - - if (rte_flow_async_action_handle_destroy(port_id, - queue_id, &attr, pia->handle, job, &error)) { + ret = pia->type == RTE_FLOW_ACTION_TYPE_INDIRECT_LIST ? + rte_flow_async_action_list_handle_destroy + (port_id, queue_id, + &attr, pia->list_handle, + job, &error) : + rte_flow_async_action_handle_destroy + (port_id, queue_id, &attr, pia->handle, + job, &error); + if (ret) { free(job); ret = port_flow_complain(&error); continue; diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index bdfbfd36d3..9786e62d28 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -228,7 +228,12 @@ struct port_indirect_action { struct port_indirect_action *next; /**< Next flow in list. */ uint32_t id; /**< Indirect action ID. */ enum rte_flow_action_type type; /**< Action type. */ - struct rte_flow_action_handle *handle; /**< Indirect action handle. */ + union { + struct rte_flow_action_handle *handle; + /**< Indirect action handle. */ + struct rte_flow_action_list_handle *list_handle; + /**< Indirect action list handle*/ + }; enum age_action_context_type age_type; /**< Age action context type. */ }; diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index 1a5087abad..10a1c1af77 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -158,6 +158,7 @@ drop = flag = inc_tcp_ack = inc_tcp_seq = +indirect_list = jump = mac_swap = mark = diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..ed67e86c58 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -3300,6 +3300,12 @@ The ``quota`` value is reduced according to ``mode`` setting. 
| ``RTE_FLOW_QUOTA_MODE_L3`` | Count packet bytes starting from L3 | +------------------+----------------------------------------------------+ +Action: ``INDIRECT_LIST`` +^^^^^^^^^^^^^^^^^^^^^^^^^ + +The new ``INDIRECT_LIST`` flow action references one or many flow actions. +Extends the ``INDIRECT`` flow action. + Negative types ~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst index a9b1293689..955493e445 100644 --- a/doc/guides/rel_notes/release_23_07.rst +++ b/doc/guides/rel_notes/release_23_07.rst @@ -55,6 +55,10 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= + * **Added indirect list flow action.** + + * ``RTE_FLOW_ACTION_TYPE_INDIRECT_LIST`` + Removed Items ------------- diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 69e6e749f7..73b31fc69f 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -259,6 +259,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = { MK_FLOW_ACTION(METER_MARK, sizeof(struct rte_flow_action_meter_mark)), MK_FLOW_ACTION(SEND_TO_KERNEL, 0), MK_FLOW_ACTION(QUOTA, sizeof(struct rte_flow_action_quota)), + MK_FLOW_ACTION(INDIRECT_LIST, 0), }; int @@ -2171,3 +2172,94 @@ rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id, user_data, error); return flow_err(port_id, ret, error); } + +struct rte_flow_action_list_handle * +rte_flow_action_list_handle_create(uint16_t port_id, + const + struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev; + const struct rte_flow_ops *ops; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL); + ops = rte_flow_ops_get(port_id, error); + if (!ops || !ops->action_list_handle_create) { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "action_list handle not supported"); + return NULL; + } + dev = &rte_eth_devices[port_id]; + return ops->action_list_handle_create(dev, conf, actions, error); +} + +int +rte_flow_action_list_handle_destroy(uint16_t port_id, + struct rte_flow_action_list_handle *handle, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev; + const struct rte_flow_ops *ops; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + ops = rte_flow_ops_get(port_id, error); + if (!ops || !ops->action_list_handle_destroy) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "action_list handle not supported"); + dev = &rte_eth_devices[port_id]; + return ops->action_list_handle_destroy(dev, handle, error); +} + +struct rte_flow_action_list_handle * +rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct + rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + void *user_data, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev; + const struct rte_flow_ops *ops; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL); + ops = rte_flow_ops_get(port_id, error); + if (!ops || !ops->async_action_list_handle_create) { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "action_list handle not supported"); + return NULL; + } + dev = &rte_eth_devices[port_id]; + return ops->async_action_list_handle_create(dev, queue_id, attr, conf, + actions, user_data, error); +} + +int +rte_flow_async_action_list_handle_destroy(uint16_t port_id, uint32_t queue_id, + 
const + struct rte_flow_op_attr *op_attr, + struct + rte_flow_action_list_handle *handle, + void *user_data, + struct rte_flow_error *error) +{ + int ret; + struct rte_eth_dev *dev; + const struct rte_flow_ops *ops; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + ops = rte_flow_ops_get(port_id, error); + if (!ops || !ops->async_action_list_handle_destroy) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "async action_list handle not supported"); + dev = &rte_eth_devices[port_id]; + ret = ops->async_action_list_handle_destroy(dev, queue_id, op_attr, + handle, user_data, error); + return flow_err(port_id, ret, error); +} + diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..deb5dc2f9d 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2912,6 +2912,11 @@ enum rte_flow_action_type { * applied to the given ethdev Rx queue. */ RTE_FLOW_ACTION_TYPE_SKIP_CMAN, + + /** + * RTE_FLOW_ACTION_TYPE_INDIRECT_LIST + */ + RTE_FLOW_ACTION_TYPE_INDIRECT_LIST, }; /** @@ -6118,6 +6123,150 @@ rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id, void *user_data, struct rte_flow_error *error); +struct rte_flow_action_list_handle; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create an indirect flow action object from flow actions list. + * The object is identified by a unique handle. + * The handle has single state and configuration + * across all the flow rules using it. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] conf + * Action configuration for the indirect action list creation. + * @param[in] actions + * Specific configuration of the indirect action lists. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * @return + * A valid handle in case of success, NULL otherwise and rte_errno is set + * to one of the error codes defined: + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-EINVAL) if *actions* list invalid. + * - (-ENOTSUP) if *action* list element valid but unsupported. + * - (-E2BIG) to many elements in *actions* + */ +__rte_experimental +struct rte_flow_action_list_handle * +rte_flow_action_list_handle_create(uint16_t port_id, + const + struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Async function call to create an indirect flow action object + * from flow actions list. + * The object is identified by a unique handle. + * The handle has single state and configuration + * across all the flow rules using it. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] queue_id + * Flow queue which is used to update the rule. + * @param[in] attr + * Indirect action update operation attributes. + * @param[in] conf + * Action configuration for the indirect action list creation. + * @param[in] actions + * Specific configuration of the indirect action list. + * @param[in] user_data + * The user data that will be returned on async completion event. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. 
+ * @return + * A valid handle in case of success, NULL otherwise and rte_errno is set + * to one of the error codes defined: + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-EINVAL) if *actions* list invalid. + * - (-ENOTSUP) if *action* list element valid but unsupported. + * - (-E2BIG) to many elements in *actions* + */ +__rte_experimental +struct rte_flow_action_list_handle * +rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct + rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + void *user_data, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy indirect actions list by handle. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] handle + * Handle for the indirect actions list to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if actions list pointed by *action* handle was not found. + * - (-EBUSY) if actions list pointed by *action* handle still used + * rte_errno is also set. + */ +__rte_experimental +int +rte_flow_action_list_handle_destroy(uint16_t port_id, + struct rte_flow_action_list_handle *handle, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action list destruction operation. + * The destroy queue must be the same + * as the queue on which the action was created. + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to destroy the rule. + * @param[in] op_attr + * Indirect action destruction operation attributes. + * @param[in] handle + * Handle for the indirect action object to be destroyed. + * @param[in] user_data + * The user data that will be returned on the completion events. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +__rte_experimental +int +rte_flow_async_action_list_handle_destroy + (uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_list_handle *handle, + void *user_data, struct rte_flow_error *error); + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index a129a4605d..71d9b4b0a7 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -121,6 +121,17 @@ struct rte_flow_ops { const void *update, void *query, enum rte_flow_query_update_mode qu_mode, struct rte_flow_error *error); + /** @see rte_flow_action_list_handle_create() */ + struct rte_flow_action_list_handle *(*action_list_handle_create) + (struct rte_eth_dev *dev, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action actions[], + struct rte_flow_error *error); + /** @see rte_flow_action_list_handle_destroy() */ + int (*action_list_handle_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_action_list_handle *handle, + struct rte_flow_error *error); /** See rte_flow_tunnel_decap_set() */ int (*tunnel_decap_set) (struct rte_eth_dev *dev, @@ -294,7 +305,7 @@ struct rte_flow_ops { void *data, void *user_data, struct rte_flow_error *error); - /** See rte_flow_async_action_handle_query_update */ + /** @see rte_flow_async_action_handle_query_update */ int (*async_action_handle_query_update) (struct rte_eth_dev *dev, uint32_t queue_id, const struct rte_flow_op_attr *op_attr, @@ -302,6 +313,20 @@ struct rte_flow_ops { const void *update, void *query, enum rte_flow_query_update_mode qu_mode, void *user_data, struct rte_flow_error *error); + /** @see rte_flow_async_action_list_handle_create() */ + struct rte_flow_action_list_handle * + (*async_action_list_handle_create) + (struct rte_eth_dev *dev, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + void *user_data, struct rte_flow_error *error); + /** @see rte_flow_async_action_list_handle_destroy() */ + int (*async_action_list_handle_destroy) + (struct rte_eth_dev *dev, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_list_handle *action_handle, + void *user_data, struct rte_flow_error *error); }; /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 357d1a88c0..d6c0b927f1 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -299,6 +299,10 @@ EXPERIMENTAL { rte_flow_action_handle_query_update; rte_flow_async_action_handle_query_update; rte_flow_async_create_by_index; + rte_flow_action_list_handle_create; + rte_flow_action_list_handle_destroy; + rte_flow_async_action_list_handle_create; + rte_flow_async_action_list_handle_destroy; }; INTERNAL { From patchwork Tue Apr 18 19:58:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 126250 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2639142980; Tue, 18 Apr 2023 21:58:43 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B4F9F4021F; Tue, 18 Apr 2023 21:58:42 +0200 (CEST) Received: from NAM04-MW2-obe.outbound.protection.outlook.com (mail-mw2nam04on2057.outbound.protection.outlook.com 
[40.107.101.57]) by mails.dpdk.org (Postfix) with ESMTP id A2A304014F
 for ; Tue, 18 Apr 2023 21:58:41 +0200 (CEST)
From: Alexander Kozyrev 
To: 
CC: , , 
Subject: [PATCH] ethdev: add flow rule actions update API
Date: Tue, 18 Apr 2023 22:58:07 +0300
Message-Id: <20230418195807.352514-1-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions 
Errors-To: dev-bounces@dpdk.org

Introduce the new rte_flow_update() API allowing users to update
the action list in the already existing rule. Flow rules can be
updated now without the need to destroy the rule first and create
a new one instead.
A single API call ensures that no packets are lost by guaranteeing atomicity and flow state correctness. The rte_flow_async_update() is added as well. The matcher is not updated, only the action list is. Signed-off-by: Alexander Kozyrev --- doc/guides/prog_guide/rte_flow.rst | 42 +++++++++++++++++ doc/guides/rel_notes/release_23_07.rst | 4 ++ lib/ethdev/rte_flow.c | 43 ++++++++++++++++++ lib/ethdev/rte_flow.h | 62 ++++++++++++++++++++++++++ lib/ethdev/rte_flow_driver.h | 16 +++++++ lib/ethdev/version.map | 2 + 6 files changed, 169 insertions(+) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..0930accfea 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -3446,6 +3446,31 @@ Return values: - 0 on success, a negative errno value otherwise and ``rte_errno`` is set. +Update +~~~~~~ + +Update an existing flow rule with a new set of actions. + +.. code-block:: c + + struct rte_flow * + rte_flow_update(uint16_t port_id, + struct rte_flow *flow, + const struct rte_flow_action *actions[], + struct rte_flow_error *error); + +Arguments: + +- ``port_id``: port identifier of Ethernet device. +- ``flow``: flow rule handle to update. +- ``actions``: associated actions (list terminated by the END action). +- ``error``: perform verbose error reporting if not NULL. PMDs initialize + this structure in case of error only. + +Return values: + +- 0 on success, a negative errno value otherwise and ``rte_errno`` is set. + Flush ~~~~~ @@ -3795,6 +3820,23 @@ Enqueueing a flow rule destruction operation is similar to simple destruction. void *user_data, struct rte_flow_error *error); +Enqueue update operation +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Enqueueing a flow rule update operation to replace actions in the existing rule. + +.. code-block:: c + + int + rte_flow_async_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + void *user_data, + struct rte_flow_error *error); + Enqueue indirect action creation operation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst index a9b1293689..94e9f8b3ae 100644 --- a/doc/guides/rel_notes/release_23_07.rst +++ b/doc/guides/rel_notes/release_23_07.rst @@ -55,6 +55,10 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= + * **Added flow rule update to the Flow API.** + + * Added API for updating the action list in the already existing rule. + Introduced both rte_flow_update() and rte_flow_async_update() functions. 
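A minimal usage sketch may help readers evaluate the proposal. It is not part
of the patch: the rule handle and the replacement action array below are
placeholders, and only the synchronous rte_flow_update() described above is
exercised.

    /* Sketch: replace the action list of an existing rule in place. */
    #include <rte_flow.h>

    static int
    swap_rule_actions(uint16_t port_id, struct rte_flow *flow)
    {
            struct rte_flow_error error;
            /* Placeholder replacement actions: mark matching packets with 42. */
            const struct rte_flow_action new_actions[] = {
                    {
                            .type = RTE_FLOW_ACTION_TYPE_MARK,
                            .conf = &(struct rte_flow_action_mark){ .id = 42 },
                    },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            /* The rule keeps matching traffic while its actions are replaced. */
            return rte_flow_update(port_id, flow, new_actions, &error);
    }

On success the original flow handle stays valid with the new actions attached;
on failure a negative errno value is returned and rte_errno is set, as
documented above.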
Removed Items ------------- diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 69e6e749f7..1ebb17ae3c 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -441,6 +441,29 @@ rte_flow_destroy(uint16_t port_id, NULL, rte_strerror(ENOSYS)); } +int +rte_flow_update(uint16_t port_id, + struct rte_flow *flow, + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->update)) { + fts_enter(dev); + ret = ops->update(dev, flow, actions, error); + fts_exit(dev); + return flow_err(port_id, ret, error); + } + return rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOSYS)); +} + /* Destroy all flow rules associated with a port. */ int rte_flow_flush(uint16_t port_id, @@ -1985,6 +2008,26 @@ rte_flow_async_destroy(uint16_t port_id, return ret; } +int +rte_flow_async_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + void *user_data, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + return flow_err(port_id, + ops->async_update(dev, queue_id, op_attr, flow, + actions, actions_template_index, + user_data, error), + error); +} + int rte_flow_push(uint16_t port_id, uint32_t queue_id, diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..79bfc07a1c 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4343,6 +4343,29 @@ rte_flow_destroy(uint16_t port_id, struct rte_flow *flow, struct rte_flow_error *error); +/** + * Update a flow rule with new actions on a given port. + * + * @param port_id + * Port identifier of Ethernet device. + * @param flow + * Flow rule handle to update. + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_update(uint16_t port_id, + struct rte_flow *flow, + const struct rte_flow_action actions[], + struct rte_flow_error *error); + /** * Destroy all flow rules associated with a port. * @@ -5770,6 +5793,45 @@ rte_flow_async_destroy(uint16_t port_id, void *user_data, struct rte_flow_error *error); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue rule update operation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue used to insert the rule. + * @param[in] op_attr + * Rule creation operation attributes. + * @param[in] flow + * Flow rule to be updated. + * @param[in] actions + * List of actions to be used. + * The list order should match the order in the actions template. + * @param[in] actions_template_index + * Actions template index in the table. + * @param[in] user_data + * The user data that will be returned on the completion events. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. 
+ * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_async_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + void *user_data, + struct rte_flow_error *error); + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice. diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index a129a4605d..193b09a7d3 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -302,6 +302,22 @@ struct rte_flow_ops { const void *update, void *query, enum rte_flow_query_update_mode qu_mode, void *user_data, struct rte_flow_error *error); + /** See rte_flow_update(). */ + int (*update) + (struct rte_eth_dev *dev, + struct rte_flow *flow, + const struct rte_flow_action actions[], + struct rte_flow_error *error); + /** See rte_flow_async_update() */ + int (*async_update) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + void *user_data, + struct rte_flow_error *error); }; /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 357d1a88c0..d4f49cb918 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -299,6 +299,8 @@ EXPERIMENTAL { rte_flow_action_handle_query_update; rte_flow_async_action_handle_query_update; rte_flow_async_create_by_index; + rte_flow_update; + rte_flow_async_update; }; INTERNAL { From patchwork Thu Apr 20 09:21:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 126305 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3121242995; Thu, 20 Apr 2023 11:22:15 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C52CD42BD9; Thu, 20 Apr 2023 11:22:10 +0200 (CEST) Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2062.outbound.protection.outlook.com [40.107.220.62]) by mails.dpdk.org (Postfix) with ESMTP id 89BCD40687 for ; Thu, 20 Apr 2023 11:22:08 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=i85sZ7bn/87W9Uoy736unz4oB05E2ZG5VhosORaef5RyQyHTEjZTwrNSiV/AtxqLxU82gc/yXe9aSCrB0V027NOGtM+qaqL1uF/8/LmevlDT/nmPOhe00J9UKSNiBeES2r8FYUayB3BjbkOuDbIImrJ4VpNoNww5+Qd9CpLQBrted9GKSiu5Gmiyu9mrVdSXHYrZjhj7fAURnOfrI1Y7/nZkHTYRnMA2ZnCSiOaGJA8h3SVtjWN0AivPs3mPZL+bMGxEaSlvdWFHRGPDEXeB+apLXkvPv77ObFJV6vPlg0U90ZvQZEddh35VAoZIphTPPt3uF1urtkIGowHkJ6vXOg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=YbZiPxFXUVD+eJKzOSjw4IZkOfwkXHjwSlFb2pfGwnA=; 
From: Michael Baum 
To: 
CC: Ori Kam , Aman Singh , "Yuying Zhang" ,
 Ferruh Yigit , "Thomas Monjalon" 
Subject: [RFC 1/2] ethdev: add GENEVE TLV option modification support
Date: Thu, 20 Apr 2023 12:21:44 +0300
Message-ID: <20230420092145.522389-2-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420092145.522389-1-michaelba@nvidia.com>
References: <20230420092145.522389-1-michaelba@nvidia.com>
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions 
Errors-To: dev-bounces@dpdk.org

Add modify field support for GENEVE option fields:
- "RTE_FLOW_FIELD_GENEVE_OPT_TYPE"
- "RTE_FLOW_FIELD_GENEVE_OPT_CLASS"
- "RTE_FLOW_FIELD_GENEVE_OPT_DATA"

Each GENEVE TLV option is identified by both its "class" and "type", so two
new fields were added to the "rte_flow_action_modify_data" structure to
specify which option to modify. To make room for these two new fields, the
"level" field was narrowed to "uint8_t", which is more than enough for an
encapsulation level.
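To make the new selectors concrete, a sketch of an action configuration using
them follows (assuming this RFC is applied as posted; the class, type and data
values are arbitrary examples, not taken from the patch):

    /* Sketch: set the first 32 data bits of the GENEVE option identified
     * by class 0x0107 and type 1 from an immediate value. */
    const struct rte_flow_action_modify_field set_opt_data = {
            .operation = RTE_FLOW_MODIFY_SET,
            .dst = {
                    .field = RTE_FLOW_FIELD_GENEVE_OPT_DATA,
                    .level = 0,                    /* default encapsulation level */
                    .type = 1,                     /* GENEVE option type */
                    .class_id = RTE_BE16(0x0107),  /* GENEVE option class */
                    .offset = 0,
            },
            .src = {
                    .field = RTE_FLOW_FIELD_VALUE,
                    .value = { 0xde, 0xad, 0xbe, 0xef },
            },
            .width = 32,
    };

The same dst.type/dst.class_id pair identifies the option when modifying
RTE_FLOW_FIELD_GENEVE_OPT_TYPE or RTE_FLOW_FIELD_GENEVE_OPT_CLASS as well.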
Signed-off-by: Michael Baum --- app/test-pmd/cmdline_flow.c | 47 ++++++++++++++++++++++++++- doc/guides/prog_guide/rte_flow.rst | 27 +++++++++++++--- lib/ethdev/rte_flow.h | 51 +++++++++++++++++++++++++++++- 3 files changed, 118 insertions(+), 7 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 58939ec321..db8bd30cb1 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -636,11 +636,15 @@ enum index { ACTION_MODIFY_FIELD_DST_TYPE_VALUE, ACTION_MODIFY_FIELD_DST_LEVEL, ACTION_MODIFY_FIELD_DST_LEVEL_VALUE, + ACTION_MODIFY_FIELD_DST_TYPE_ID, + ACTION_MODIFY_FIELD_DST_CLASS_ID, ACTION_MODIFY_FIELD_DST_OFFSET, ACTION_MODIFY_FIELD_SRC_TYPE, ACTION_MODIFY_FIELD_SRC_TYPE_VALUE, ACTION_MODIFY_FIELD_SRC_LEVEL, ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE, + ACTION_MODIFY_FIELD_SRC_TYPE_ID, + ACTION_MODIFY_FIELD_SRC_CLASS_ID, ACTION_MODIFY_FIELD_SRC_OFFSET, ACTION_MODIFY_FIELD_SRC_VALUE, ACTION_MODIFY_FIELD_SRC_POINTER, @@ -854,7 +858,8 @@ static const char *const modify_field_ids[] = { "ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color", "ipv6_proto", "flex_item", - "hash_result", NULL + "hash_result", + "geneve_opt_type", "geneve_opt_class", "geneve_opt_data", NULL }; static const char *const meter_colors[] = { @@ -2295,6 +2300,8 @@ static const enum index next_action_sample[] = { static const enum index action_modify_field_dst[] = { ACTION_MODIFY_FIELD_DST_LEVEL, + ACTION_MODIFY_FIELD_DST_TYPE_ID, + ACTION_MODIFY_FIELD_DST_CLASS_ID, ACTION_MODIFY_FIELD_DST_OFFSET, ACTION_MODIFY_FIELD_SRC_TYPE, ZERO, @@ -2302,6 +2309,8 @@ static const enum index action_modify_field_dst[] = { static const enum index action_modify_field_src[] = { ACTION_MODIFY_FIELD_SRC_LEVEL, + ACTION_MODIFY_FIELD_SRC_TYPE_ID, + ACTION_MODIFY_FIELD_SRC_CLASS_ID, ACTION_MODIFY_FIELD_SRC_OFFSET, ACTION_MODIFY_FIELD_SRC_VALUE, ACTION_MODIFY_FIELD_SRC_POINTER, @@ -6388,6 +6397,24 @@ static const struct token token_list[] = { .call = parse_vc_modify_field_level, .comp = comp_none, }, + [ACTION_MODIFY_FIELD_DST_TYPE_ID] = { + .name = "dst_type_id", + .help = "destination field type ID", + .next = NEXT(action_modify_field_dst, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field, + dst.type)), + .call = parse_vc_conf, + }, + [ACTION_MODIFY_FIELD_DST_CLASS_ID] = { + .name = "dst_class", + .help = "destination field class ID", + .next = NEXT(action_modify_field_dst, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field, + dst.class_id)), + .call = parse_vc_conf, + }, [ACTION_MODIFY_FIELD_DST_OFFSET] = { .name = "dst_offset", .help = "destination field bit offset", @@ -6423,6 +6450,24 @@ static const struct token token_list[] = { .call = parse_vc_modify_field_level, .comp = comp_none, }, + [ACTION_MODIFY_FIELD_SRC_TYPE_ID] = { + .name = "src_type_id", + .help = "source field type ID", + .next = NEXT(action_modify_field_src, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field, + src.type)), + .call = parse_vc_conf, + }, + [ACTION_MODIFY_FIELD_SRC_CLASS_ID] = { + .name = "src_class", + .help = "source field class ID", + .next = NEXT(action_modify_field_src, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field, + src.class_id)), + .call = parse_vc_conf, + }, [ACTION_MODIFY_FIELD_SRC_OFFSET] = { .name = "src_offset", .help = "source field bit offset", diff --git a/doc/guides/prog_guide/rte_flow.rst 
b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..dc86e040ec 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -2917,23 +2917,36 @@ The immediate value ``RTE_FLOW_FIELD_VALUE`` (or a pointer to it ``RTE_FLOW_FIELD_START`` is used to point to the beginning of a packet. See ``enum rte_flow_field_id`` for the list of supported fields. -``op`` selects the operation to perform on a destination field. +``op`` selects the operation to perform on a destination field: + - ``set`` copies the data from ``src`` field to ``dst`` field. - ``add`` adds together ``dst`` and ``src`` and stores the result into ``dst``. -- ``sub`` subtracts ``src`` from ``dst`` and stores the result into ``dst`` +- ``sub`` subtracts ``src`` from ``dst`` and stores the result into ``dst``. ``width`` defines a number of bits to use from ``src`` field. ``level`` is used to access any packet field on any encapsulation level -as well as any tag element in the tag array. +as well as any tag element in the tag array: + - ``0`` means the default behaviour. Depending on the packet type, it can -mean outermost, innermost or anything in between. + mean outermost, innermost or anything in between. + - ``1`` requests access to the outermost packet encapsulation level. + - ``2`` and subsequent values requests access to the specified packet -encapsulation level, from outermost to innermost (lower to higher values). + encapsulation level, from outermost to innermost (lower to higher values). + For the tag array (in case of multiple tags are supported and present) ``level`` translates directly into the array index. +``type`` is used to specify (along with ``class_id``) the Geneve option which +is being modified. +This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type. + +``class_id`` is used to specify (along with ``type``) the Geneve option which +is being modified. +This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type. + ``flex_handle`` is used to specify the flex item pointer which is being modified. ``flex_handle`` and ``level`` are mutually exclusive. @@ -2991,6 +3004,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}. +-----------------+----------------------------------------------------------+ | ``level`` | encapsulation level of a packet field or tag array index | +-----------------+----------------------------------------------------------+ + | ``type`` | geneve option type | + +-----------------+----------------------------------------------------------+ + | ``class_id`` | geneve option class ID | + +-----------------+----------------------------------------------------------+ | ``flex_handle`` | flex item handle of a packet field | +-----------------+----------------------------------------------------------+ | ``offset`` | number of bits to skip at the beginning | diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..b82eb0c0a8 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -3773,6 +3773,9 @@ enum rte_flow_field_id { RTE_FLOW_FIELD_IPV6_PROTO, /**< IPv6 next header. */ RTE_FLOW_FIELD_FLEX_ITEM, /**< Flex item. */ RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */ + RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */ + RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */ + RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */ }; /** @@ -3788,7 +3791,53 @@ struct rte_flow_action_modify_data { struct { /** Encapsulation level or tag index or flex item handle. 
*/ union { - uint32_t level; + struct { + /** + * Packet encapsulation level containing + * the field modify to. + * + * - @p 0 requests the default behavior. + * Depending on the packet type, it + * can mean outermost, innermost or + * anything in between. + * + * It basically stands for the + * innermost encapsulation level + * modification can be performed on + * according to PMD and device + * capabilities. + * + * - @p 1 requests modification to be + * performed on the outermost packet + * encapsulation level. + * + * - @p 2 and subsequent values request + * modification to be performed on + * the specified inner packet + * encapsulation level, from + * outermost to innermost (lower to + * higher values). + * + * Values other than @p 0 are not + * necessarily supported. + * + * For RTE_FLOW_FIELD_TAG it represents + * the tag element in the tag array. + */ + uint8_t level; + /** + * Geneve option type. relevant only + * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX + * modification type. + */ + uint8_t type; + /** + * Geneve option class. relevant only + * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX + * modification type. + */ + rte_be16_t class_id; + }; struct rte_flow_item_flex_handle *flex_handle; }; /** Number of bits to skip from a field. */ From patchwork Thu Apr 20 09:21:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 126306 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A68D342995; Thu, 20 Apr 2023 11:22:22 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0B08B42D0C; Thu, 20 Apr 2023 11:22:12 +0200 (CEST) Received: from NAM11-DM6-obe.outbound.protection.outlook.com (mail-dm6nam11on2062.outbound.protection.outlook.com [40.107.223.62]) by mails.dpdk.org (Postfix) with ESMTP id 02DFD41156 for ; Thu, 20 Apr 2023 11:22:09 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=ZyGVPwGwnMelLVgFlsIxOm5I6YMPhsxGild0PZcjqI57q/mcj7YAKHM8OeRI+FQ1Cq/naeANYZ/soIX/ygn1wD+MQm4LB2PpIeaiQr+SHWUNMHCiA4iBFxTF+DZc39MeBZds9vaHJCTdxyPt0sOUOsJyJ9lwJn8JnRze95IlLG+kc8iTPJxICRmDrefS/hei5Swsr1303nGykkQez7nch2dxY4KLUezIxVPjYD65p9NHYpdXxg+Ig7ipx7Dtnwv+GWdbtB7GAh3kFeyOC14Ib9RLG5euWxOaOTyIaGLo2TS8jvFu/bMqN97LfV77VCJZphXzRPQtfjl+9RxO06r08A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=dGrHoXNX7hGyhPSGfNAiNoCH5VtK4QyhERdoAf9Ylrw=; b=M1AsBhYNzY51I3o/4l9DPV0Sncc51oLtkuf+QlFXO4FGEi/UZK5badi1arSZpXoTEOnCQNWuwaZV4p9VPLfo4uK6oWIlNi4BQ0HqexJLEefji/K7fbyqEl4p3dJ8E4EM0nVGe6V9S4a0t3hkU02v8R2SCty5UlhwFfTxPFgxXhPDMzoO8u2gSP/qsYzNRa6W6j4kXVI2W8+USAwR1rnF/vi8GWPorqFcgI3mrXhYzIZOfmecChRX/W0qQhw2oNR0sox2fGGblzJprbFte1srisyB5i6BTH5Q12rD85XmOx917hfPdupz+oiFRbfDKwmT/FK7cvmlgNwSHobaM6nuFw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.161) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; 
From: Michael Baum 
To: 
CC: Ori Kam , Aman Singh , "Yuying Zhang" ,
 Ferruh Yigit , "Thomas Monjalon" 
Subject: [RFC 2/2] ethdev: add MPLS header modification support
Date: Thu, 20 Apr 2023 12:21:45 +0300
Message-ID: <20230420092145.522389-3-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420092145.522389-1-michaelba@nvidia.com>
References: <20230420092145.522389-1-michaelba@nvidia.com>
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions 
Errors-To: dev-bounces@dpdk.org

Add support for modifying the MPLS header using the "RTE_FLOW_FIELD_MPLS" id.

Since an MPLS header may appear more than once in the outer, inner or tunnel
headers, a new field was added to the "rte_flow_action_modify_data" structure
in addition to the "level" field. The "sub_level" field is the index of the
header inside the encapsulation level; it is used to modify one of several
MPLS headers present at the same encapsulation level.

This addition also makes it possible to modify multiple VLAN headers, so the
description of "RTE_FLOW_FIELD_VLAN_XXXX" was updated.
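For illustration, a sketch of how the new "sub_level" index could be used
together with "level" follows (assuming both RFCs in this series are applied;
the label value is an arbitrary example):

    /* Sketch: overwrite the 20-bit label of the second MPLS header
     * (sub_level 1) found at the outermost encapsulation level. */
    const struct rte_flow_action_modify_field set_mpls_label = {
            .operation = RTE_FLOW_MODIFY_SET,
            .dst = {
                    .field = RTE_FLOW_FIELD_MPLS,
                    .level = 1,       /* outermost encapsulation level */
                    .sub_level = 1,   /* second MPLS header at that level */
                    .offset = 0,      /* label starts at bit 0 of the header */
            },
            .src = {
                    .field = RTE_FLOW_FIELD_VALUE,
                    .value = { 0x00, 0x12, 0x30 },  /* example label bits */
            },
            .width = 20,
    };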
Signed-off-by: Michael Baum --- app/test-pmd/cmdline_flow.c | 24 ++++++++++++++- doc/guides/prog_guide/rte_flow.rst | 6 ++++ lib/ethdev/rte_flow.h | 47 ++++++++++++++++++++---------- 3 files changed, 61 insertions(+), 16 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index db8bd30cb1..ffeedefc35 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -636,6 +636,7 @@ enum index { ACTION_MODIFY_FIELD_DST_TYPE_VALUE, ACTION_MODIFY_FIELD_DST_LEVEL, ACTION_MODIFY_FIELD_DST_LEVEL_VALUE, + ACTION_MODIFY_FIELD_DST_SUB_LEVEL, ACTION_MODIFY_FIELD_DST_TYPE_ID, ACTION_MODIFY_FIELD_DST_CLASS_ID, ACTION_MODIFY_FIELD_DST_OFFSET, @@ -643,6 +644,7 @@ enum index { ACTION_MODIFY_FIELD_SRC_TYPE_VALUE, ACTION_MODIFY_FIELD_SRC_LEVEL, ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE, + ACTION_MODIFY_FIELD_SRC_SUB_LEVEL, ACTION_MODIFY_FIELD_SRC_TYPE_ID, ACTION_MODIFY_FIELD_SRC_CLASS_ID, ACTION_MODIFY_FIELD_SRC_OFFSET, @@ -859,7 +861,7 @@ static const char *const modify_field_ids[] = { "ipv6_proto", "flex_item", "hash_result", - "geneve_opt_type", "geneve_opt_class", "geneve_opt_data", NULL + "geneve_opt_type", "geneve_opt_class", "geneve_opt_data", "mpls", NULL }; static const char *const meter_colors[] = { @@ -2300,6 +2302,7 @@ static const enum index next_action_sample[] = { static const enum index action_modify_field_dst[] = { ACTION_MODIFY_FIELD_DST_LEVEL, + ACTION_MODIFY_FIELD_DST_SUB_LEVEL, ACTION_MODIFY_FIELD_DST_TYPE_ID, ACTION_MODIFY_FIELD_DST_CLASS_ID, ACTION_MODIFY_FIELD_DST_OFFSET, @@ -2309,6 +2312,7 @@ static const enum index action_modify_field_dst[] = { static const enum index action_modify_field_src[] = { ACTION_MODIFY_FIELD_SRC_LEVEL, + ACTION_MODIFY_FIELD_SRC_SUB_LEVEL, ACTION_MODIFY_FIELD_SRC_TYPE_ID, ACTION_MODIFY_FIELD_SRC_CLASS_ID, ACTION_MODIFY_FIELD_SRC_OFFSET, @@ -6397,6 +6401,15 @@ static const struct token token_list[] = { .call = parse_vc_modify_field_level, .comp = comp_none, }, + [ACTION_MODIFY_FIELD_DST_SUB_LEVEL] = { + .name = "dst_sub_level", + .help = "destination field sub level", + .next = NEXT(action_modify_field_dst, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field, + dst.sub_level)), + .call = parse_vc_conf, + }, [ACTION_MODIFY_FIELD_DST_TYPE_ID] = { .name = "dst_type_id", .help = "destination field type ID", @@ -6450,6 +6463,15 @@ static const struct token token_list[] = { .call = parse_vc_modify_field_level, .comp = comp_none, }, + [ACTION_MODIFY_FIELD_SRC_SUB_LEVEL] = { + .name = "stc_sub_level", + .help = "source field sub level", + .next = NEXT(action_modify_field_src, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field, + src.sub_level)), + .call = parse_vc_conf, + }, [ACTION_MODIFY_FIELD_SRC_TYPE_ID] = { .name = "src_type_id", .help = "source field type ID", diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index dc86e040ec..b5d8ce26c5 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -2939,6 +2939,10 @@ as well as any tag element in the tag array: For the tag array (in case of multiple tags are supported and present) ``level`` translates directly into the array index. +- ``sub_level`` is the index of the header inside encapsulation level. + It is used for modify either ``VLAN`` or ``MPLS`` headers which multiple of + them might be supported in same encapsulation level. 
+ ``type`` is used to specify (along with ``class_id``) the Geneve option which is being modified. This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type. @@ -3004,6 +3008,8 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}. +-----------------+----------------------------------------------------------+ | ``level`` | encapsulation level of a packet field or tag array index | +-----------------+----------------------------------------------------------+ + | ``sub_level`` | header level inside encapsulation level | + +-----------------+----------------------------------------------------------+ | ``type`` | geneve option type | +-----------------+----------------------------------------------------------+ | ``class_id`` | geneve option class ID | diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index b82eb0c0a8..4b2e17e266 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -3740,8 +3740,8 @@ enum rte_flow_field_id { RTE_FLOW_FIELD_START = 0, /**< Start of a packet. */ RTE_FLOW_FIELD_MAC_DST, /**< Destination MAC Address. */ RTE_FLOW_FIELD_MAC_SRC, /**< Source MAC Address. */ - RTE_FLOW_FIELD_VLAN_TYPE, /**< 802.1Q Tag Identifier. */ - RTE_FLOW_FIELD_VLAN_ID, /**< 802.1Q VLAN Identifier. */ + RTE_FLOW_FIELD_VLAN_TYPE, /**< VLAN Tag Identifier. */ + RTE_FLOW_FIELD_VLAN_ID, /**< VLAN Identifier. */ RTE_FLOW_FIELD_MAC_TYPE, /**< EtherType. */ RTE_FLOW_FIELD_IPV4_DSCP, /**< IPv4 DSCP. */ RTE_FLOW_FIELD_IPV4_TTL, /**< IPv4 Time To Live. */ @@ -3775,7 +3775,8 @@ enum rte_flow_field_id { RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */ RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */ RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */ - RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */ + RTE_FLOW_FIELD_GENEVE_OPT_DATA, /**< GENEVE option data */ + RTE_FLOW_FIELD_MPLS /**< MPLS header. */ }; /** @@ -3821,22 +3822,38 @@ struct rte_flow_action_modify_data { * Values other than @p 0 are not * necessarily supported. * + * @note that for MPLS field, + * encapsulation level also include + * tunnel since MPLS may appear in + * outer, inner or tunnel. + * * For RTE_FLOW_FIELD_TAG it represents * the tag element in the tag array. */ uint8_t level; - /** - * Geneve option type. relevant only - * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX - * modification type. - */ - uint8_t type; - /** - * Geneve option class. relevant only - * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX - * modification type. - */ - rte_be16_t class_id; + union { + /** + * Header level inside + * encapsulation level. + */ + uint8_t sub_level; + /** + * Geneve option identifier. + * relevant only for + * RTE_FLOW_FIELD_GENEVE_OPT_XXXX + * modification type. + */ + struct { + /** + * Geneve option type. + */ + uint8_t type; + /** + * Geneve option class. 
+ */ + rte_be16_t class_id; + }; + }; }; struct rte_flow_item_flex_handle *flex_handle; };

From patchwork Fri May 5 10:31:02 2023
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 126701
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: David Marchand
To: dev@dpdk.org
Cc: thomas@monjalon.net, i.maximets@ovn.org, Aman Singh, Yuying Zhang, Matan Azrad, Viacheslav Ovsiienko, Andrew Rybchenko, Ferruh Yigit, Ori Kam
Subject: [RFC PATCH] ethdev: advertise flow restore in mbuf
Date: Fri, 5 May 2023 12:31:02 +0200
Message-Id: <20230505103102.2912297-1-david.marchand@redhat.com>

As reported by Ilya [1], unconditionally calling rte_flow_get_restore_info()
impacts application performance for drivers that do not provide this op.
It can also impact the processing of packets that require no call to
rte_flow_get_restore_info() at all.

Advertise in the mbuf (via a dynamic flag) whether the driver has more
metadata to provide via rte_flow_get_restore_info(). The application can then
call it only when required.

Link: http://inbox.dpdk.org/dev/5248c2ca-f2a6-3fb0-38b8-7f659bfa40de@ovn.org/

Signed-off-by: David Marchand
---
Note: I did not test this RFC patch yet, but I hope we can resume and maybe
conclude the discussion on the tunnel offloading API.
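To make the intent concrete, an application receive path could use the new
flag roughly as in the following sketch (burst size and per-packet handling
are illustrative; the pattern mirrors the testpmd change in this patch):

#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_mbuf.h>

/* Illustrative RX loop: call rte_flow_get_restore_info() only for mbufs
 * on which the driver set the advertised dynamic flag. */
static void
handle_rx_burst(uint16_t port_id, uint16_t queue_id)
{
	const uint64_t restore_flag = rte_flow_restore_info_dynflag();
	struct rte_mbuf *pkts[32];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
	for (i = 0; i < nb; i++) {
		struct rte_flow_restore_info info = { 0 };
		struct rte_flow_error error;

		if (restore_flag != 0 &&
		    (pkts[i]->ol_flags & restore_flag) != 0 &&
		    rte_flow_get_restore_info(port_id, pkts[i], &info, &error) == 0) {
			/* info.flags / info.tunnel now describe the restored state. */
		}
		rte_pktmbuf_free(pkts[i]);
	}
}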
--- app/test-pmd/util.c | 9 +++++---- drivers/net/mlx5/linux/mlx5_os.c | 14 ++++++++++---- drivers/net/mlx5/mlx5.h | 3 ++- drivers/net/mlx5/mlx5_flow.c | 26 ++++++++++++++++++++------ drivers/net/mlx5/mlx5_rx.c | 2 +- drivers/net/mlx5/mlx5_rx.h | 1 + drivers/net/mlx5/mlx5_trigger.c | 4 ++-- drivers/net/sfc/sfc_dp.c | 9 ++------- lib/ethdev/ethdev_driver.h | 8 ++++++++ lib/ethdev/rte_flow.c | 29 +++++++++++++++++++++++++++++ lib/ethdev/rte_flow.h | 19 ++++++++++++++++++- lib/ethdev/version.map | 4 ++++ 12 files changed, 102 insertions(+), 26 deletions(-) diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c index f9df5f69ef..5aa69ed545 100644 --- a/app/test-pmd/util.c +++ b/app/test-pmd/util.c @@ -88,18 +88,20 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[], char print_buf[MAX_STRING_LEN]; size_t buf_size = MAX_STRING_LEN; size_t cur_len = 0; + uint64_t restore_info_dynflag; if (!nb_pkts) return; + restore_info_dynflag = rte_flow_restore_info_dynflag(); MKDUMPSTR(print_buf, buf_size, cur_len, "port %u/queue %u: %s %u packets\n", port_id, queue, is_rx ? "received" : "sent", (unsigned int) nb_pkts); for (i = 0; i < nb_pkts; i++) { - int ret; struct rte_flow_error error; struct rte_flow_restore_info info = { 0, }; mb = pkts[i]; + ol_flags = mb->ol_flags; if (rxq_share > 0) MKDUMPSTR(print_buf, buf_size, cur_len, "port %u, ", mb->port); @@ -107,8 +109,8 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[], eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type); packet_type = mb->packet_type; is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type); - ret = rte_flow_get_restore_info(port_id, mb, &info, &error); - if (!ret) { + if ((ol_flags & restore_info_dynflag) != 0 && + rte_flow_get_restore_info(port_id, mb, &info, &error) == 0) { MKDUMPSTR(print_buf, buf_size, cur_len, "restore info:"); if (info.flags & RTE_FLOW_RESTORE_INFO_TUNNEL) { @@ -153,7 +155,6 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[], " - pool=%s - type=0x%04x - length=%u - nb_segs=%d", mb->pool->name, eth_type, (unsigned int) mb->pkt_len, (int)mb->nb_segs); - ol_flags = mb->ol_flags; if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) { MKDUMPSTR(print_buf, buf_size, cur_len, " - RSS hash=0x%x", diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 980234e2ac..e6e3784013 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -575,11 +575,17 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) goto error; } #endif - if (!sh->tunnel_hub && sh->config.dv_miss_info) + if (!sh->tunnel_hub && sh->config.dv_miss_info) { + err = mlx5_flow_restore_info_register(); + if (err) { + DRV_LOG(ERR, "Could not register mbuf dynflag for rte_flow_get_restore_info"); + goto error; + } err = mlx5_alloc_tunnel_hub(sh); - if (err) { - DRV_LOG(ERR, "mlx5_alloc_tunnel_hub failed err=%d", err); - goto error; + if (err) { + DRV_LOG(ERR, "mlx5_alloc_tunnel_hub failed err=%d", err); + goto error; + } } if (sh->config.reclaim_mode == MLX5_RCM_AGGR) { mlx5_glue->dr_reclaim_domain_memory(sh->rx_domain, 1); diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 9eae692037..77cdc802da 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -2116,7 +2116,8 @@ int mlx5_flow_query_counter(struct rte_eth_dev *dev, struct rte_flow *flow, int mlx5_flow_dev_dump_ipool(struct rte_eth_dev *dev, struct rte_flow *flow, FILE *file, struct rte_flow_error *error); #endif -void mlx5_flow_rxq_dynf_metadata_set(struct rte_eth_dev 
*dev); +int mlx5_flow_restore_info_register(void); +void mlx5_flow_rxq_dynf_set(struct rte_eth_dev *dev); int mlx5_flow_get_aged_flows(struct rte_eth_dev *dev, void **contexts, uint32_t nb_contexts, struct rte_flow_error *error); int mlx5_validate_action_ct(struct rte_eth_dev *dev, diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index d0275fdd00..715b7d327d 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1779,6 +1779,20 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev) priv->sh->shared_mark_enabled = 0; } +static uint64_t mlx5_restore_info_dynflag; + +int +mlx5_flow_restore_info_register(void) +{ + int err = 0; + + if (mlx5_restore_info_dynflag == 0) { + if (rte_flow_restore_info_dynflag_register(&mlx5_restore_info_dynflag) < 0) + err = ENOMEM; + } + return err; +} + /** * Set the Rx queue dynamic metadata (mask and offset) for a flow * @@ -1786,7 +1800,7 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev) * Pointer to the Ethernet device structure. */ void -mlx5_flow_rxq_dynf_metadata_set(struct rte_eth_dev *dev) +mlx5_flow_rxq_dynf_set(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; unsigned int i; @@ -1809,6 +1823,9 @@ mlx5_flow_rxq_dynf_metadata_set(struct rte_eth_dev *dev) data->flow_meta_offset = rte_flow_dynf_metadata_offs; data->flow_meta_port_mask = priv->sh->dv_meta_mask; } + data->mark_flag = RTE_MBUF_F_RX_FDIR_ID; + if (is_tunnel_offload_active(dev)) + data->mark_flag |= mlx5_restore_info_dynflag; } } @@ -11453,11 +11470,8 @@ mlx5_flow_tunnel_get_restore_info(struct rte_eth_dev *dev, const struct mlx5_flow_tbl_data_entry *tble; const uint64_t mask = RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID; - if (!is_tunnel_offload_active(dev)) { - info->flags = 0; - return 0; - } - + if ((ol_flags & mlx5_restore_info_dynflag) == 0) + goto err; if ((ol_flags & mask) != mask) goto err; tble = tunnel_mark_decode(dev, m->hash.fdir.hi); diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index a2be523e9e..71c4638251 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -857,7 +857,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, if (MLX5_FLOW_MARK_IS_VALID(mark)) { pkt->ol_flags |= RTE_MBUF_F_RX_FDIR; if (mark != RTE_BE32(MLX5_FLOW_MARK_DEFAULT)) { - pkt->ol_flags |= RTE_MBUF_F_RX_FDIR_ID; + pkt->ol_flags |= rxq->mark_flag; pkt->hash.fdir.hi = mlx5_flow_mark_get(mark); } } diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 52c35c83f8..3514edd84e 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -136,6 +136,7 @@ struct mlx5_rxq_data { struct mlx5_uar_data uar_data; /* CQ doorbell. */ uint32_t cqn; /* CQ number. */ uint8_t cq_arm_sn; /* CQ arm seq number. */ + uint64_t mark_flag; /* ol_flags to set with marks. */ uint32_t tunnel; /* Tunnel information. */ int timestamp_offset; /* Dynamic mbuf field for timestamp. */ uint64_t timestamp_rx_flag; /* Dynamic mbuf flag for timestamp. */ diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index bbaa7d2aa0..7bdb897612 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -1282,8 +1282,8 @@ mlx5_dev_start(struct rte_eth_dev *dev) dev->data->port_id); goto error; } - /* Set a mask and offset of dynamic metadata flows into Rx queues. */ - mlx5_flow_rxq_dynf_metadata_set(dev); + /* Set dynamic fields and flags into Rx queues. */ + mlx5_flow_rxq_dynf_set(dev); /* Set flags and context to convert Rx timestamps. 
*/ mlx5_rxq_timestamp_set(dev); /* Set a mask and offset of scheduling on timestamp into Tx queues. */ diff --git a/drivers/net/sfc/sfc_dp.c b/drivers/net/sfc/sfc_dp.c index 9f2093b353..8b2dbea325 100644 --- a/drivers/net/sfc/sfc_dp.c +++ b/drivers/net/sfc/sfc_dp.c @@ -11,6 +11,7 @@ #include #include +#include #include #include @@ -135,12 +136,8 @@ sfc_dp_ft_ctx_id_register(void) .size = sizeof(uint8_t), .align = __alignof__(uint8_t), }; - static const struct rte_mbuf_dynflag ft_ctx_id_valid = { - .name = "rte_net_sfc_dynflag_ft_ctx_id_valid", - }; int field_offset; - int flag; SFC_GENERIC_LOG(INFO, "%s() entry", __func__); @@ -156,15 +153,13 @@ sfc_dp_ft_ctx_id_register(void) return -1; } - flag = rte_mbuf_dynflag_register(&ft_ctx_id_valid); - if (flag < 0) { + if (rte_flow_restore_info_dynflag_register(&sfc_dp_ft_ctx_id_valid) < 0) { SFC_GENERIC_LOG(ERR, "%s() failed to register ft_ctx_id dynflag", __func__); return -1; } sfc_dp_ft_ctx_id_offset = field_offset; - sfc_dp_ft_ctx_id_valid = UINT64_C(1) << flag; SFC_GENERIC_LOG(INFO, "%s() done", __func__); diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index 2c9d615fb5..6c17d84d1b 100644 --- a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -1949,6 +1949,14 @@ __rte_internal int rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag); +/** + * @internal + * Register mbuf dynamic flag for rte_flow_get_restore_info. + */ +__rte_internal +int +rte_flow_restore_info_dynflag_register(uint64_t *flag); + /* * Legacy ethdev API used internally by drivers. diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 69e6e749f7..10cd9d12ba 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -1401,6 +1401,35 @@ rte_flow_get_restore_info(uint16_t port_id, NULL, rte_strerror(ENOTSUP)); } +static struct { + const struct rte_mbuf_dynflag desc; + uint64_t value; +} flow_restore_info_dynflag = { + .desc = { .name = "RTE_MBUF_F_RX_RESTORE_INFO", }, +}; + +uint64_t +rte_flow_restore_info_dynflag(void) +{ + return flow_restore_info_dynflag.value; +} + +int +rte_flow_restore_info_dynflag_register(uint64_t *flag) +{ + if (flow_restore_info_dynflag.value == 0) { + int offset = rte_mbuf_dynflag_register(&flow_restore_info_dynflag.desc); + + if (offset < 0) + return -1; + flow_restore_info_dynflag.value = RTE_BIT64(offset); + } + if (*flag) + *flag = rte_flow_restore_info_dynflag(); + + return 0; +} + int rte_flow_tunnel_action_decap_release(uint16_t port_id, struct rte_flow_action *actions, diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..5ce2db4bbd 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4918,7 +4918,24 @@ rte_flow_tunnel_match(uint16_t port_id, struct rte_flow_error *error); /** - * Populate the current packet processing state, if exists, for the given mbuf. + * On reception of a mbuf from HW, a call to rte_flow_get_restore_info() may be + * required to retrieve some metadata. + * This function returns the associated mbuf ol_flags. + * + * Note: the dynamic flag is registered during the probing of the first device + * that requires it. If this function returns a non 0 value, this value won't + * change for the rest of the life of the application. + * + * @return + * The offload flag indicating rte_flow_get_restore_info() must be called. + */ +__rte_experimental +uint64_t +rte_flow_restore_info_dynflag(void); + +/** + * If a mbuf contains the rte_flow_restore_info_dynflag() flag in ol_flags, + * populate the current packet processing state. 
* * One should negotiate tunnel metadata delivery from the NIC to the HW. * @see rte_eth_rx_metadata_negotiate() diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 357d1a88c0..bf5668e928 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -299,6 +299,9 @@ EXPERIMENTAL { rte_flow_action_handle_query_update; rte_flow_async_action_handle_query_update; rte_flow_async_create_by_index; + + # added in 23.07 + rte_flow_restore_info_dynflag; }; INTERNAL { @@ -328,4 +331,5 @@ INTERNAL { rte_eth_representor_id_get; rte_eth_switch_domain_alloc; rte_eth_switch_domain_free; + rte_flow_restore_info_dynflag_register; };

From patchwork Sun May 7 09:50:46 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 126750
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Gregory Etelson
Cc: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [RFC PATCH v3] ethdev: add indirect list flow action
Date: Sun, 7 May 2023 12:50:46 +0300
Message-ID: <20230507095046.5456-1-getelson@nvidia.com>
In-Reply-To: <20230418172144.24365-1-getelson@nvidia.com>
References: <20230418172144.24365-1-getelson@nvidia.com>
Indirect API creates a shared flow action with a unique action handle. Flow
rules can access the shared flow action and the resources related to that
action through the indirect action handle. In addition, the API allows
updating an existing shared flow action configuration. After the update
completes, the new action configuration is available to all flows that
reference that shared action.

Indirect actions list expands the indirect action API:

• Indirect action list creates a handle for one or several flow actions,
  while a legacy indirect action handle references a single action only.
  Input flow actions are arranged in an END-terminated list.
• A flow rule can provide rule-specific configuration parameters to an
  existing shared handle. Updates of flow-rule-specific configuration will
  not change the base action configuration, which was set during action
  creation.

Indirect action list handle defines 2 types of resources:

• Mutable handle resource can be changed during the handle lifespan.
• Immutable handle resource value is set during handle creation and cannot
  be changed.

There are 2 types of mutable indirect handle contexts:

• Action mutable context is always shared between all flows that reference
  the indirect actions list handle. It can be changed by explicit invocation
  of the indirect handle update function.
• Flow mutable context is private to a flow. It can be updated by the
  indirect list handle flow rule configuration.

[Original diagram summarized: flow 1 references indirect handle H with
configuration C1 and flow 2 references the same handle H with configuration
C2; each flow keeps its own flow mutable context, both share the handle's
action mutable context, which in turn sits on top of the action immutable
context.]

Indirect action types - immutable, action / flow mutable - are mutually
exclusive and depend on the action definition.
For example: • Indirect METER_MARK policy is immutable action member and profile is action mutable action member. • Indirect METER_MARK flow action defines init_color as flow mutable member. • Indirect QUOTA flow action does not define flow mutable members. Template API: Action template format: template .. indirect_list handle Htmpl conf Ctmpl .. mask .. indirect_list handle Hmask conf Cmask .. 1 If Htmpl was masked (Hmask != 0), PMD compiles base action configuration during action template, table template and flow rule phases - depending on PMD action implementation. Otherwise, action is compiled from scratch during flow rule processing. 2 If Htmpl and Ctmpl were masked (Hmask !=0 and Cmask != 0), table template processing overwrites base action configuration with Ctmpl parameters. Flow rule format: actions .. indirect_list handle Hflow conf Cflow .. 3 If Htmpl was masked, Hflow can reference a different action of the same type as Htmpl. 4 If Cflow was specified, it overwrites action configuration. Signed-off-by: Gregory Etelson --- v3: do not deprecate indirect flow action. --- lib/ethdev/rte_flow.h | 266 +++++++++++++++++++++++++++++++++++ lib/ethdev/rte_flow_driver.h | 41 ++++++ 2 files changed, 307 insertions(+) diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..ac1f51e564 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2912,6 +2912,13 @@ enum rte_flow_action_type { * applied to the given ethdev Rx queue. */ RTE_FLOW_ACTION_TYPE_SKIP_CMAN, + + /** + * Action handle to reference flow actions list. + * + * @see struct rte_flow_action_indirect_list + */ + RTE_FLOW_ACTION_TYPE_INDIRECT_LIST, }; /** @@ -6118,6 +6125,265 @@ rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id, void *user_data, struct rte_flow_error *error); +struct rte_flow_action_list_handle; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Configure INDIRECT_LIST flow action. + * + * @see RTE_FLOW_ACTION_TYPE_INDIRECT_LIST + */ +struct rte_flow_action_indirect_list { + struct rte_flow_action_list_handle; /**< Indirect action list handle */ + /** + * Flow mutable configuration array. + * NULL if the handle has no flow mutable configuration update. + * Otherwise, if the handle was created with list A1 / A2 .. An / END + * size of conf is n. + * conf[i] points to flow mutable update of Ai in the handle + * actions list or NULL if Ai has no update. + */ + const void **conf; +}; + + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create an indirect flow action object from flow actions list. + * The object is identified by a unique handle. + * The handle has single state and configuration + * across all the flow rules using it. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] conf + * Action configuration for the indirect action list creation. + * @param[in] actions + * Specific configuration of the indirect action lists. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * @return + * A valid handle in case of success, NULL otherwise and rte_errno is set + * to one of the error codes defined: + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-EINVAL) if *actions* list invalid. + * - (-ENOTSUP) if *action* list element valid but unsupported. 
+ * - (-E2BIG) to many elements in *actions* + */ +__rte_experimental +struct rte_flow_action_list_handle * +rte_flow_action_list_handle_create(uint16_t port_id, + const + struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Async function call to create an indirect flow action object + * from flow actions list. + * The object is identified by a unique handle. + * The handle has single state and configuration + * across all the flow rules using it. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] queue_id + * Flow queue which is used to update the rule. + * @param[in] attr + * Indirect action update operation attributes. + * @param[in] conf + * Action configuration for the indirect action list creation. + * @param[in] actions + * Specific configuration of the indirect action list. + * @param[in] user_data + * The user data that will be returned on async completion event. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * @return + * A valid handle in case of success, NULL otherwise and rte_errno is set + * to one of the error codes defined: + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-EINVAL) if *actions* list invalid. + * - (-ENOTSUP) if *action* list element valid but unsupported. + * - (-E2BIG) to many elements in *actions* + */ +__rte_experimental +struct rte_flow_action_list_handle * +rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct + rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + void *user_data, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy indirect actions list by handle. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] handle + * Handle for the indirect actions list to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if actions list pointed by *action* handle was not found. + * - (-EBUSY) if actions list pointed by *action* handle still used + * rte_errno is also set. + */ +__rte_experimental +int +rte_flow_action_list_handle_destroy(uint16_t port_id, + struct rte_flow_action_list_handle *handle, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action list destruction operation. + * The destroy queue must be the same + * as the queue on which the action was created. + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to destroy the rule. + * @param[in] op_attr + * Indirect action destruction operation attributes. + * @param[in] handle + * Handle for the indirect action object to be destroyed. + * @param[in] user_data + * The user data that will be returned on the completion events. 
+ * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_async_action_list_handle_destroy + (uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_list_handle *handle, + void *user_data, struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Query and/or update indirect flow actions list. + * If both query and update not NULL, the function atomically + * queries and updates indirect action. Query and update are carried in order + * specified in the mode parameter. + * If ether query or update is NULL, the function executes + * complementing operation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param handle + * Handle for the indirect actions list object to be updated. + * @param update + * If not NULL, update profile specification used to modify the action + * pointed by handle. + * @see struct rte_flow_action_indirect_list + * @param query + * If not NULL pointer to storage for the associated query data type. + * @see struct rte_flow_action_indirect_list + * @param mode + * Operational mode. + * @param error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOTSUP) if underlying device does not support this functionality. + * - (-EINVAL) if *handle* or *mode* invalid or + * both *query* and *update* are NULL. + */ +__rte_experimental +int +rte_flow_action_list_handle_query_update(uint16_t port_id, + const struct + rte_flow_action_list_handle *handle, + const void **update, void **query, + enum rte_flow_query_update_mode mode, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue async indirect flow actions list query and/or update + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue which is used to update the rule. + * @param attr + * Indirect action update operation attributes. + * @param handle + * Handle for the indirect actions list object to be updated. + * @param update + * If not NULL, update profile specification used to modify the action + * pointed by handle. + * @see struct rte_flow_action_indirect_list + * @param query + * If not NULL, pointer to storage for the associated query data type. + * Query result returned on async completion event. + * @see struct rte_flow_action_indirect_list + * @param mode + * Operational mode. + * @param user_data + * The user data that will be returned on async completion event. + * @param error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOTSUP) if underlying device does not support this functionality. + * - (-EINVAL) if *handle* or *mode* invalid or + * both *update* and *query* are NULL. 
+ */ +__rte_experimental +int +rte_flow_async_action_list_handle_query_update(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct + rte_flow_action_list_handle *handle, + const void *update, void *query, + enum rte_flow_query_update_mode mode, + void *user_data, + struct rte_flow_error *error); + + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index a129a4605d..8dc803023c 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -121,6 +121,17 @@ struct rte_flow_ops { const void *update, void *query, enum rte_flow_query_update_mode qu_mode, struct rte_flow_error *error); + /** @see rte_flow_action_list_handle_create() */ + struct rte_flow_action_list_handle *(*action_list_handle_create) + (struct rte_eth_dev *dev, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action actions[], + struct rte_flow_error *error); + /** @see rte_flow_action_list_handle_destroy() */ + int (*action_list_handle_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_action_list_handle *handle, + struct rte_flow_error *error); /** See rte_flow_tunnel_decap_set() */ int (*tunnel_decap_set) (struct rte_eth_dev *dev, @@ -302,6 +313,36 @@ struct rte_flow_ops { const void *update, void *query, enum rte_flow_query_update_mode qu_mode, void *user_data, struct rte_flow_error *error); + /** @see rte_flow_async_action_list_handle_create() */ + struct rte_flow_action_list_handle * + (*async_action_list_handle_create) + (struct rte_eth_dev *dev, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + void *user_data, struct rte_flow_error *error); + /** @see rte_flow_async_action_list_handle_destroy() */ + int (*async_action_list_handle_destroy) + (struct rte_eth_dev *dev, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_list_handle *action_handle, + void *user_data, struct rte_flow_error *error); + /** @see rte_flow_action_list_handle_query_update() */ + int (*action_list_handle_query_update) + (uint16_t port_id, + const struct rte_flow_action_list_handle *handle, + const void **update, void **query, + enum rte_flow_query_update_mode mode, + struct rte_flow_error *error); + /** @see rte_flow_async_action_list_handle_query_update() */ + int (*async_action_list_handle_query_update) + (uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct rte_flow_action_list_handle *handle, + const void **update, void **query, + enum rte_flow_query_update_mode mode, + void *user_data, struct rte_flow_error *error); + }; /** From patchwork Mon May 8 13:49:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kiran Kumar Kokkilagadda X-Patchwork-Id: 126768 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F0C9142A97; Mon, 8 May 2023 15:49:49 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B5C8A410ED; Mon, 8 May 2023 15:49:48 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id C444140685 for ; Mon, 8 May 2023 15:49:47 +0200 (CEST) 
To: Ori Kam, Aman Singh, Yuying Zhang, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: Kiran Kumar K
Subject: [PATCH v2] ethdev: add Tx queue flow matching item
Date: Mon, 8 May 2023 19:19:34 +0530
Message-ID: <20230508134935.971197-1-kirankumark@marvell.com>

From: Kiran Kumar K

Adding support for a Tx queue flow matching item. This item is valid only for
egress rules. An example use case: an application can set different VLAN
insert rules, with different PCP values, based on the Tx queue number.

Signed-off-by: Kiran Kumar K
Acked-by: Ori Kam
--- app/test-pmd/cmdline_flow.c | 28 +++++++++++++++++++++ doc/guides/prog_guide/rte_flow.rst | 7 ++++++ doc/guides/rel_notes/release_23_07.rst | 5 ++++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 +++ lib/ethdev/rte_flow.c | 1 + lib/ethdev/rte_flow.h | 26 +++++++++++++++ 6 files changed, 71 insertions(+) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 58939ec321..a68a6080a8 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -496,6 +496,8 @@ enum index { ITEM_QUOTA_STATE_NAME, ITEM_AGGR_AFFINITY, ITEM_AGGR_AFFINITY_VALUE, + ITEM_TX_QUEUE, + ITEM_TX_QUEUE_VALUE, /* Validate/create actions.
*/ ACTIONS, @@ -1452,6 +1454,7 @@ static const enum index next_item[] = { ITEM_METER, ITEM_QUOTA, ITEM_AGGR_AFFINITY, + ITEM_TX_QUEUE, END_SET, ZERO, }; @@ -1953,6 +1956,12 @@ static const enum index item_aggr_affinity[] = { ZERO, }; +static const enum index item_tx_queue[] = { + ITEM_TX_QUEUE_VALUE, + ITEM_NEXT, + ZERO, +}; + static const enum index next_action[] = { ACTION_END, ACTION_VOID, @@ -6945,6 +6954,22 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct rte_flow_item_aggr_affinity, affinity)), }, + [ITEM_TX_QUEUE] = { + .name = "tx_queue", + .help = "match on the tx queue of send packet", + .priv = PRIV_ITEM(TX_QUEUE, + sizeof(struct rte_flow_item_tx_queue)), + .next = NEXT(item_tx_queue), + .call = parse_vc, + }, + [ITEM_TX_QUEUE_VALUE] = { + .name = "tx_queue_value", + .help = "tx queue value", + .next = NEXT(item_tx_queue, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_tx_queue, + tx_queue)), + }, }; /** Remove and return last entry from argument stack. */ @@ -11849,6 +11874,9 @@ flow_item_default_mask(const struct rte_flow_item *item) case RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY: mask = &rte_flow_item_aggr_affinity_mask; break; + case RTE_FLOW_ITEM_TYPE_TX_QUEUE: + mask = &rte_flow_item_tx_queue_mask; + break; default: break; } diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..ac5c65131f 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -1486,6 +1486,13 @@ This item is meant to use the same structure as `Item: PORT_REPRESENTOR`_. See also `Action: REPRESENTED_PORT`_. +Item: ``TX_QUEUE`` +^^^^^^^^^^^^^^^^^^^^^^^ + +Matches on the Tx queue of send packet . + +- ``tx_queue``: Tx queue. + Item: ``AGGR_AFFINITY`` ^^^^^^^^^^^^^^^^^^^^^^^ diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst index a9b1293689..bb04d99125 100644 --- a/doc/guides/rel_notes/release_23_07.rst +++ b/doc/guides/rel_notes/release_23_07.rst @@ -55,6 +55,11 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= + * **Added flow matching of tx queue.** + + Added ``RTE_FLOW_ITEM_TYPE_TX_QUEUE`` rte_flow pattern to match tx queue of + send packet. + Removed Items ------------- diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 8f23847859..29f7dd4428 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3779,6 +3779,10 @@ This section lists supported pattern items and their attributes, if any. - ``affinity {value}``: aggregated port (starts from 1). +- ``tx_queue``: match tx queue of send packet. + + - ``tx_queue {value}``: send queue value (starts from 0). + - ``send_to_kernel``: send packets to kernel. diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 69e6e749f7..f0d7f868fa 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -164,6 +164,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = { MK_FLOW_ITEM(IPV6_ROUTING_EXT, sizeof(struct rte_flow_item_ipv6_routing_ext)), MK_FLOW_ITEM(QUOTA, sizeof(struct rte_flow_item_quota)), MK_FLOW_ITEM(AGGR_AFFINITY, sizeof(struct rte_flow_item_aggr_affinity)), + MK_FLOW_ITEM(TX_QUEUE, sizeof(struct rte_flow_item_tx_queue)), }; /** Generate flow_action[] entry. 
*/ diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..fe28ba0a82 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -672,8 +672,34 @@ enum rte_flow_item_type { * @see struct rte_flow_item_aggr_affinity. */ RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY, + /** + * Match Tx queue number. + * This is valid only for egress rules. + * + * @see struct rte_flow_item_tx_queue + */ + RTE_FLOW_ITEM_TYPE_TX_QUEUE, }; +/** + * RTE_FLOW_ITEM_TYPE_TX_QUEUE + * + * Tx queue number + * + * @see struct rte_flow_item_tx_queue + */ +struct rte_flow_item_tx_queue { + /** Tx queue number that packet is being transmitted */ + uint16_t tx_queue; +}; + +/** Default mask for RTE_FLOW_ITEM_TX_QUEUE. */ +#ifndef __cplusplus +static const struct rte_flow_item_tx_queue rte_flow_item_tx_queue_mask = { + .tx_queue = RTE_BE16(0xffff), +}; +#endif + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice.

From patchwork Thu May 11 07:55:02 2023
X-Patchwork-Submitter: Dong Zhou
X-Patchwork-Id: 126812
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Dong Zhou
To: Aman Singh, Yuying Zhang, Ferruh Yigit, Andrew Rybchenko, Olivier Matz
Subject: [PATCH v1 1/3] ethdev: add flow item for RoCE infiniband BTH
Date: Thu, 11 May 2023 10:55:02 +0300
Message-ID: <20230511075504.664871-2-dongzhou@nvidia.com>
In-Reply-To: <20230511075504.664871-1-dongzhou@nvidia.com>
References: <20230511075504.664871-1-dongzhou@nvidia.com>
IB (InfiniBand) is a networking technology used in high-performance computing,
providing high throughput and low latency. Like Ethernet, IB defines a layered
protocol (Physical, Link, Network, Transport layers). IB provides native
support for RDMA (Remote DMA), an extension of DMA that allows direct access
to remote host memory without CPU intervention. An IB network requires NICs
and switches that support the IB protocol.

RoCE (RDMA over Converged Ethernet) is a network protocol that allows RDMA to
run on Ethernet. RoCE encapsulates IB packets on Ethernet and has two
versions, RoCEv1 and RoCEv2. RoCEv1 is an Ethernet link layer protocol; IB
packets are encapsulated in the Ethernet layer and use EtherType 0x8915.
RoCEv2 is an internet layer protocol; IB packets are encapsulated in the UDP
payload and use destination port 4791. The format of a RoCEv2 packet is as
follows:

ETH + IP + UDP(dport 4791) + IB(BTH + ExtHDR + PAYLOAD + CRC)

BTH (Base Transport Header) is the IB transport layer header; both RoCEv1 and
RoCEv2 contain this header.

This patch introduces a new RTE item to match the IB BTH in RoCE packets. One
use of this match is that the user can monitor RoCEv2's CNP (Congestion
Notification Packet) by matching BTH opcode 0x81.
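For illustration, a comparable rule built directly through the C flow API
could look roughly like the sketch below (the queue index is arbitrary and
only the BTH opcode is masked; these choices are examples, not part of the
patch):

#include <rte_flow.h>

/* Illustrative only: match RoCEv2 CNP packets (BTH opcode 0x81) and steer
 * them to an example queue. Attributes and error handling are omitted. */
static const struct rte_flow_item_udp udp_spec = {
	.hdr = { .dst_port = RTE_BE16(4791) },
};
static const struct rte_flow_item_udp udp_mask = {
	.hdr = { .dst_port = RTE_BE16(0xffff) },
};
static const struct rte_flow_item_ib_bth bth_spec = {
	.hdr = { .opcode = 0x81 },
};
static const struct rte_flow_item_ib_bth bth_mask = {
	.hdr = { .opcode = 0xff },
};
static const struct rte_flow_item cnp_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &udp_spec, .mask = &udp_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_IB_BTH, .spec = &bth_spec, .mask = &bth_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
static const struct rte_flow_action_queue cnp_queue = { .index = 0 };
static const struct rte_flow_action cnp_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &cnp_queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

The pattern and actions arrays would then be passed to rte_flow_validate() or
rte_flow_create() together with an ingress attribute, much like the testpmd
rule shown next.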
This patch also adds the testpmd command line to match the RoCEv2 BTH. Usage example: testpmd> flow create 0 group 1 ingress pattern eth / ipv4 / udp dst is 4791 / ib_bth opcode is 0x81 dst_qp is 0xd3 / end actions queue index 0 / end Signed-off-by: Dong Zhou Acked-by: Ori Kam Acked-by: Andrew Rybchenko --- app/test-pmd/cmdline_flow.c | 58 ++++++++++++++++++ doc/guides/nics/features/default.ini | 1 + doc/guides/prog_guide/rte_flow.rst | 7 +++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 +++ lib/ethdev/rte_flow.c | 1 + lib/ethdev/rte_flow.h | 27 ++++++++ lib/net/meson.build | 1 + lib/net/rte_ib.h | 68 +++++++++++++++++++++ 8 files changed, 170 insertions(+) create mode 100644 lib/net/rte_ib.h diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 58939ec321..3ade229ffc 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -496,6 +496,11 @@ enum index { ITEM_QUOTA_STATE_NAME, ITEM_AGGR_AFFINITY, ITEM_AGGR_AFFINITY_VALUE, + ITEM_IB_BTH, + ITEM_IB_BTH_OPCODE, + ITEM_IB_BTH_PKEY, + ITEM_IB_BTH_DST_QPN, + ITEM_IB_BTH_PSN, /* Validate/create actions. */ ACTIONS, @@ -1452,6 +1457,7 @@ static const enum index next_item[] = { ITEM_METER, ITEM_QUOTA, ITEM_AGGR_AFFINITY, + ITEM_IB_BTH, END_SET, ZERO, }; @@ -1953,6 +1959,15 @@ static const enum index item_aggr_affinity[] = { ZERO, }; +static const enum index item_ib_bth[] = { + ITEM_IB_BTH_OPCODE, + ITEM_IB_BTH_PKEY, + ITEM_IB_BTH_DST_QPN, + ITEM_IB_BTH_PSN, + ITEM_NEXT, + ZERO, +}; + static const enum index next_action[] = { ACTION_END, ACTION_VOID, @@ -5523,6 +5538,46 @@ static const struct token token_list[] = { .call = parse_quota_state_name, .comp = comp_quota_state_name }, + [ITEM_IB_BTH] = { + .name = "ib_bth", + .help = "match ib bth fields", + .priv = PRIV_ITEM(IB_BTH, + sizeof(struct rte_flow_item_ib_bth)), + .next = NEXT(item_ib_bth), + .call = parse_vc, + }, + [ITEM_IB_BTH_OPCODE] = { + .name = "opcode", + .help = "match ib bth opcode", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.opcode)), + }, + [ITEM_IB_BTH_PKEY] = { + .name = "pkey", + .help = "partition key", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.pkey)), + }, + [ITEM_IB_BTH_DST_QPN] = { + .name = "dst_qp", + .help = "destination qp", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.dst_qp)), + }, + [ITEM_IB_BTH_PSN] = { + .name = "psn", + .help = "packet sequence number", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.psn)), + }, /* Validate/create actions. 
*/ [ACTIONS] = { .name = "actions", @@ -11849,6 +11904,9 @@ flow_item_default_mask(const struct rte_flow_item *item) case RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY: mask = &rte_flow_item_aggr_affinity_mask; break; + case RTE_FLOW_ITEM_TYPE_IB_BTH: + mask = &rte_flow_item_ib_bth_mask; + break; default: break; } diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index 1a5087abad..1738715e26 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -104,6 +104,7 @@ gtpc = gtpu = gtp_psc = higig2 = +ib_bth = icmp = icmp6 = icmp6_echo_request = diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..e2957df71c 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -1551,6 +1551,13 @@ Matches flow quota state set by quota action. - ``state``: Flow quota state +Item: ``IB_BTH`` +^^^^^^^^^^^^^^^^ + +Matches an InfiniBand base transport header in RoCE packet. + +- ``hdr``: InfiniBand base transport header definition (``rte_ib.h``). + Actions ~~~~~~~ diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 8f23847859..4bad244029 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3781,6 +3781,13 @@ This section lists supported pattern items and their attributes, if any. - ``send_to_kernel``: send packets to kernel. +- ``ib_bth``: match InfiniBand BTH(base transport header). + + - ``opcode {unsigned}``: Opcode. + - ``pkey {unsigned}``: Partition key. + - ``dst_qp {unsigned}``: Destination Queue Pair. + - ``psn {unsigned}``: Packet Sequence Number. + Actions list ^^^^^^^^^^^^ diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 69e6e749f7..6e099deca3 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -164,6 +164,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = { MK_FLOW_ITEM(IPV6_ROUTING_EXT, sizeof(struct rte_flow_item_ipv6_routing_ext)), MK_FLOW_ITEM(QUOTA, sizeof(struct rte_flow_item_quota)), MK_FLOW_ITEM(AGGR_AFFINITY, sizeof(struct rte_flow_item_aggr_affinity)), + MK_FLOW_ITEM(IB_BTH, sizeof(struct rte_flow_item_ib_bth)), }; /** Generate flow_action[] entry. */ diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..2b7f144c27 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -38,6 +38,7 @@ #include #include #include +#include #ifdef __cplusplus extern "C" { @@ -672,6 +673,13 @@ enum rte_flow_item_type { * @see struct rte_flow_item_aggr_affinity. */ RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY, + + /** + * Matches an InfiniBand base transport header in RoCE packet. + * + * See struct rte_flow_item_ib_bth. + */ + RTE_FLOW_ITEM_TYPE_IB_BTH, }; /** @@ -2260,6 +2268,25 @@ rte_flow_item_aggr_affinity_mask = { }; #endif +/** + * RTE_FLOW_ITEM_TYPE_IB_BTH. + * + * Matches an InfiniBand base transport header in RoCE packet. + */ +struct rte_flow_item_ib_bth { + struct rte_ib_bth hdr; /**< InfiniBand base transport header definition. */ +}; + +/** Default mask for RTE_FLOW_ITEM_TYPE_IB_BTH. */ +#ifndef __cplusplus +static const struct rte_flow_item_ib_bth rte_flow_item_ib_bth_mask = { + .hdr = { + .opcode = 0xff, + .dst_qp = "\xff\xff\xff", + }, +}; +#endif + /** * Action types. 
* diff --git a/lib/net/meson.build b/lib/net/meson.build index 379d161ee0..b7a0684101 100644 --- a/lib/net/meson.build +++ b/lib/net/meson.build @@ -22,6 +22,7 @@ headers = files( 'rte_geneve.h', 'rte_l2tpv2.h', 'rte_ppp.h', + 'rte_ib.h', ) sources = files( diff --git a/lib/net/rte_ib.h b/lib/net/rte_ib.h new file mode 100644 index 0000000000..c1b2797815 --- /dev/null +++ b/lib/net/rte_ib.h @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 NVIDIA Corporation & Affiliates + */ + +#ifndef RTE_IB_H +#define RTE_IB_H + +/** + * @file + * + * InfiniBand headers definitions + * + * The infiniBand headers are used by RoCE (RDMA over Converged Ethernet). + */ + +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * InfiniBand Base Transport Header according to + * IB Specification Vol 1-Release-1.4. + */ +__extension__ +struct rte_ib_bth { + uint8_t opcode; /**< Opcode. */ +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t tver:4; /**< Transport Header Version. */ + uint8_t padcnt:2; /**< Pad Count. */ + uint8_t m:1; /**< MigReq. */ + uint8_t se:1; /**< Solicited Event. */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t se:1; /**< Solicited Event. */ + uint8_t m:1; /**< MigReq. */ + uint8_t padcnt:2; /**< Pad Count. */ + uint8_t tver:4; /**< Transport Header Version. */ +#endif + rte_be16_t pkey; /**< Partition key. */ +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t rsvd0:6; /**< Reserved. */ + uint8_t b:1; /**< BECN. */ + uint8_t f:1; /**< FECN. */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t f:1; /**< FECN. */ + uint8_t b:1; /**< BECN. */ + uint8_t rsvd0:6; /**< Reserved. */ +#endif + uint8_t dst_qp[3]; /**< Destination QP */ +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t rsvd1:7; /**< Reserved. */ + uint8_t a:1; /**< Acknowledge Request. */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t a:1; /**< Acknowledge Request. */ + uint8_t rsvd1:7; /**< Reserved. */ +#endif + uint8_t psn[3]; /**< Packet Sequence Number */ +} __rte_packed; + +/** RoCEv2 default port. 
*/ +#define RTE_ROCEV2_DEFAULT_PORT 4791 + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_IB_H */
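Beyond hardware flow matching, the new rte_ib.h definitions can also be used to classify RoCEv2 traffic in software. A rough, hypothetical sketch, assuming an untagged IPv4/UDP frame and omitting length and fragmentation checks:

  #include <stdbool.h>
  #include <netinet/in.h>
  #include <rte_ether.h>
  #include <rte_ip.h>
  #include <rte_udp.h>
  #include <rte_mbuf.h>
  #include <rte_byteorder.h>
  #include <rte_ib.h>

  /* Return true when the mbuf carries a RoCEv2 CNP (BTH opcode 0x81). */
  static bool
  is_rocev2_cnp(const struct rte_mbuf *m)
  {
          const struct rte_ipv4_hdr *ip;
          const struct rte_udp_hdr *udp;
          const struct rte_ib_bth *bth;

          ip = rte_pktmbuf_mtod_offset(m, const struct rte_ipv4_hdr *,
                                       sizeof(struct rte_ether_hdr));
          if (ip->next_proto_id != IPPROTO_UDP)
                  return false;
          udp = (const struct rte_udp_hdr *)((const uint8_t *)ip + rte_ipv4_hdr_len(ip));
          if (rte_be_to_cpu_16(udp->dst_port) != RTE_ROCEV2_DEFAULT_PORT)
                  return false;
          bth = (const struct rte_ib_bth *)(udp + 1);
          return bth->opcode == 0x81;
  }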
From patchwork Thu May 11 07:55:03 2023
From: Dong Zhou
To: Matan Azrad
Subject: [PATCH v1 2/3] net/mlx5: add support for infiniband BTH match
Date: Thu, 11 May 2023 10:55:03 +0300
Message-ID: <20230511075504.664871-3-dongzhou@nvidia.com>
In-Reply-To: <20230511075504.664871-1-dongzhou@nvidia.com>
43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.161]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT103.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB7572 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch adds support to match opcode and dst_qp fields in infiniband BTH. Currently, only the RoCEv2 packet is supported, the input BTH match item is defaulted to match one RoCEv2 packet. Signed-off-by: Dong Zhou --- drivers/common/mlx5/mlx5_prm.h | 5 +- drivers/net/mlx5/mlx5_flow.h | 6 ++ drivers/net/mlx5/mlx5_flow_dv.c | 102 ++++++++++++++++++++++++++++++++ 3 files changed, 111 insertions(+), 2 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index ed3d5efbb7..8f55fd59b3 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -932,7 +932,7 @@ struct mlx5_ifc_fte_match_set_misc_bits { u8 gre_key_h[0x18]; u8 gre_key_l[0x8]; u8 vxlan_vni[0x18]; - u8 reserved_at_b8[0x8]; + u8 bth_opcode[0x8]; u8 geneve_vni[0x18]; u8 lag_rx_port_affinity[0x4]; u8 reserved_at_e8[0x2]; @@ -945,7 +945,8 @@ struct mlx5_ifc_fte_match_set_misc_bits { u8 reserved_at_120[0xa]; u8 geneve_opt_len[0x6]; u8 geneve_protocol_type[0x10]; - u8 reserved_at_140[0x20]; + u8 reserved_at_140[0x8]; + u8 bth_dst_qp[0x18]; u8 inner_esp_spi[0x20]; u8 outer_esp_spi[0x20]; u8 reserved_at_1a0[0x60]; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 1d116ea0f6..c1d6a71708 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -227,6 +227,9 @@ enum mlx5_feature_name { /* Aggregated affinity item */ #define MLX5_FLOW_ITEM_AGGR_AFFINITY (UINT64_C(1) << 49) +/* IB BTH ITEM. */ +#define MLX5_FLOW_ITEM_IB_BTH (1ull << 51) + /* Outer Masks. */ #define MLX5_FLOW_LAYER_OUTER_L3 \ (MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6) @@ -364,6 +367,9 @@ enum mlx5_feature_name { #define MLX5_UDP_PORT_VXLAN 4789 #define MLX5_UDP_PORT_VXLAN_GPE 4790 +/* UDP port numbers for RoCEv2. */ +#define MLX5_UDP_PORT_ROCEv2 4791 + /* UDP port numbers for GENEVE. */ #define MLX5_UDP_PORT_GENEVE 6081 diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index f136f43b0a..b7dc8ecaf7 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -7193,6 +7193,65 @@ flow_dv_validate_item_flex(struct rte_eth_dev *dev, return 0; } +/** + * Validate IB BTH item. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] udp_dport + * UDP destination port + * @param[in] item + * Item specification. + * @param root + * Whether action is on root table. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +mlx5_flow_validate_item_ib_bth(struct rte_eth_dev *dev, + uint16_t udp_dport, + const struct rte_flow_item *item, + bool root, + struct rte_flow_error *error) +{ + const struct rte_flow_item_ib_bth *mask = item->mask; + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_item_ib_bth *valid_mask; + int ret; + + valid_mask = &rte_flow_item_ib_bth_mask; + if (udp_dport && udp_dport != MLX5_UDP_PORT_ROCEv2) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "protocol filtering not compatible" + " with UDP layer"); + if (mask && (mask->hdr.se || mask->hdr.m || mask->hdr.padcnt || + mask->hdr.tver || mask->hdr.pkey || mask->hdr.f || mask->hdr.b || + mask->hdr.rsvd0 || mask->hdr.a || mask->hdr.rsvd1 || + mask->hdr.psn[0] || mask->hdr.psn[1] || mask->hdr.psn[2])) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "only opcode and dst_qp are supported"); + if (root || priv->sh->steering_format_version == + MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "IB BTH item is not supported"); + if (!mask) + mask = &rte_flow_item_ib_bth_mask; + ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, + (const uint8_t *)valid_mask, + sizeof(struct rte_flow_item_ib_bth), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); + if (ret < 0) + return ret; + return 0; +} + /** * Internal validation function. For validating both actions and items. * @@ -7700,6 +7759,14 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, return ret; last_item = MLX5_FLOW_ITEM_AGGR_AFFINITY; break; + case RTE_FLOW_ITEM_TYPE_IB_BTH: + ret = mlx5_flow_validate_item_ib_bth(dev, udp_dport, + items, is_root, error); + if (ret < 0) + return ret; + + last_item = MLX5_FLOW_ITEM_IB_BTH; + break; default: return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, @@ -10971,6 +11038,37 @@ flow_dv_translate_item_aggr_affinity(void *key, affinity_v->affinity & affinity_m->affinity); } +static void +flow_dv_translate_item_ib_bth(void *key, + const struct rte_flow_item *item, + int inner, uint32_t key_type) +{ + const struct rte_flow_item_ib_bth *bth_m; + const struct rte_flow_item_ib_bth *bth_v; + void *headers_v, *misc_v; + uint16_t udp_dport; + char *qpn_v; + int i, size; + + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { + udp_dport = key_type & MLX5_SET_MATCHER_M ? 
+ 0xFFFF : MLX5_UDP_PORT_ROCEv2; + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, udp_dport); + } + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, bth_v, bth_m, &rte_flow_item_ib_bth_mask); + misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); + MLX5_SET(fte_match_set_misc, misc_v, bth_opcode, + bth_v->hdr.opcode & bth_m->hdr.opcode); + qpn_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, bth_dst_qp); + size = sizeof(bth_m->hdr.dst_qp); + for (i = 0; i < size; ++i) + qpn_v[i] = bth_m->hdr.dst_qp[i] & bth_v->hdr.dst_qp[i]; +} + static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 }; #define HEADER_IS_ZERO(match_criteria, headers) \ @@ -13772,6 +13870,10 @@ flow_dv_translate_items(struct rte_eth_dev *dev, flow_dv_translate_item_aggr_affinity(key, items, key_type); last_item = MLX5_FLOW_ITEM_AGGR_AFFINITY; break; + case RTE_FLOW_ITEM_TYPE_IB_BTH: + flow_dv_translate_item_ib_bth(key, items, tunnel, key_type); + last_item = MLX5_FLOW_ITEM_IB_BTH; + break; default: break; } }
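A note on usage (illustrative, not part of the patch): because the translation above fills in the RoCEv2 UDP destination port when the pattern leaves it unspecified, and validation only accepts masks on the BTH opcode and destination QP, a minimal rule on mlx5 can omit the explicit UDP port match. Assuming port 0 is started and queue 0 exists, an equivalent testpmd command could be:

  testpmd> flow create 0 group 1 ingress pattern eth / ipv4 / udp / ib_bth opcode is 0x81 / end actions queue index 0 / end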
From patchwork Thu May 11 07:55:04 2023
From: Dong Zhou
To: Matan Azrad
Subject: [PATCH v1 3/3] net/mlx5/hws: add support for infiniband BTH match
Date: Thu, 11 May 2023 10:55:04 +0300
Message-ID: <20230511075504.664871-4-dongzhou@nvidia.com>
In-Reply-To: <20230511075504.664871-1-dongzhou@nvidia.com>
This patch adds support for matching the opcode and dst_qp fields of the InfiniBand BTH. Currently only RoCEv2 packets are supported; the input BTH match item defaults to matching a RoCEv2 packet.
Signed-off-by: Dong Zhou --- drivers/net/mlx5/hws/mlx5dr_definer.c | 76 ++++++++++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_definer.h | 2 + drivers/net/mlx5/mlx5_flow_hw.c | 1 + 3 files changed, 78 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c index f92d3e8e1f..1a427c9b64 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.c +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -10,6 +10,7 @@ #define ETH_TYPE_IPV6_VXLAN 0x86DD #define ETH_VXLAN_DEFAULT_PORT 4789 #define IP_UDP_PORT_MPLS 6635 +#define UDP_ROCEV2_PORT 4791 #define DR_FLOW_LAYER_TUNNEL_NO_MPLS (MLX5_FLOW_LAYER_TUNNEL & ~MLX5_FLOW_LAYER_MPLS) #define STE_NO_VLAN 0x0 @@ -171,7 +172,9 @@ struct mlx5dr_definer_conv_data { X(SET_BE16, gre_opt_checksum, v->checksum_rsvd.checksum, rte_flow_item_gre_opt) \ X(SET, meter_color, rte_col_2_mlx5_col(v->color), rte_flow_item_meter_color) \ X(SET_BE32, ipsec_spi, v->hdr.spi, rte_flow_item_esp) \ - X(SET_BE32, ipsec_sequence_number, v->hdr.seq, rte_flow_item_esp) + X(SET_BE32, ipsec_sequence_number, v->hdr.seq, rte_flow_item_esp) \ + X(SET, ib_l4_udp_port, UDP_ROCEV2_PORT, rte_flow_item_ib_bth) \ + X(SET, ib_l4_opcode, v->hdr.opcode, rte_flow_item_ib_bth) /* Item set function format */ #define X(set_type, func_name, value, item_type) \ @@ -583,6 +586,16 @@ mlx5dr_definer_mpls_label_set(struct mlx5dr_definer_fc *fc, memcpy(tag + fc->byte_off + sizeof(v->label_tc_s), &v->ttl, sizeof(v->ttl)); } +static void +mlx5dr_definer_ib_l4_qp_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ib_bth *v = item_spec; + + memcpy(tag + fc->byte_off, &v->hdr.dst_qp, sizeof(v->hdr.dst_qp)); +} + static int mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, struct rte_flow_item *item, @@ -2041,6 +2054,63 @@ mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd, return 0; } +static int +mlx5dr_definer_conv_item_ib_l4(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ib_bth *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* In order to match on RoCEv2(layer4 ib), we must match + * on ip_protocol and l4_dport. 
+ */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ib_l4_udp_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + } + + if (!m) + return 0; + + if (m->hdr.se || m->hdr.m || m->hdr.padcnt || m->hdr.tver || + m->hdr.pkey || m->hdr.f || m->hdr.b || m->hdr.rsvd0 || + m->hdr.a || m->hdr.rsvd1 || !is_mem_zero(m->hdr.psn, 3)) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->hdr.opcode) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_IB_L4_OPCODE]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ib_l4_opcode_set; + DR_CALC_SET_HDR(fc, ib_l4, opcode); + } + + if (!is_mem_zero(m->hdr.dst_qp, 3)) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_IB_L4_QPN]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ib_l4_qp_set; + DR_CALC_SET_HDR(fc, ib_l4, qp); + } + + return 0; +} + static int mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx, struct mlx5dr_match_template *mt, @@ -2182,6 +2252,10 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx, item_flags |= MLX5_FLOW_LAYER_MPLS; cd.mpls_idx++; break; + case RTE_FLOW_ITEM_TYPE_IB_BTH: + ret = mlx5dr_definer_conv_item_ib_l4(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_IB_BTH; + break; default: DR_LOG(ERR, "Unsupported item type %d", items->type); rte_errno = ENOTSUP; diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h index 90ec4ce845..6b645f4cf0 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.h +++ b/drivers/net/mlx5/hws/mlx5dr_definer.h @@ -134,6 +134,8 @@ enum mlx5dr_definer_fname { MLX5DR_DEFINER_FNAME_OKS2_MPLS2_I, MLX5DR_DEFINER_FNAME_OKS2_MPLS3_I, MLX5DR_DEFINER_FNAME_OKS2_MPLS4_I, + MLX5DR_DEFINER_FNAME_IB_L4_OPCODE, + MLX5DR_DEFINER_FNAME_IB_L4_QPN, MLX5DR_DEFINER_FNAME_MAX, }; diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 7e0ee8d883..9381646267 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -4969,6 +4969,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev, case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT: case RTE_FLOW_ITEM_TYPE_ESP: case RTE_FLOW_ITEM_TYPE_FLEX: + case RTE_FLOW_ITEM_TYPE_IB_BTH: break; case RTE_FLOW_ITEM_TYPE_INTEGRITY: /* From patchwork Tue May 16 06:37:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 126866 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CF19442B20; Tue, 16 May 2023 08:38:04 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 716C242D41; Tue, 16 May 2023 08:38:01 +0200 (CEST) Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2052.outbound.protection.outlook.com [40.107.220.52]) by mails.dpdk.org (Postfix) with ESMTP id 1BF8442D39; Tue, 16 May 2023 08:38:00 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; 
From: Michael Baum
Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon
Subject: [PATCH v1 1/7] doc: fix blank lines in modify field action description
Date: Tue, 16 May 2023 09:37:41 +0300
Message-ID: <20230516063747.3047758-2-michaelba@nvidia.com>
In-Reply-To: <20230516063747.3047758-1-michaelba@nvidia.com>

The modify field action description inside the "Generic flow API (rte_flow)" documentation lists all operations supported for a destination field. In addition, it lists the values supported for an encapsulation level field.
Before the lists, in both cases, a blank line is missing, which makes them render as regular text lines. This patch adds the blank lines. Fixes: 73b68f4c54a0 ("ethdev: introduce generic modify flow action") Cc: akozyrev@nvidia.com Cc: stable@dpdk.org Signed-off-by: Michael Baum --- doc/guides/prog_guide/rte_flow.rst | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..e7faa368a1 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -2917,20 +2917,23 @@ The immediate value ``RTE_FLOW_FIELD_VALUE`` (or a pointer to it ``RTE_FLOW_FIELD_START`` is used to point to the beginning of a packet. See ``enum rte_flow_field_id`` for the list of supported fields. -``op`` selects the operation to perform on a destination field. +``op`` selects the operation to perform on a destination field: + - ``set`` copies the data from ``src`` field to ``dst`` field. - ``add`` adds together ``dst`` and ``src`` and stores the result into ``dst``. -- ``sub`` subtracts ``src`` from ``dst`` and stores the result into ``dst`` +- ``sub`` subtracts ``src`` from ``dst`` and stores the result into ``dst``. ``width`` defines a number of bits to use from ``src`` field. ``level`` is used to access any packet field on any encapsulation level -as well as any tag element in the tag array. -- ``0`` means the default behaviour. Depending on the packet type, it can -mean outermost, innermost or anything in between. +as well as any tag element in the tag array: + +- ``0`` means the default behaviour. Depending on the packet type, + it can mean outermost, innermost or anything in between. - ``1`` requests access to the outermost packet encapsulation level. - ``2`` and subsequent values requests access to the specified packet -encapsulation level, from outermost to innermost (lower to higher values). + encapsulation level, from outermost to innermost (lower to higher values). + For the tag array (in case of multiple tags are supported and present) ``level`` translates directly into the array index.
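To make the "op" and "level" semantics above concrete, here is a hypothetical sketch (not from this patch) of a modify-field action that overwrites the IPv4 TTL with an immediate value. The member names follow the public rte_flow API, but the exact layout of struct rte_flow_action_modify_data has varied between DPDK releases, so treat the initializers as an assumption to verify against the target release:

  #include <rte_flow.h>

  /* Set the IPv4 TTL to 64, taking the value from an immediate source. */
  static const struct rte_flow_action_modify_field set_ttl = {
          .operation = RTE_FLOW_MODIFY_SET,
          .dst = {
                  .field = RTE_FLOW_FIELD_IPV4_TTL,
                  .level = 0,              /* default encapsulation level */
          },
          .src = {
                  .field = RTE_FLOW_FIELD_VALUE,
                  .value = { 64 },         /* immediate value, only "width" bits are used */
          },
          .width = 8,                      /* TTL is an 8-bit field */
  };

  static const struct rte_flow_action modify_actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, .conf = &set_ttl },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };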
From patchwork Tue May 16 06:37:42 2023
From: Michael Baum
Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon
Subject: [PATCH v1 2/7] doc: fix blank line in asynchronous operations description
Date: Tue, 16 May 2023 09:37:42 +0300
Message-ID: <20230516063747.3047758-3-michaelba@nvidia.com>
In-Reply-To: <20230516063747.3047758-1-michaelba@nvidia.com>
The asynchronous operations description inside the "Generic flow API (rte_flow)" documentation adds some bullets to describe the behaviour of asynchronous operations. Before the first bullet, a blank line is missing, which makes it render as a regular text line. This patch adds the blank line. Fixes: 197e820c6685 ("ethdev: bring in async queue-based flow rules operations") Cc: akozyrev@nvidia.com Cc: stable@dpdk.org Signed-off-by: Michael Baum Acked-by: Ori Kam --- doc/guides/prog_guide/rte_flow.rst | 1 + 1 file changed, 1 insertion(+) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index e7faa368a1..76e69190fc 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -3702,6 +3702,7 @@ Asynchronous operations ----------------------- Flow rules management can be done via special lockless flow management queues. + - Queue operations are asynchronous and not thread-safe. - Operations can thus be invoked by the app's datapath,
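As a hedged aside (not part of this documentation fix), the queues described in that section are typically used along the following lines. The sketch assumes the port was already set up with rte_flow_configure(), that a template table tbl exists, and that the pattern/actions arrays match templates with index 0; all of these names are placeholders:

  #include <rte_common.h>
  #include <rte_flow.h>

  /* Enqueue one rule creation, flush the queue and wait for the completion. */
  static struct rte_flow *
  enqueue_rule(uint16_t port_id, uint32_t queue_id,
               struct rte_flow_template_table *tbl,
               const struct rte_flow_item pattern[],
               const struct rte_flow_action actions[],
               struct rte_flow_error *err)
  {
          const struct rte_flow_op_attr op_attr = { .postpone = 0 };
          struct rte_flow_op_result res[8];
          struct rte_flow *flow;
          int n;

          flow = rte_flow_async_create(port_id, queue_id, &op_attr, tbl,
                                       pattern, 0, actions, 0, NULL, err);
          if (flow == NULL)
                  return NULL;
          rte_flow_push(port_id, queue_id, err);   /* submit the queued operation */
          do {
                  n = rte_flow_pull(port_id, queue_id, res, RTE_DIM(res), err);
          } while (n == 0);                        /* poll until the result arrives */
          return flow;
  }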
From patchwork Tue May 16 06:37:43 2023
From: Michael Baum
Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon
Subject: [PATCH v1 3/7] doc: fix wrong indentation in RSS action description
Date: Tue, 16 May 2023 09:37:43 +0300
Message-ID: <20230516063747.3047758-4-michaelba@nvidia.com>
In-Reply-To: <20230516063747.3047758-1-michaelba@nvidia.com>
<20230516063747.3047758-1-michaelba@nvidia.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: CY4PEPF0000C984:EE_|DS0PR12MB8501:EE_ X-MS-Office365-Filtering-Correlation-Id: 1ad59209-6ed0-487a-6d05-08db55d819e5 X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: PXssF12k/G+sxohoIk4Re6jWY4cijkliDbr40bLzt0gt6evMODybeMSVDXZz+5RiHevvFFncKmh9Bm3h4TzVXeytjfdcoyYkns+Qw59BFw/dan0ZaTBPcG6XfAsMhx8x+U+YaMTi/yPJFoCSIxt7dcr9yehH2IHQSdA2VuZmovJB89OAF83eyP/HoTCYdSIh9AnQV+mixVEf5NAsWuqALm4bqwFzOgxBM6r1durt0/pJ8Pzz/PdEPaZXgPaFoV+l/UrVXKPJEasFbLJH8+JvpK54A7EFMz/M/F6iUJ41Q32+D7cYp7urubTYtO7w229SzmcuJvrFCfJDwAybkUukQt8wtMO6yDJLgys/4X6WP3614HPb4PwNNTfmlr+ny20n9Y/KJwIfI5S5QeNLKM4A9EYhgiLaJuQCabPPxaZr0YuhuvOUpGxXtesEmUbI037d/UVERwzsPkyx4daZRgQaOURqTrwfwt7SjHSRSk4dit/mGU23WXS8QBZwHZxHRqN8sVcqLav8DMoPCQdVDRIHn+xNWZ70Zdmt5GeOTn9NuKhnxiFGlnkYc3Ocq6YnNvSuuhMZgJbJhGjCjoAP0clYViT2LdE4gdlCAvhYpJ+rYkxKa/EZMc/k/QFl4VIMth7qYbEDOZHUZawtpeNsVoeIaOZPSSwb0XGE9eMKZnZNqecgrhAVFwhPE5sEJaPtwg6efKsg6qN94wmibbmX9KvCQJkj2emRlPIvSzgkdEBJXsDZWuvbFN70WBUJzhSunEAJ X-Forefront-Antispam-Report: CIP:216.228.118.232; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:dc7edge1.nvidia.com; CAT:NONE; SFS:(13230028)(4636009)(346002)(396003)(39860400002)(376002)(136003)(451199021)(36840700001)(46966006)(40470700004)(478600001)(54906003)(40460700003)(5660300002)(8676002)(8936002)(7636003)(2906002)(82310400005)(86362001)(36756003)(40480700001)(4326008)(82740400003)(6916009)(316002)(70586007)(70206006)(356005)(55016003)(83380400001)(41300700001)(2616005)(1076003)(26005)(36860700001)(426003)(336012)(6286002)(186003)(7696005)(6666004)(47076005); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 06:38:03.7726 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 1ad59209-6ed0-487a-6d05-08db55d819e5 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.118.232]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CY4PEPF0000C984.namprd02.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB8501 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The RSS action description inside "Generic flow API (rte_flow)" documentation, lists the values supported for a encapsulation level field. For "2" value, it uses 3 spaces as an indentation instead of 2 after line breaking, causing the first line to be bold. This patch updates the number of spaces in the indentation. 
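An illustrative reStructuredText sketch (not a hunk from this patch): with the
2-character ``- `` bullet marker, continuation lines must be indented by exactly
2 spaces to stay in the same paragraph; a 3-space indent makes docutils parse the
bullet's first line as a definition-list term, which Sphinx then renders in bold.
The corrected form therefore reads:

    - ``2`` and subsequent values request RSS to be performed on the specified
      inner packet encapsulation level, from outermost to innermost (lower to
      higher values).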
Fixes: 18aee2861a1f ("ethdev: add encap level to RSS flow API action") Cc: adrien.mazarguil@6wind.com Cc: stable@dpdk.org Signed-off-by: Michael Baum Acked-by: Ori Kam --- doc/guides/prog_guide/rte_flow.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 76e69190fc..25b57bf86d 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -1954,8 +1954,8 @@ Also, regarding packet encapsulation ``level``: level. - ``2`` and subsequent values request RSS to be performed on the specified - inner packet encapsulation level, from outermost to innermost (lower to - higher values). + inner packet encapsulation level, from outermost to innermost (lower to + higher values). Values other than ``0`` are not necessarily supported. From patchwork Tue May 16 06:37:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 126869 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EF9F642B1F; Tue, 16 May 2023 08:38:28 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8647142D56; Tue, 16 May 2023 08:38:12 +0200 (CEST) Received: from NAM10-MW2-obe.outbound.protection.outlook.com (mail-mw2nam10on2053.outbound.protection.outlook.com [40.107.94.53]) by mails.dpdk.org (Postfix) with ESMTP id DE8DA42D3A for ; Tue, 16 May 2023 08:38:10 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=B1fDnfSYclWz1NzZnXPtwH2kjqzu5/FE0+l3Cylgy9RN/WXDibYch2FZG5AkHcmtOYcepx4Yci+Aj6GohHAnXW7PESF47wcz6hJ41OjFFfdtZqq2D75a0/QP5YDqGa9UQ8A/potq/Sa/SIZ6Qn8h19unrD6D8QLWNdNhqyU9X7IxYrLh8cTK1D/4vu7h+5ja5Thv/Ee2uW2GmPowtApzbMn1KE6g1IlTHuoD81FQ6djObw+edvZUyWZr3zz8AO+nsaswWhodDz4dilK3filsW2qc3OKJBmXu2PjyAeDI1gEOq2UikqpC8tCrUTnTJQ13z6Uv8IZ5OJef1vzNLRZULg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=xcB2cDkV9LHlZtF1BsYaDqhk+hW9QqG57+Vj/eaZ/Eg=; b=bDz4Bw2VlBrJQd/Rzu+kc0o2RGxthWtX2WliHsK90HGhzYNFWvpDlg0ibEZ1R9GPT79jCeGkZL0VysLiRovnt/eyWmmSAHp1Zv7UiPlEl3rpUnuo97KcXSCxTSlNMMW9LXb6Xz+UvxqBG0Sf8UOqn51tMi5GEmyePxvf/D2auk0q5hxzkZMY/RscBze5po0IjaxAhqNXrnpnwjMeKfDEKjPHRYPNNeKnzZ7PIb0uIXFgVPy/kaWGIdc29tzhx9oVr80fle2vdrDphqlpqBs0qmhikXtNg2vMfqDjYpa9RhdV2GkQmzaxHUajas1CHKygThLPi2VgYo6DBjFxWA1Wsg== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.118.232) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=xcB2cDkV9LHlZtF1BsYaDqhk+hW9QqG57+Vj/eaZ/Eg=; 
b=d+FnNVXXrUlPFcFZ9f8LrKzPvchU6ZC9iQzseKAZP/VVpMx7wbUfm3zPDysLUdwltIcEJjRtOOBXmAzJxM7p/nthDqx+ird2evjzQHug2PdVBBxuNOwXDSrk0MrlUlMRxmJtQ3r66LnLj258/Ss0yqXMb0cKd9TqN6Z5p+dey+TFcmtL0EELu+QJO2hyqvxJPjjpDNWNnDFemvCG6ujOOnhSiVuy4KthxTxVg3Wd83Tp936Uj5xYhdNTIDZMJ7HAZcowK8ydOFaPzAre9Qt4WtZ/OimKZDLMLSLTzcDQd0WfQ+6WPq2J95bTVXXNYxQmCbH1CArEuy3WzYTBPmtYAA== Received: from DM6PR03CA0064.namprd03.prod.outlook.com (2603:10b6:5:100::41) by SA1PR12MB7409.namprd12.prod.outlook.com (2603:10b6:806:29c::11) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May 2023 06:38:08 +0000 Received: from CY4PEPF0000C984.namprd02.prod.outlook.com (2603:10b6:5:100:cafe::69) by DM6PR03CA0064.outlook.office365.com (2603:10b6:5:100::41) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30 via Frontend Transport; Tue, 16 May 2023 06:38:07 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.118.232) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.118.232 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.118.232; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.118.232) by CY4PEPF0000C984.mail.protection.outlook.com (10.167.241.202) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.11 via Frontend Transport; Tue, 16 May 2023 06:38:07 +0000 Received: from drhqmail202.nvidia.com (10.126.190.181) by mail.nvidia.com (10.127.129.5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.5; Mon, 15 May 2023 23:38:03 -0700 Received: from drhqmail202.nvidia.com (10.126.190.181) by drhqmail202.nvidia.com (10.126.190.181) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.37; Mon, 15 May 2023 23:38:03 -0700 Received: from nvidia.com (10.127.8.13) by mail.nvidia.com (10.126.190.181) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.37 via Frontend Transport; Mon, 15 May 2023 23:38:01 -0700 From: Michael Baum To: CC: Ori Kam , Aman Singh , "Yuying Zhang" , Ferruh Yigit , "Thomas Monjalon" Subject: [PATCH v1 4/7] net/mlx5: reduce modify field encapsulation level size Date: Tue, 16 May 2023 09:37:44 +0300 Message-ID: <20230516063747.3047758-5-michaelba@nvidia.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230516063747.3047758-1-michaelba@nvidia.com> References: <20230516063747.3047758-1-michaelba@nvidia.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: CY4PEPF0000C984:EE_|SA1PR12MB7409:EE_ X-MS-Office365-Filtering-Correlation-Id: a577ea9c-86bf-42e5-05e7-08db55d81c24 X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 
nZa/ZTawxvJHJ8oBH69ipkXsP7J1XK5Ijrc0ypCdfm21akeXDCYovNwiwEhaWw1Hgvh9M8itiKFfB6ekH32iPYvRQ7MnbmVkyuiP95yGSqsUKmhfmQ6D/MHa/w/tOZrWJ0C94ajRz1B7kg8Mf6UM0E/kiq53NJR052PjSoeuSvi7Dx5GVdUf55GXC7wBA31ezV5ezyzeOpQDHyiJXTdt383lHHSJbCV7hObs2FkGch1lXFZlt9sLSf+HLn443bdlJtj/vzKHozP1xeuZoKl2vKIk981N9RMdon8oRsckJYF32Fz7kaW9xz2K2CTGuPY9UpkVL51nKgCDQuPdHRdeTj+0s3ljxoqE0ODPd+K7MIDqX3KtH8Pr+bay5hwJTfzQ+T+rXwVWPmlyi656LtqiFgCCtC8f2G6QS6p62KV5F4sEN49y+hdHT7L+/SB3aHyUcU027AWzPSVE4fP0lGvPxbby3eFEkcPnS8HllnTDcMpeBpIxBtq1buMFV4zII0ei4ABoBJk4Aegeri5dvDshuXb5Pj3x2WNjVP5lrdiQum3ROM7QeOIOz/8bQKE5LcCCaVYCVsQIKgVbd/PKredlHT3z1PBnilUaAkql5OXS3E8F7p4BdcV8Z2rfRXwfdUMw96R14Ehv87JnOUxnSbHaNHjpcCv/F5HrHCa5WD/aUFxlRwyMvY5POzaS+BBmTNSuelyseqF0UKj03tnYbydO2F6QQ0nmoKqQelZ3RQo9iZA= X-Forefront-Antispam-Report: CIP:216.228.118.232; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:dc7edge1.nvidia.com; CAT:NONE; SFS:(13230028)(4636009)(136003)(376002)(39860400002)(396003)(346002)(451199021)(36840700001)(40470700004)(46966006)(83380400001)(36860700001)(47076005)(336012)(426003)(478600001)(6666004)(7696005)(54906003)(2616005)(1076003)(26005)(6286002)(186003)(40460700003)(2906002)(36756003)(4326008)(70206006)(82740400003)(6916009)(70586007)(7636003)(356005)(41300700001)(8676002)(82310400005)(8936002)(316002)(55016003)(86362001)(40480700001)(5660300002); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 06:38:07.5539 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: a577ea9c-86bf-42e5-05e7-08db55d81c24 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.118.232]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CY4PEPF0000C984.namprd02.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB7409 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The type of "level" field in "rte_flow_action_modify_data" structure is uint32_t for now, but it is going to be changed to uint8_t in the next patch. For representing encapsulation level, 8 bits are more than enough and this change shouldn't affect the current implementation. However, when action template is created, the PMD requests to provide this field "fully masked" in action mask. The "fully masked" value is different between uint32_t and uint8_t types. This patch reduces all modify field encapsulation level "fully masked" initializations to use UINT8_MAX instead of UINT32_MAX. This change will avoid compilation warning after it will be changed to uint8_t by API. 
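For illustration only, a minimal sketch of a fully-masked modify-field action
mask after the type change (the META field and the surrounding values are chosen
arbitrarily for the example; this is not a hunk from the patch):

    #include <stdint.h>
    #include <rte_flow.h>

    /* Action mask for an actions template: the destination and source
     * "level" fields must be fully masked, now with an 8-bit maximum. */
    static const struct rte_flow_action_modify_field modify_field_mask = {
            .operation = RTE_FLOW_MODIFY_SET,
            .dst = {
                    .field = RTE_FLOW_FIELD_META,
                    .level = UINT8_MAX, /* was UINT32_MAX while "level" was uint32_t */
                    .offset = UINT32_MAX,
            },
            .src = {
                    .field = RTE_FLOW_FIELD_META,
                    .level = UINT8_MAX,
                    .offset = UINT32_MAX,
            },
            .width = UINT32_MAX,
    };

Using UINT8_MAX keeps the mask representable in the narrowed field and avoids
the truncation warning mentioned above.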
Signed-off-by: Michael Baum --- drivers/net/mlx5/mlx5_flow_hw.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 7e0ee8d883..1b68a19900 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -3565,7 +3565,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action, return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, action, "immediate value, pointer and hash result cannot be used as destination"); - if (mask_conf->dst.level != UINT32_MAX) + if (mask_conf->dst.level != UINT8_MAX) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, action, "destination encapsulation level must be fully masked"); @@ -3579,7 +3579,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action, "destination field mask and template are not equal"); if (action_conf->src.field != RTE_FLOW_FIELD_POINTER && action_conf->src.field != RTE_FLOW_FIELD_VALUE) { - if (mask_conf->src.level != UINT32_MAX) + if (mask_conf->src.level != UINT8_MAX) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, action, "source encapsulation level must be fully masked"); @@ -4450,7 +4450,7 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev, .operation = RTE_FLOW_MODIFY_SET, .dst = { .field = RTE_FLOW_FIELD_VLAN_ID, - .level = 0xffffffff, .offset = 0xffffffff, + .level = 0xff, .offset = 0xffffffff, }, .src = { .field = RTE_FLOW_FIELD_VALUE, @@ -4583,12 +4583,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev, .operation = RTE_FLOW_MODIFY_SET, .dst = { .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, - .level = UINT32_MAX, + .level = UINT8_MAX, .offset = UINT32_MAX, }, .src = { .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, - .level = UINT32_MAX, + .level = UINT8_MAX, .offset = UINT32_MAX, }, .width = UINT32_MAX, @@ -5653,7 +5653,7 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev) .operation = RTE_FLOW_MODIFY_SET, .dst = { .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, - .level = UINT32_MAX, + .level = UINT8_MAX, .offset = UINT32_MAX, }, .src = { @@ -5677,12 +5677,12 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev) .operation = RTE_FLOW_MODIFY_SET, .dst = { .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, - .level = UINT32_MAX, + .level = UINT8_MAX, .offset = UINT32_MAX, }, .src = { .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, - .level = UINT32_MAX, + .level = UINT8_MAX, .offset = UINT32_MAX, }, .width = UINT32_MAX, @@ -6009,7 +6009,7 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev) .operation = RTE_FLOW_MODIFY_SET, .dst = { .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, - .level = UINT32_MAX, + .level = UINT8_MAX, .offset = UINT32_MAX, }, .src = { @@ -6182,12 +6182,12 @@ flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev) .operation = RTE_FLOW_MODIFY_SET, .dst = { .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, - .level = UINT32_MAX, + .level = UINT8_MAX, .offset = UINT32_MAX, }, .src = { .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG, - .level = UINT32_MAX, + .level = UINT8_MAX, .offset = UINT32_MAX, }, .width = UINT32_MAX, From patchwork Tue May 16 06:37:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum 
X-Patchwork-Id: 126871 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id ECD1B42B1F; Tue, 16 May 2023 08:38:41 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 215B742D74; Tue, 16 May 2023 08:38:18 +0200 (CEST) Received: from NAM12-BN8-obe.outbound.protection.outlook.com (mail-bn8nam12on2086.outbound.protection.outlook.com [40.107.237.86]) by mails.dpdk.org (Postfix) with ESMTP id 00AE742D6D for ; Tue, 16 May 2023 08:38:16 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=f3ZxLIJc9NPWdV+w1fTdm9DKBw3a+j53/fEWFHCS/QFZPhIW8mxU9+XoJs8/1woGq/17pbLJFfjmnHtXsmCQLQ1bgBz67QY+K1h4vpCTsu59scIDHbhf+Nterk7pe03tYX1zUzIkXTRljdxYwtFVd9reSiZSgOJYI7iFi66X1ZljxIzSb6hpsyOoYvWEcw7FmFKcjUUOEMVTCLYQr8cD/f8PM4t6AGC3wh/1m3Z5IVfLqIAvSKXE6KC0XHnL7zRNgtvF2vF8q2j/GmgVc3lCVJsOklO9ZxwwPIB+jNSyVCKuLTwHKBegMHWNWeQfnE3q4rsGLTqaBKZ5mLx+lH5aNA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=g3AN0DyO2hoEfPewtdVCUtcmUALz+iVNxCua84TkSJY=; b=Do8XgD0HQZq8++T/DB8XpFIte+TxN1JXyVqFxEjCmPoQ1E0vbywZIsyEXmjESruw21Vo9SztudL8gKihP6ZHgtef74KaBXjsArZMaSbn9uHIyd++DGjIXTBffgckZJ3Ypgev91CY41UXgLidnNju/qgVmSbdRqmI8eBL+4VkJ+GGGd8Vffmes9wu+YzAwaiwZnZnd908HDHFz2Glz5bT9DTdFjwtRTF9AYOs7Ad6I+q0IgHT8LaiDhqnfhxayHmdwgQVpJU0UpoIkwS/Jmxyabsyl2Vj4/XVx9ZPE+JmIFjER+qyal+zozqWVVLmZ/hx5f/OxYIM2joSXF9jEMTAuQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.118.232) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=g3AN0DyO2hoEfPewtdVCUtcmUALz+iVNxCua84TkSJY=; b=La7k/ZE4Rb2o9wOXUxAS8ed4QZHRddQ9n18QlA3MtBsN19JIhfKzQTfBTDCMbdpMAHvav6PYRLJwGJeXud8Tca81uPNoFIZID0N6r71CrWqTUcFpJiMvwna4wdiU+Fb916pixQ9oEYLd//jf5zsC2QR2ww7R7xAI0WGBxVhsMNAHvgzXhAw8tEdAqosx6nZD7Bu9mxKoLrEGJuB2OIvzi46KJPsurSSvf7L7p+RL9uLvyI9s9KrERvbhK17rM60nboZEBep8erhYXoWz1EgxnmTg8k7lPrhM0nloJZJUPeciJDWNHPH3gsXB8yKBp1T3dsCudLt454EZxuhNlHjoAg== Received: from CYZPR10CA0001.namprd10.prod.outlook.com (2603:10b6:930:8a::25) by PH7PR12MB6859.namprd12.prod.outlook.com (2603:10b6:510:1b5::12) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May 2023 06:38:14 +0000 Received: from CY4PEPF0000C97E.namprd02.prod.outlook.com (2603:10b6:930:8a:cafe::69) by CYZPR10CA0001.outlook.office365.com (2603:10b6:930:8a::25) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.31 via Frontend Transport; Tue, 16 May 2023 06:38:13 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.118.232) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.118.232 as permitted sender) receiver=protection.outlook.com; 
client-ip=216.228.118.232; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.118.232) by CY4PEPF0000C97E.mail.protection.outlook.com (10.167.241.198) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.11 via Frontend Transport; Tue, 16 May 2023 06:38:13 +0000 Received: from drhqmail201.nvidia.com (10.126.190.180) by mail.nvidia.com (10.127.129.5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.5; Mon, 15 May 2023 23:38:06 -0700 Received: from drhqmail202.nvidia.com (10.126.190.181) by drhqmail201.nvidia.com (10.126.190.180) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.37; Mon, 15 May 2023 23:38:05 -0700 Received: from nvidia.com (10.127.8.13) by mail.nvidia.com (10.126.190.181) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.37 via Frontend Transport; Mon, 15 May 2023 23:38:04 -0700 From: Michael Baum To: CC: Ori Kam , Aman Singh , "Yuying Zhang" , Ferruh Yigit , "Thomas Monjalon" Subject: [PATCH v1 5/7] ethdev: add GENEVE TLV option modification support Date: Tue, 16 May 2023 09:37:45 +0300 Message-ID: <20230516063747.3047758-6-michaelba@nvidia.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230516063747.3047758-1-michaelba@nvidia.com> References: <20230516063747.3047758-1-michaelba@nvidia.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: CY4PEPF0000C97E:EE_|PH7PR12MB6859:EE_ X-MS-Office365-Filtering-Correlation-Id: 62ea44ed-3348-4ec6-8191-08db55d81fbe X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: RO1YfK3XckpbB6pQ7lp8bx8VKc/GHEArqjlqTpa5bUXm4SYUkJiS5/vm9T0zzXsZ1SVhWw2h82BnEGy87gjOG/9TiYDEjK3disBaEvrm9jHsAAJNY9D9jFcyXz1RVS5TjrX4wY0xGG28bsSH2SPQ4BfE37u5q/vQ2DjSkITH7xNDb+BuitF8oQIWDVV2xHo1NwLyJ1LDv31Tj1J+yeuc5PUG5fyr+x284d3v8z4a0lur/BKJA/Oebp4XwIFhsrU35QWRmgNPOv0/wS164Fmf2BGzxr5bWxtVWYftBUykoXKW8G6g+7Mwf9h4NDUgEstvciX0x2isVRSq1ob7zWorcIBGyQpqR8Yo8xQJeThcWVQMOPX31mdrVQIMczrAxAcWXQvqlQqLpG5n18H0bfdKgVkWY1D1fR+aguIJxGBvi+9nH9bn7fK2D2hvCrk5oNJCV+ngS40CYWJRTsnM09XSGKQbsIFXqu1gwCOx8WKm/bTkh3tyE7QHjPkHVUXiNH5KseQqvsNv6hNDBBFekUt5iuCcgcGetgi5UsNHflh5gFXOF6SzGdVMGoyLLjfrsMm4ODSuyrN+/aYou7x1tj8e66ab9NCi/t2fpugzZdX3VKnK47G/K9vyF1WTLn9M+w4s6EioZQYEi0OPbhWVclhWu60qrVl4QBvZ9tPeHGzINcZUP3r29ycKm6mPh8PPGRmAwY9hWiFACJ5eMkQty4axQd/7jLBkMp3BvqeABPJe6Hbc9/FjbCmi/s8BJM0MlGWtyWyJlRB5Ex614SGyx2ENtg== X-Forefront-Antispam-Report: CIP:216.228.118.232; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:dc7edge1.nvidia.com; CAT:NONE; SFS:(13230028)(4636009)(346002)(376002)(136003)(396003)(39860400002)(451199021)(46966006)(36840700001)(40470700004)(478600001)(40460700003)(54906003)(8676002)(86362001)(8936002)(5660300002)(7636003)(2906002)(36756003)(82310400005)(4326008)(6916009)(70586007)(70206006)(82740400003)(356005)(316002)(40480700001)(41300700001)(36860700001)(47076005)(186003)(6286002)(2616005)(1076003)(26005)(55016003)(83380400001)(336012)(426003)(7696005)(6666004)(21314003); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 06:38:13.5838 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 62ea44ed-3348-4ec6-8191-08db55d81fbe X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a 
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.118.232]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CY4PEPF0000C97E.namprd02.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB6859 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add modify field support for GENEVE option fields: - "RTE_FLOW_FIELD_GENEVE_OPT_TYPE" - "RTE_FLOW_FIELD_GENEVE_OPT_CLASS" - "RTE_FLOW_FIELD_GENEVE_OPT_DATA" Each GENEVE TLV option is identified by both its "class" and "type", so 2 new fields were added to "rte_flow_action_modify_data" structure to help specify which option to modify. To get room for those 2 new fields, the "level" field move to use "uint8_t" which is more than enough for encapsulation level. Signed-off-by: Michael Baum --- app/test-pmd/cmdline_flow.c | 48 +++++++++++++++++++++++- doc/guides/prog_guide/rte_flow.rst | 12 ++++++ doc/guides/rel_notes/release_23_07.rst | 3 ++ lib/ethdev/rte_flow.h | 51 +++++++++++++++++++++++++- 4 files changed, 112 insertions(+), 2 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 58939ec321..8c1dea53c0 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -636,11 +636,15 @@ enum index { ACTION_MODIFY_FIELD_DST_TYPE_VALUE, ACTION_MODIFY_FIELD_DST_LEVEL, ACTION_MODIFY_FIELD_DST_LEVEL_VALUE, + ACTION_MODIFY_FIELD_DST_TYPE_ID, + ACTION_MODIFY_FIELD_DST_CLASS_ID, ACTION_MODIFY_FIELD_DST_OFFSET, ACTION_MODIFY_FIELD_SRC_TYPE, ACTION_MODIFY_FIELD_SRC_TYPE_VALUE, ACTION_MODIFY_FIELD_SRC_LEVEL, ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE, + ACTION_MODIFY_FIELD_SRC_TYPE_ID, + ACTION_MODIFY_FIELD_SRC_CLASS_ID, ACTION_MODIFY_FIELD_SRC_OFFSET, ACTION_MODIFY_FIELD_SRC_VALUE, ACTION_MODIFY_FIELD_SRC_POINTER, @@ -854,7 +858,9 @@ static const char *const modify_field_ids[] = { "ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color", "ipv6_proto", "flex_item", - "hash_result", NULL + "hash_result", + "geneve_opt_type", "geneve_opt_class", "geneve_opt_data", + NULL }; static const char *const meter_colors[] = { @@ -2295,6 +2301,8 @@ static const enum index next_action_sample[] = { static const enum index action_modify_field_dst[] = { ACTION_MODIFY_FIELD_DST_LEVEL, + ACTION_MODIFY_FIELD_DST_TYPE_ID, + ACTION_MODIFY_FIELD_DST_CLASS_ID, ACTION_MODIFY_FIELD_DST_OFFSET, ACTION_MODIFY_FIELD_SRC_TYPE, ZERO, @@ -2302,6 +2310,8 @@ static const enum index action_modify_field_dst[] = { static const enum index action_modify_field_src[] = { ACTION_MODIFY_FIELD_SRC_LEVEL, + ACTION_MODIFY_FIELD_SRC_TYPE_ID, + ACTION_MODIFY_FIELD_SRC_CLASS_ID, ACTION_MODIFY_FIELD_SRC_OFFSET, ACTION_MODIFY_FIELD_SRC_VALUE, ACTION_MODIFY_FIELD_SRC_POINTER, @@ -6388,6 +6398,24 @@ static const struct token token_list[] = { .call = parse_vc_modify_field_level, .comp = comp_none, }, + [ACTION_MODIFY_FIELD_DST_TYPE_ID] = { + .name = "dst_type_id", + .help = "destination field type ID", + .next = NEXT(action_modify_field_dst, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field, + dst.type)), + .call = parse_vc_conf, + }, + [ACTION_MODIFY_FIELD_DST_CLASS_ID] = { + .name = "dst_class", + .help = "destination field class ID", + .next = 
NEXT(action_modify_field_dst, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field, + dst.class_id)), + .call = parse_vc_conf, + }, [ACTION_MODIFY_FIELD_DST_OFFSET] = { .name = "dst_offset", .help = "destination field bit offset", @@ -6423,6 +6451,24 @@ static const struct token token_list[] = { .call = parse_vc_modify_field_level, .comp = comp_none, }, + [ACTION_MODIFY_FIELD_SRC_TYPE_ID] = { + .name = "src_type_id", + .help = "source field type ID", + .next = NEXT(action_modify_field_src, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field, + src.type)), + .call = parse_vc_conf, + }, + [ACTION_MODIFY_FIELD_SRC_CLASS_ID] = { + .name = "src_class", + .help = "source field class ID", + .next = NEXT(action_modify_field_src, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field, + src.class_id)), + .call = parse_vc_conf, + }, [ACTION_MODIFY_FIELD_SRC_OFFSET] = { .name = "src_offset", .help = "source field bit offset", diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 25b57bf86d..cd38f0de46 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -2937,6 +2937,14 @@ as well as any tag element in the tag array: For the tag array (in case of multiple tags are supported and present) ``level`` translates directly into the array index. +``type`` is used to specify (along with ``class_id``) the Geneve option which +is being modified. +This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type. + +``class_id`` is used to specify (along with ``type``) the Geneve option which +is being modified. +This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type. + ``flex_handle`` is used to specify the flex item pointer which is being modified. ``flex_handle`` and ``level`` are mutually exclusive. @@ -2994,6 +3002,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}. +-----------------+----------------------------------------------------------+ | ``level`` | encapsulation level of a packet field or tag array index | +-----------------+----------------------------------------------------------+ + | ``type`` | geneve option type | + +-----------------+----------------------------------------------------------+ + | ``class_id`` | geneve option class ID | + +-----------------+----------------------------------------------------------+ | ``flex_handle`` | flex item handle of a packet field | +-----------------+----------------------------------------------------------+ | ``offset`` | number of bits to skip at the beginning | diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst index a9b1293689..ce1755096f 100644 --- a/doc/guides/rel_notes/release_23_07.rst +++ b/doc/guides/rel_notes/release_23_07.rst @@ -84,6 +84,9 @@ API Changes Also, make sure to start the actual text at the margin. ======================================================= +* The ``level`` field in experimental structure + ``struct rte_flow_action_modify_data`` was reduced to 8 bits. + ABI Changes ----------- diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..b82eb0c0a8 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -3773,6 +3773,9 @@ enum rte_flow_field_id { RTE_FLOW_FIELD_IPV6_PROTO, /**< IPv6 next header. */ RTE_FLOW_FIELD_FLEX_ITEM, /**< Flex item. */ RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. 
*/ + RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */ + RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */ + RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */ }; /** @@ -3788,7 +3791,53 @@ struct rte_flow_action_modify_data { struct { /** Encapsulation level or tag index or flex item handle. */ union { - uint32_t level; + struct { + /** + * Packet encapsulation level containing + * the field modify to. + * + * - @p 0 requests the default behavior. + * Depending on the packet type, it + * can mean outermost, innermost or + * anything in between. + * + * It basically stands for the + * innermost encapsulation level + * modification can be performed on + * according to PMD and device + * capabilities. + * + * - @p 1 requests modification to be + * performed on the outermost packet + * encapsulation level. + * + * - @p 2 and subsequent values request + * modification to be performed on + * the specified inner packet + * encapsulation level, from + * outermost to innermost (lower to + * higher values). + * + * Values other than @p 0 are not + * necessarily supported. + * + * For RTE_FLOW_FIELD_TAG it represents + * the tag element in the tag array. + */ + uint8_t level; + /** + * Geneve option type. relevant only + * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX + * modification type. + */ + uint8_t type; + /** + * Geneve option class. relevant only + * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX + * modification type. + */ + rte_be16_t class_id; + }; struct rte_flow_item_flex_handle *flex_handle; }; /** Number of bits to skip from a field. */ From patchwork Tue May 16 06:37:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 126870 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4824342B1F; Tue, 16 May 2023 08:38:35 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C082142D5F; Tue, 16 May 2023 08:38:15 +0200 (CEST) Received: from NAM04-BN8-obe.outbound.protection.outlook.com (mail-bn8nam04on2078.outbound.protection.outlook.com [40.107.100.78]) by mails.dpdk.org (Postfix) with ESMTP id EAFD742D5D for ; Tue, 16 May 2023 08:38:14 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=YpDji1TnPEeIIz2tQLqEx1hm+109udvbHOWl5fJcA1BU7w074csQ6ItoU8EhmdFqIYFbLz4N5M5TftdEdcNm1smC/ZfEYY2qSleNqyicasdrtgkZEEHkJIXsSzMtpi1WjysjnxgS4uOnWEMc2c+cXRan05Zl6WB+dSCewPDlDklrf5LDFNw9HQXL9cruDwmjqaCMrSHZaaQ93cMkRuiyXX2tup/7RY49q3SW9XlKgNANrDQzVZKqu3g/sylQTWf5QfejiZ9Qlag3LLCQXv/2lvZpz/9LgpON0sbsgpQZXtTWnM5Pt9MgUIoZCCGk6j3OGDRZ/gHjMMFfrCsdXlzw8A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=tBDsm+SAzmlckzOfjnfTkCFyNkfJKu4LG0G1GHnjmNg=; b=M/O41umBwKhrviX7PnukSDTceyJDbI09oH99pEE8g8/tKakSoKJv/0Eu9ztuFF++N7fn/JOYjZ4Nc+tWKTB6zcoeVC5V/APIIZOkpcMd3yBgfCs3RYeZrWfG9XinudCx+P2Qyc40XF27R/KlS/29d+hwTDEg+w40IleZkhMuwmj/zwTSuXqVimP0mJJpp+9r2Gv+TWYfUlJbclnbK6jqkhuI8LjryTDwaFr46TO/BBXDkYdTkQJPnRcV+W299tsNrWUYGe41T4tN/X5PpthyfG3SoAIyxacTMWlkErg2aWoQaB/czZ+hh/4iayllApnNWNHEGyZCbic64CcqzWunQQ== ARC-Authentication-Results: 
i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.118.233) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=tBDsm+SAzmlckzOfjnfTkCFyNkfJKu4LG0G1GHnjmNg=; b=ZzEJF3Y6/5pHamIBiqgZ5kKEwm7Iz2TsWtK4QvItReSF7qra5b/nHg2aJ5o5AqslBaW/yyL5m1WSQwXJRBP93sVuT2ZCK/2j0FCgkt/daQGyonCZhY3uQJKLW3RrwB12bPr5l3uyj2cAvnUZIsHK9DKoyOGMlW0y1Q9/rA1HEnYsmrOftrgLn30AmrxTOJwt97GyR92AZWshXWRo+yEeSOSG2qKBJeW3lfrf3Dx0qQ7EwhdpkLs6aNlTvwodmqu/9XQ9jxQb7p2l7Td+TZIteDd9c9TZZOS3R9aAgcLGxQUmQoo6UDXSJzn+8FQjgyGOwSbUWo8XqxeTj7MX6Avccw== Received: from DM6PR08CA0042.namprd08.prod.outlook.com (2603:10b6:5:1e0::16) by DS0PR12MB7582.namprd12.prod.outlook.com (2603:10b6:8:13c::18) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Tue, 16 May 2023 06:38:12 +0000 Received: from DS1PEPF0000E645.namprd02.prod.outlook.com (2603:10b6:5:1e0:cafe::70) by DM6PR08CA0042.outlook.office365.com (2603:10b6:5:1e0::16) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33 via Frontend Transport; Tue, 16 May 2023 06:38:12 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.118.233) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.118.233 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.118.233; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.118.233) by DS1PEPF0000E645.mail.protection.outlook.com (10.167.17.203) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.11 via Frontend Transport; Tue, 16 May 2023 06:38:12 +0000 Received: from drhqmail201.nvidia.com (10.126.190.180) by mail.nvidia.com (10.127.129.6) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.5; Mon, 15 May 2023 23:38:08 -0700 Received: from drhqmail202.nvidia.com (10.126.190.181) by drhqmail201.nvidia.com (10.126.190.180) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.37; Mon, 15 May 2023 23:38:07 -0700 Received: from nvidia.com (10.127.8.13) by mail.nvidia.com (10.126.190.181) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.37 via Frontend Transport; Mon, 15 May 2023 23:38:06 -0700 From: Michael Baum To: CC: Ori Kam , Aman Singh , "Yuying Zhang" , Ferruh Yigit , "Thomas Monjalon" Subject: [PATCH v1 6/7] ethdev: add MPLS header modification support Date: Tue, 16 May 2023 09:37:46 +0300 Message-ID: <20230516063747.3047758-7-michaelba@nvidia.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230516063747.3047758-1-michaelba@nvidia.com> References: <20230516063747.3047758-1-michaelba@nvidia.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DS1PEPF0000E645:EE_|DS0PR12MB7582:EE_ X-MS-Office365-Filtering-Correlation-Id: 45913d20-04f3-442a-8b7b-08db55d81f3f X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; 
X-Microsoft-Antispam-Message-Info: K5RWN49jvHLzTrkcDZnAhABnZuZp6mKFb55tnSNBoO9tXi5RFuJyt4j0BRPg/ophO5BGfazK8mNVUxYrE23BRBEWDei+PHZs/EiC/9nIN33cw8q+SQgDXnQsNDIil41OH3s9RVjL4u83n/lUSWjHgut46QTxDLY6XByU55DnTDaQwmJJy72hr+WqPWUJIfZBjQYTs5yyjwiMDd6jD8H+siXjK8KqYRukve4sDCyh+ivFPBJXBpizWPyJ6GwxgpRCdp29Xrczoc780/wPM77LqRzvzeD1U6c8E56XgyjAUaKAFFLMcjnl5lTN8zzwXli9B9LQBVTBpdiZzlsK3o0aXbFFJeXLQKdcVuU8wa84ih8PwdWLeRI2kZVWU5rzgytlGYNhB+xyf5n9X/lO0wWvBH/CMSjjgtma+0T63abNFieEsbZlU8c87O/LjcE+kd04vg+QNoUmE6Lq6euFEPINTxEeSGbvTE3cLh4N11F1U5UTvNoDJAqEEQIn6N9WFgYMk7Ftd+Rn2ZsuFoUaeLR/Hwwe250ypP/KQolmE49CML22wprJKiGfB885KPCGwxi9wQ/xBGDQ+B0GcsrzTckfVXp6FU2acAlVdx3G8AOzjrIe7nT3uTdwexGIWkEMgGT2DXdlroJqLEjFU0Z5zgjPjNw0VbNoiXL2Ob/IVk9EZxs94iBQLRXD3K0XSQpQus8bimSIz1PxqKThLPdBSsmf5tW12MGf+2KfPaQj5ZHFJ55kpm31c0lgR6ZDHa1dttsLu5JlooFMiTEaGVqD9H5usg== X-Forefront-Antispam-Report: CIP:216.228.118.233; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:dc7edge2.nvidia.com; CAT:NONE; SFS:(13230028)(4636009)(376002)(39860400002)(396003)(136003)(346002)(451199021)(36840700001)(40470700004)(46966006)(55016003)(86362001)(40460700003)(2906002)(82310400005)(40480700001)(316002)(70206006)(70586007)(4326008)(6916009)(7696005)(6666004)(36756003)(47076005)(54906003)(478600001)(36860700001)(336012)(83380400001)(26005)(8936002)(1076003)(82740400003)(186003)(8676002)(356005)(7636003)(6286002)(426003)(41300700001)(2616005)(5660300002)(21314003); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 06:38:12.7206 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 45913d20-04f3-442a-8b7b-08db55d81f3f X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.118.233]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: DS1PEPF0000E645.namprd02.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB7582 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add support for MPLS modify header using "RTE_FLOW_FIELD_MPLS" id. Since MPLS heaser might appear more the one time in inner/outer/tunnel, a new field was added to "rte_flow_action_modify_data" structure in addition to "level" field. The "sub_level" field is the index of the header inside encapsulation level. It is used for modify multiple MPLS headers in same encapsulation level. This addition enables to modify multiple VLAN headers too, so the description of "RTE_FLOW_FIELD_VLAN_XXXX" was updated. 
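As an illustration only (a hedged sketch, not part of this patch; the
destination, level and width values are chosen arbitrarily for the example),
selecting the second MPLS header of the outermost encapsulation level as a copy
source could look like:

    #include <stdint.h>
    #include <rte_flow.h>

    /* Copy the 20-bit label of the 2nd MPLS header (sub_level 1) found at
     * encapsulation level 1 into tag[0]; MPLS is used as a source only. */
    static const struct rte_flow_action_modify_field copy_from_mpls = {
            .operation = RTE_FLOW_MODIFY_SET,
            .dst = {
                    .field = RTE_FLOW_FIELD_TAG,
                    .level = 0,        /* tag array index */
            },
            .src = {
                    .field = RTE_FLOW_FIELD_MPLS,
                    .level = 1,        /* outermost encapsulation level */
                    .sub_level = 1,    /* second MPLS header at that level */
            },
            .width = 20,               /* MPLS label width in bits */
    };

The same "sub_level" index is what allows addressing the inner of two VLAN
headers when modifying the VLAN fields mentioned above.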
Signed-off-by: Michael Baum --- app/test-pmd/cmdline_flow.c | 24 ++++++++++++++- doc/guides/prog_guide/rte_flow.rst | 6 ++++ lib/ethdev/rte_flow.h | 47 ++++++++++++++++++++---------- 3 files changed, 61 insertions(+), 16 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 8c1dea53c0..5370be1e3c 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -636,6 +636,7 @@ enum index { ACTION_MODIFY_FIELD_DST_TYPE_VALUE, ACTION_MODIFY_FIELD_DST_LEVEL, ACTION_MODIFY_FIELD_DST_LEVEL_VALUE, + ACTION_MODIFY_FIELD_DST_SUB_LEVEL, ACTION_MODIFY_FIELD_DST_TYPE_ID, ACTION_MODIFY_FIELD_DST_CLASS_ID, ACTION_MODIFY_FIELD_DST_OFFSET, @@ -643,6 +644,7 @@ enum index { ACTION_MODIFY_FIELD_SRC_TYPE_VALUE, ACTION_MODIFY_FIELD_SRC_LEVEL, ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE, + ACTION_MODIFY_FIELD_SRC_SUB_LEVEL, ACTION_MODIFY_FIELD_SRC_TYPE_ID, ACTION_MODIFY_FIELD_SRC_CLASS_ID, ACTION_MODIFY_FIELD_SRC_OFFSET, @@ -859,7 +861,7 @@ static const char *const modify_field_ids[] = { "ipv6_proto", "flex_item", "hash_result", - "geneve_opt_type", "geneve_opt_class", "geneve_opt_data", + "geneve_opt_type", "geneve_opt_class", "geneve_opt_data", "mpls", NULL }; @@ -2301,6 +2303,7 @@ static const enum index next_action_sample[] = { static const enum index action_modify_field_dst[] = { ACTION_MODIFY_FIELD_DST_LEVEL, + ACTION_MODIFY_FIELD_DST_SUB_LEVEL, ACTION_MODIFY_FIELD_DST_TYPE_ID, ACTION_MODIFY_FIELD_DST_CLASS_ID, ACTION_MODIFY_FIELD_DST_OFFSET, @@ -2310,6 +2313,7 @@ static const enum index action_modify_field_dst[] = { static const enum index action_modify_field_src[] = { ACTION_MODIFY_FIELD_SRC_LEVEL, + ACTION_MODIFY_FIELD_SRC_SUB_LEVEL, ACTION_MODIFY_FIELD_SRC_TYPE_ID, ACTION_MODIFY_FIELD_SRC_CLASS_ID, ACTION_MODIFY_FIELD_SRC_OFFSET, @@ -6398,6 +6402,15 @@ static const struct token token_list[] = { .call = parse_vc_modify_field_level, .comp = comp_none, }, + [ACTION_MODIFY_FIELD_DST_SUB_LEVEL] = { + .name = "dst_sub_level", + .help = "destination field sub level", + .next = NEXT(action_modify_field_dst, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field, + dst.sub_level)), + .call = parse_vc_conf, + }, [ACTION_MODIFY_FIELD_DST_TYPE_ID] = { .name = "dst_type_id", .help = "destination field type ID", @@ -6451,6 +6464,15 @@ static const struct token token_list[] = { .call = parse_vc_modify_field_level, .comp = comp_none, }, + [ACTION_MODIFY_FIELD_SRC_SUB_LEVEL] = { + .name = "stc_sub_level", + .help = "source field sub level", + .next = NEXT(action_modify_field_src, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field, + src.sub_level)), + .call = parse_vc_conf, + }, [ACTION_MODIFY_FIELD_SRC_TYPE_ID] = { .name = "src_type_id", .help = "source field type ID", diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index cd38f0de46..1f681a38e4 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -2937,6 +2937,10 @@ as well as any tag element in the tag array: For the tag array (in case of multiple tags are supported and present) ``level`` translates directly into the array index. +- ``sub_level`` is the index of the header inside encapsulation level. + It is used for modify either ``VLAN`` or ``MPLS`` headers which multiple of + them might be supported in same encapsulation level. + ``type`` is used to specify (along with ``class_id``) the Geneve option which is being modified. 
This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type. @@ -3002,6 +3006,8 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}. +-----------------+----------------------------------------------------------+ | ``level`` | encapsulation level of a packet field or tag array index | +-----------------+----------------------------------------------------------+ + | ``sub_level`` | header level inside encapsulation level | + +-----------------+----------------------------------------------------------+ | ``type`` | geneve option type | +-----------------+----------------------------------------------------------+ | ``class_id`` | geneve option class ID | diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index b82eb0c0a8..4b2e17e266 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -3740,8 +3740,8 @@ enum rte_flow_field_id { RTE_FLOW_FIELD_START = 0, /**< Start of a packet. */ RTE_FLOW_FIELD_MAC_DST, /**< Destination MAC Address. */ RTE_FLOW_FIELD_MAC_SRC, /**< Source MAC Address. */ - RTE_FLOW_FIELD_VLAN_TYPE, /**< 802.1Q Tag Identifier. */ - RTE_FLOW_FIELD_VLAN_ID, /**< 802.1Q VLAN Identifier. */ + RTE_FLOW_FIELD_VLAN_TYPE, /**< VLAN Tag Identifier. */ + RTE_FLOW_FIELD_VLAN_ID, /**< VLAN Identifier. */ RTE_FLOW_FIELD_MAC_TYPE, /**< EtherType. */ RTE_FLOW_FIELD_IPV4_DSCP, /**< IPv4 DSCP. */ RTE_FLOW_FIELD_IPV4_TTL, /**< IPv4 Time To Live. */ @@ -3775,7 +3775,8 @@ enum rte_flow_field_id { RTE_FLOW_FIELD_HASH_RESULT, /**< Hash result. */ RTE_FLOW_FIELD_GENEVE_OPT_TYPE, /**< GENEVE option type */ RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */ - RTE_FLOW_FIELD_GENEVE_OPT_DATA /**< GENEVE option data */ + RTE_FLOW_FIELD_GENEVE_OPT_DATA, /**< GENEVE option data */ + RTE_FLOW_FIELD_MPLS /**< MPLS header. */ }; /** @@ -3821,22 +3822,38 @@ struct rte_flow_action_modify_data { * Values other than @p 0 are not * necessarily supported. * + * @note that for MPLS field, + * encapsulation level also include + * tunnel since MPLS may appear in + * outer, inner or tunnel. + * * For RTE_FLOW_FIELD_TAG it represents * the tag element in the tag array. */ uint8_t level; - /** - * Geneve option type. relevant only - * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX - * modification type. - */ - uint8_t type; - /** - * Geneve option class. relevant only - * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX - * modification type. - */ - rte_be16_t class_id; + union { + /** + * Header level inside + * encapsulation level. + */ + uint8_t sub_level; + /** + * Geneve option identifier. + * relevant only for + * RTE_FLOW_FIELD_GENEVE_OPT_XXXX + * modification type. + */ + struct { + /** + * Geneve option type. + */ + uint8_t type; + /** + * Geneve option class. 
+ */ + rte_be16_t class_id; + }; + }; }; struct rte_flow_item_flex_handle *flex_handle; }; From patchwork Tue May 16 06:37:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michael Baum X-Patchwork-Id: 126872 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BFF6E42B1F; Tue, 16 May 2023 08:38:49 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C73BE42D88; Tue, 16 May 2023 08:38:20 +0200 (CEST) Received: from NAM04-DM6-obe.outbound.protection.outlook.com (mail-dm6nam04on2077.outbound.protection.outlook.com [40.107.102.77]) by mails.dpdk.org (Postfix) with ESMTP id C70F642D47 for ; Tue, 16 May 2023 08:38:18 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=BrSwaqCOljT73trOggg3f5SePXgW4kuHAcGmUkB4mbBN23iNtEoX/pQj4E/A9N0Lgwy9BFb0Gl1W81nWHYQgGPsKs1XVAnQP1hCy0kiNKTZi7qIRHKnTdAt6uE5FqtVWJr4ogrK75c6nFym0BlKq8tj9ePlsGwaQTjRY66L9xsXWMYTgXABfOzG58mw1myiR0yCxbyY8fbYCSFME7Ja78j5ocVwOqIRXwGI4Kpjwa4ZEysfTOHQblz/GKuQ5Nh8GgqfXb5AjmhFvrKKUMVgyFrCsiOmqOutJPbB2VNn+Id5uzP45pTGcSGgDo9GaJO08zcY42aaXbV/5/tsldW0EIQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=yq613IPiI3VvhJCWbbJ13LLrqXW8B83Q1NW/HQJ4Crs=; b=hoMq/ZHK64LIJnsOfdVk9NTgN52GN5ZI3ifp3vFwrVt1RsEiZ3jxe6amDnlwjifUGsyR3nk+sUQbSIAa0TzqXFFxbq2XaGvAwHpvJgSDJ7Lt1EdqtMNvurqljpjzsa0qnvcF18lvZjwl7J2uJ21XDFn1ScZcNDS1oHe3BdLorOSlSYdaU4xvogqyz2S4QLn1YKPsdUmIs8HsK66SEMBC0oHmfslI09U+7WMtfKj/53eZMOdMT2Gp+d60+uBqu3ktYkLOw/MOIxKyp2bRc5pdmBSoENrQpQWW//27+wDdv5PqJ9+vU9PdZuViXgItsHES1WN99rVnlcbClgi7OgAtdg== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.118.233) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=yq613IPiI3VvhJCWbbJ13LLrqXW8B83Q1NW/HQJ4Crs=; b=ZJgtklstrWN04zJtqRZcrhUKr/HvoYby/pa74C37VBFR3JHcLzM/bANNlXr3aSSMQseLX07pXZsRmFd668mTICZ4/2lgxDEww4GwoWyOgJxd2GjAALkTZ8RO8mJmaQQEKb6p8UPRf/QVlgsqKHXsOm0OOzL3nqLy3YKUKp0czku6LocKiFEoBe3WZT/5La3uF8HkfFM4lgPFHRr2RTkjUlM1ilk/+/h2igtuaAqO/9qzzjF4Ycn/KEw72ZNuHIuKnAiA6BFF3VSlMz0cxgQnyOwxhsKcgB3JqsYdweEbJWqGsqQg2vnkHqvTMTHuVlOL7xLmzQbuzPQcpy5Vsstsxw== Received: from MW4PR03CA0057.namprd03.prod.outlook.com (2603:10b6:303:8e::32) by CY8PR12MB8315.namprd12.prod.outlook.com (2603:10b6:930:7e::8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.33; Tue, 16 May 2023 06:38:17 +0000 Received: from CO1NAM11FT060.eop-nam11.prod.protection.outlook.com (2603:10b6:303:8e:cafe::3d) by MW4PR03CA0057.outlook.office365.com (2603:10b6:303:8e::32) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30 via Frontend Transport; Tue, 16 May 2023 06:38:17 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.118.233) 
smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.118.233 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.118.233; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.118.233) by CO1NAM11FT060.mail.protection.outlook.com (10.13.175.132) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.15 via Frontend Transport; Tue, 16 May 2023 06:38:16 +0000 Received: from drhqmail201.nvidia.com (10.126.190.180) by mail.nvidia.com (10.127.129.6) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.5; Mon, 15 May 2023 23:38:10 -0700 Received: from drhqmail202.nvidia.com (10.126.190.181) by drhqmail201.nvidia.com (10.126.190.180) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.37; Mon, 15 May 2023 23:38:10 -0700 Received: from nvidia.com (10.127.8.13) by mail.nvidia.com (10.126.190.181) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.37 via Frontend Transport; Mon, 15 May 2023 23:38:08 -0700 From: Michael Baum To: CC: Ori Kam , Aman Singh , "Yuying Zhang" , Ferruh Yigit , "Thomas Monjalon" Subject: [PATCH v1 7/7] net/mlx5: add MPLS modify field support Date: Tue, 16 May 2023 09:37:47 +0300 Message-ID: <20230516063747.3047758-8-michaelba@nvidia.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230516063747.3047758-1-michaelba@nvidia.com> References: <20230516063747.3047758-1-michaelba@nvidia.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: CO1NAM11FT060:EE_|CY8PR12MB8315:EE_ X-MS-Office365-Filtering-Correlation-Id: 641f53af-f06b-4ee4-ac85-08db55d821c3 X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: S8B7OfGc8jQOlQYCzv5pTToeJMPP44OBFqOBuuWoMi5eP3ofKKjfz+phZl+UfNlNDq2xgfvTkNKFDCJwCN9WMC9BcLYgQDvyFMoZullrwutknaFvzDwNvtC9jnvgfqyAoJVhwmQ4h12QFO0q272UO9CFaME98criuSIFigGs3diNV/PeFVcFtTjvvDfb+uiDVn0tg/upeq9qklJA1MwObVcXSIRECJXOdvQDPxn0esufLrFSzdh5g+mA6lat8/S9ppriPXTmhFQoSZrXLJrahRpOvALx1VIQ5GQL1r77g9la/VYVoNsTx17dMSWZQUkxaaCHWpFukFtoSrP+UBJb39O8K6QJE3WXQAIynxe971RVsoYZr7bcmROdXloTCMLzgFc/LLhnuk9c3KhiKNqnml7PUmPttYgga1jiCeInLyfhxdCnpbo28JJ5OZmlpATetwrQ2s0LA62y7eLrV22z7eYWAZ4V63weEY7ae6GrsTxvn4xa8IDsjuISuz/6wkGpwTyQbBzBCVEkR8q8hArrCpnuV5asmuI8ZNL9Fm7vsx+muFFBssn1JYniu1pR1cb7vFxnvVbW+i0mSGlete+zTIX9HC4KzEb4C7VeXwmCtsxEzOJCMoJzduQDMeLiHzH8ZNV8n0r4XNNUcYoEh1dMK/37G8Ant5E6P7I8hsVAFIOcmixa6S5WfoRGI4dVVCpzrbfmucKaGkgKeCBRNBrJQ8WB8iVd0girUZnnBSOECOZxEy7wnKisEJ2abZp81/TJ X-Forefront-Antispam-Report: CIP:216.228.118.233; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:dc7edge2.nvidia.com; CAT:NONE; SFS:(13230028)(4636009)(396003)(346002)(39860400002)(136003)(376002)(451199021)(46966006)(40470700004)(36840700001)(36756003)(86362001)(54906003)(316002)(4326008)(70586007)(70206006)(6916009)(478600001)(7696005)(6666004)(40480700001)(55016003)(82310400005)(8936002)(8676002)(5660300002)(41300700001)(2906002)(7636003)(356005)(82740400003)(336012)(2616005)(1076003)(26005)(426003)(6286002)(186003)(36860700001)(83380400001)(47076005)(40460700003); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com 
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 May 2023 06:38:16.9888 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 641f53af-f06b-4ee4-ac85-08db55d821c3 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.118.233]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT060.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB8315 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add support for modify field in tunnel MPLS header. For now it is supported only to copy from. Signed-off-by: Michael Baum --- drivers/common/mlx5/mlx5_prm.h | 5 +++++ drivers/net/mlx5/mlx5_flow_dv.c | 23 +++++++++++++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 16 +++++++++------- 3 files changed, 37 insertions(+), 7 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index ed3d5efbb7..04c1400a1e 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -787,6 +787,11 @@ enum mlx5_modification_field { MLX5_MODI_TUNNEL_HDR_DW_1 = 0x75, MLX5_MODI_GTPU_FIRST_EXT_DW_0 = 0x76, MLX5_MODI_HASH_RESULT = 0x81, + MLX5_MODI_IN_MPLS_LABEL_0 = 0x8a, + MLX5_MODI_IN_MPLS_LABEL_1, + MLX5_MODI_IN_MPLS_LABEL_2, + MLX5_MODI_IN_MPLS_LABEL_3, + MLX5_MODI_IN_MPLS_LABEL_4, MLX5_MODI_OUT_IPV6_NEXT_HDR = 0x4A, MLX5_MODI_INVALID = INT_MAX, }; diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index f136f43b0a..93cce16a1e 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -1388,6 +1388,7 @@ mlx5_flow_item_field_width(struct rte_eth_dev *dev, case RTE_FLOW_FIELD_GENEVE_VNI: return 24; case RTE_FLOW_FIELD_GTP_TEID: + case RTE_FLOW_FIELD_MPLS: case RTE_FLOW_FIELD_TAG: return 32; case RTE_FLOW_FIELD_MARK: @@ -1435,6 +1436,12 @@ flow_modify_info_mask_32_masked(uint32_t length, uint32_t off, uint32_t post_mas return rte_cpu_to_be_32(mask & post_mask); } +static __rte_always_inline enum mlx5_modification_field +mlx5_mpls_modi_field_get(const struct rte_flow_action_modify_data *data) +{ + return MLX5_MODI_IN_MPLS_LABEL_0 + data->sub_level; +} + static void mlx5_modify_flex_item(const struct rte_eth_dev *dev, const struct mlx5_flex_item *flex, @@ -1893,6 +1900,16 @@ mlx5_flow_field_id_to_modify_info else info[idx].offset = off_be; break; + case RTE_FLOW_FIELD_MPLS: + MLX5_ASSERT(data->offset + width <= 32); + off_be = 32 - (data->offset + width); + info[idx] = (struct field_modify_info){4, 0, + mlx5_mpls_modi_field_get(data)}; + if (mask) + mask[idx] = flow_modify_info_mask_32(width, off_be); + else + info[idx].offset = off_be; + break; case RTE_FLOW_FIELD_TAG: { MLX5_ASSERT(data->offset + width <= 32); @@ -5344,6 +5361,12 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev, RTE_FLOW_ERROR_TYPE_ACTION, action, "modifications of the GENEVE Network" " Identifier is not supported"); + if (action_modify_field->dst.field == RTE_FLOW_FIELD_MPLS || + action_modify_field->src.field == RTE_FLOW_FIELD_MPLS) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "modifications of the MPLS header " + "is not supported"); if 
(action_modify_field->dst.field == RTE_FLOW_FIELD_MARK || action_modify_field->src.field == RTE_FLOW_FIELD_MARK) if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY || diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 1b68a19900..80e6398992 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -3546,10 +3546,8 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action, const struct rte_flow_action *mask, struct rte_flow_error *error) { - const struct rte_flow_action_modify_field *action_conf = - action->conf; - const struct rte_flow_action_modify_field *mask_conf = - mask->conf; + const struct rte_flow_action_modify_field *action_conf = action->conf; + const struct rte_flow_action_modify_field *mask_conf = mask->conf; if (action_conf->operation != mask_conf->operation) return rte_flow_error_set(error, EINVAL, @@ -3604,6 +3602,11 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action, return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, action, "modifying Geneve VNI is not supported"); + /* Due to HW bug, tunnel MPLS header is read only. */ + if (action_conf->dst.field == RTE_FLOW_FIELD_MPLS) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "MPLS cannot be used as destination"); return 0; } @@ -4134,9 +4137,8 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev, action_flags |= MLX5_FLOW_ACTION_METER; break; case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - ret = flow_hw_validate_action_modify_field(action, - mask, - error); + ret = flow_hw_validate_action_modify_field(action, mask, + error); if (ret < 0) return ret; action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; From patchwork Thu May 18 08:28:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kiran Kumar Kokkilagadda X-Patchwork-Id: 126978 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 866C642B39; Thu, 18 May 2023 10:28:41 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 765CE42B71; Thu, 18 May 2023 10:28:41 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 6189340E25 for ; Thu, 18 May 2023 10:28:40 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 34I618o3001232; Thu, 18 May 2023 01:28:38 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=5n20BbL/8uPxtgP4II/MnLPteVqOBE8vhdYFkot8VJw=; b=bNu/10lvEalLbZX63c9mKjHt6EtxBffxYt48i9inJNZFQ0/qSTDEmM6F7lGCz5Ty3krN NdH90okaKpIUyzsn4LehpWenx4MaMBlGOvQuxU8iaSbZujd3dEGwc9plFG+eyc7nD5OO BXZy8FHBeUnViAOZvaj7+OaT3HL62oTV+3Ovi556CvVXNkE5wGRXKTAsSMvrAmNoAePv 08u5tU1XlgriFdLBZKotxx4o8qUdjQzM5V+Ru6igd3KAjBMOMSjvfjlqsOgrgIT58nf7 90yP8jA8TX+rfo2aOs1dZpY50aASO3U47qxdr79Ju2sZQ03vrEslwqeyHoCBOUsJg9km wg== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3qmyexbc17-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 18 May 2023 01:28:38 -0700 
Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 18 May 2023 01:28:36 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 18 May 2023 01:28:36 -0700 Received: from cavium-DT31.. (unknown [10.28.36.159]) by maili.marvell.com (Postfix) with ESMTP id BFC143F70A5; Thu, 18 May 2023 01:28:33 -0700 (PDT) From: To: Ori Kam , Aman Singh , "Yuying Zhang" , Thomas Monjalon , "Ferruh Yigit" , Andrew Rybchenko CC: , Kiran Kumar K Subject: [PATCH v3] ethdev: add Tx queue flow matching item Date: Thu, 18 May 2023 13:58:30 +0530 Message-ID: <20230518082830.1730890-1-kirankumark@marvell.com> X-Mailer: git-send-email 2.34.1 MIME-Version: 1.0 X-Proofpoint-GUID: Mz4ceg2MxLOJnm5vLv3sxASZNZVjTLZO X-Proofpoint-ORIG-GUID: Mz4ceg2MxLOJnm5vLv3sxASZNZVjTLZO X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.573,FMLib:17.11.170.22 definitions=2023-05-18_06,2023-05-17_02,2023-02-09_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Kiran Kumar K Adding support for Tx queue flow matching item. This item is valid only for egress rules. An example use case would be that application can set different vlan insert rules with different PCP values based on Tx queue number. Signed-off-by: Kiran Kumar K Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 28 +++++++++++++++++++++ doc/guides/prog_guide/rte_flow.rst | 7 ++++++ doc/guides/rel_notes/release_23_07.rst | 5 ++++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 +++ lib/ethdev/rte_flow.c | 1 + lib/ethdev/rte_flow.h | 26 +++++++++++++++++++ 6 files changed, 71 insertions(+) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 58939ec321..a68a6080a8 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -496,6 +496,8 @@ enum index { ITEM_QUOTA_STATE_NAME, ITEM_AGGR_AFFINITY, ITEM_AGGR_AFFINITY_VALUE, + ITEM_TX_QUEUE, + ITEM_TX_QUEUE_VALUE, /* Validate/create actions. */ ACTIONS, @@ -1452,6 +1454,7 @@ static const enum index next_item[] = { ITEM_METER, ITEM_QUOTA, ITEM_AGGR_AFFINITY, + ITEM_TX_QUEUE, END_SET, ZERO, }; @@ -1953,6 +1956,12 @@ static const enum index item_aggr_affinity[] = { ZERO, }; +static const enum index item_tx_queue[] = { + ITEM_TX_QUEUE_VALUE, + ITEM_NEXT, + ZERO, +}; + static const enum index next_action[] = { ACTION_END, ACTION_VOID, @@ -6945,6 +6954,22 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct rte_flow_item_aggr_affinity, affinity)), }, + [ITEM_TX_QUEUE] = { + .name = "tx_queue", + .help = "match on the tx queue of send packet", + .priv = PRIV_ITEM(TX_QUEUE, + sizeof(struct rte_flow_item_tx_queue)), + .next = NEXT(item_tx_queue), + .call = parse_vc, + }, + [ITEM_TX_QUEUE_VALUE] = { + .name = "tx_queue_value", + .help = "tx queue value", + .next = NEXT(item_tx_queue, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_tx_queue, + tx_queue)), + }, }; /** Remove and return last entry from argument stack. 
*/ @@ -11849,6 +11874,9 @@ flow_item_default_mask(const struct rte_flow_item *item) case RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY: mask = &rte_flow_item_aggr_affinity_mask; break; + case RTE_FLOW_ITEM_TYPE_TX_QUEUE: + mask = &rte_flow_item_tx_queue_mask; + break; default: break; } diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..ac5c65131f 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -1486,6 +1486,13 @@ This item is meant to use the same structure as `Item: PORT_REPRESENTOR`_. See also `Action: REPRESENTED_PORT`_. +Item: ``TX_QUEUE`` +^^^^^^^^^^^^^^^^^^^^^^^ + +Matches on the Tx queue of send packet . + +- ``tx_queue``: Tx queue. + Item: ``AGGR_AFFINITY`` ^^^^^^^^^^^^^^^^^^^^^^^ diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst index a9b1293689..bb04d99125 100644 --- a/doc/guides/rel_notes/release_23_07.rst +++ b/doc/guides/rel_notes/release_23_07.rst @@ -55,6 +55,11 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= + * **Added flow matching of tx queue.** + + Added ``RTE_FLOW_ITEM_TYPE_TX_QUEUE`` rte_flow pattern to match tx queue of + send packet. + Removed Items ------------- diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 8f23847859..29f7dd4428 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3779,6 +3779,10 @@ This section lists supported pattern items and their attributes, if any. - ``affinity {value}``: aggregated port (starts from 1). +- ``tx_queue``: match tx queue of send packet. + + - ``tx_queue {value}``: send queue value (starts from 0). + - ``send_to_kernel``: send packets to kernel. diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 69e6e749f7..f0d7f868fa 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -164,6 +164,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = { MK_FLOW_ITEM(IPV6_ROUTING_EXT, sizeof(struct rte_flow_item_ipv6_routing_ext)), MK_FLOW_ITEM(QUOTA, sizeof(struct rte_flow_item_quota)), MK_FLOW_ITEM(AGGR_AFFINITY, sizeof(struct rte_flow_item_aggr_affinity)), + MK_FLOW_ITEM(TX_QUEUE, sizeof(struct rte_flow_item_tx_queue)), }; /** Generate flow_action[] entry. */ diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..fe28ba0a82 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -672,8 +672,34 @@ enum rte_flow_item_type { * @see struct rte_flow_item_aggr_affinity. */ RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY, + /** + * Match Tx queue number. + * This is valid only for egress rules. + * + * @see struct rte_flow_item_tx_queue + */ + RTE_FLOW_ITEM_TYPE_TX_QUEUE, }; +/** + * RTE_FLOW_ITEM_TYPE_TX_QUEUE + * + * Tx queue number + * + * @see struct rte_flow_item_tx_queue + */ +struct rte_flow_item_tx_queue { + /** Tx queue number that packet is being transmitted */ + uint16_t tx_queue; +}; + +/** Default mask for RTE_FLOW_ITEM_TX_QUEUE. */ +#ifndef __cplusplus +static const struct rte_flow_item_tx_queue rte_flow_item_tx_queue_mask = { + .tx_queue = RTE_BE16(0xffff), +}; +#endif + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice. 
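As a rough illustration of the use case mentioned in the commit message above (different VLAN insertion/PCP per Tx queue), a minimal C sketch follows. It is not part of the patch: it assumes the port is already configured and started, that the PMD accepts the new TX_QUEUE item together with the OF_PUSH_VLAN / OF_SET_VLAN_PCP egress actions, and the queue number and PCP value are placeholders.

    /* Sketch: egress rule that pushes a VLAN tag with a given PCP on
     * packets sent from one Tx queue. Values are illustrative only.
     */
    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static struct rte_flow *
    tag_tx_queue_with_pcp(uint16_t port_id, uint16_t txq, uint8_t pcp,
                          struct rte_flow_error *error)
    {
        struct rte_flow_attr attr = { .egress = 1 }; /* item is egress only */
        struct rte_flow_item_tx_queue queue_spec = { .tx_queue = txq };
        struct rte_flow_item_tx_queue queue_mask = { .tx_queue = UINT16_MAX };
        struct rte_flow_item pattern[] = {
            {
                .type = RTE_FLOW_ITEM_TYPE_TX_QUEUE,
                .spec = &queue_spec,
                .mask = &queue_mask,
            },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_of_push_vlan push_vlan = {
            .ethertype = RTE_BE16(RTE_ETHER_TYPE_VLAN),
        };
        struct rte_flow_action_of_set_vlan_pcp set_pcp = { .vlan_pcp = pcp };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN, .conf = &push_vlan },
            { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP, .conf = &set_pcp },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
    }

The testpmd hunks above expose the same item from the CLI via the documented ``tx_queue`` / ``tx_queue_value`` tokens.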
From patchwork Thu May 18 21:48:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 127051 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1EE3F42B3D; Thu, 18 May 2023 23:49:40 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E7EAD41141; Thu, 18 May 2023 23:49:39 +0200 (CEST) Received: from NAM12-BN8-obe.outbound.protection.outlook.com (mail-bn8nam12on2054.outbound.protection.outlook.com [40.107.237.54]) by mails.dpdk.org (Postfix) with ESMTP id E7D3D40E25 for ; Thu, 18 May 2023 23:49:37 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=EsibfWckJTYdj2u5YeZ0y2A/zYJLeVDSeeUiUmgsU8pAS6TG7b/ITt3hCyFGgJkRwVKCwY5bOo5GQ3sRtI4NwiJ7CdtZRDY4wTtLETxvjifWhzEtpU0t8P6y2I9vmoxQN3ovURdffnUYO1hGV89GFyLjIjfr3DoGf2FVMncH61tu/ZfUR+L58zHd9IUDL+YkLEFH86/dvIHwjvS9yZ+L8punkKcldv0P2soTEcZ3S3WTf9EtkpEtCapZE4lLWp+lzSwfi133t0I9LruPE/NDDnY4Z9iVog0fN8wMd6QUvDNgGXmJb9wydIjZKfYq5yodz+PYEaJ0Mdx+mP6XvV6bcw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=B+qKt90oNp7KpBA9hZHl+iOT1c9rMnCOzxwl+V4rhA0=; b=BsH003+a0JZo9PqsnZqLp93mTfQGycOKUpR3QRgCozunEmU1SuVu66tJc7L1g2mE4mQ2y8BrpB4iqpQbLuNnxiAOkgQu2wsVEcZ4g62VtLg0xvSjhoxO2JtuQTL7TU/23GO3TKgfWwv+p2/Dqt2D1XaSDTEqnF23ghuMyiLojr+cs4RniXJJIEhUGZxv9DBr+eaKAP8XZsFbDACnb0Q+I8LTk2DkiIrgu02BOLe3fxXehcEbbE0QgXcuARbMbz7JsWPiKwzT8afCHV0MK7CVEEejbFZ5Q8X0k0xuVh8QG63WS6UhkV655fJXvv2XQPEvgi0J1jiMrcbKn4a3OdIMMw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.161) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=B+qKt90oNp7KpBA9hZHl+iOT1c9rMnCOzxwl+V4rhA0=; b=LHUz8/Y6Ig/cS+RWBM9JZsC0aOO6yAwA6ETfApCBrRAhtE6UQx3tPFv0RibsZyW0mG36GVR5M8YiviXDtmgKAoruXyGT6pnsTacLDmc6pDID1JCjqDiHDeVZcsY6q02jRYOZLnyF4yyDdqQDh9YkoWHb2j36N26lVLG2vqcGhdw3mvxdPhOwD6MuhTR/Ce3QF7eyoa5+wvJKv9OCsFlvvlVPzWDTH9+9/drBl2WTHrH58+l1MJjGHEjgaH+S9s1GgmCmz3ZwxTwuZVMqLrK4MXLI3waRewAXiGs7G9Tg/WI3QvEG3Y1pSpOkznfzXEe8R4kqHbD88IoT1evdfhIqQA== Received: from DS7PR03CA0304.namprd03.prod.outlook.com (2603:10b6:8:2b::16) by MN0PR12MB5977.namprd12.prod.outlook.com (2603:10b6:208:37c::22) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6387.30; Thu, 18 May 2023 21:49:33 +0000 Received: from DS1PEPF0000E646.namprd02.prod.outlook.com (2603:10b6:8:2b:cafe::15) by DS7PR03CA0304.outlook.office365.com (2603:10b6:8:2b::16) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.19 via Frontend Transport; Thu, 18 May 2023 21:49:32 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.117.161) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none 
header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.117.161 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.117.161; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.117.161) by DS1PEPF0000E646.mail.protection.outlook.com (10.167.18.36) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.11 via Frontend Transport; Thu, 18 May 2023 21:49:32 +0000 Received: from rnnvmail201.nvidia.com (10.129.68.8) by mail.nvidia.com (10.129.200.67) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.5; Thu, 18 May 2023 14:49:14 -0700 Received: from pegasus01.mtr.labs.mlnx (10.126.231.35) by rnnvmail201.nvidia.com (10.129.68.8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.37; Thu, 18 May 2023 14:49:12 -0700 From: Alexander Kozyrev To: CC: , , Subject: [PATCH v3] ethdev: add flow rule actions update API Date: Fri, 19 May 2023 00:48:52 +0300 Message-ID: <20230518214852.2364233-1-akozyrev@nvidia.com> X-Mailer: git-send-email 2.18.2 In-Reply-To: <20230518194943.2338558-1-akozyrev@nvidia.com> References: <20230518194943.2338558-1-akozyrev@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.126.231.35] X-ClientProxiedBy: rnnvmail202.nvidia.com (10.129.68.7) To rnnvmail201.nvidia.com (10.129.68.8) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DS1PEPF0000E646:EE_|MN0PR12MB5977:EE_ X-MS-Office365-Filtering-Correlation-Id: a98ffa5f-8a40-4451-bf34-08db57e9c377 X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: hg5BwRr1SL5Vnz0sUXvswQ0e4RXYNpZusUcZoO+WTaD6OkDzOsXMljeNBV9ZzALtmEjEVg+1nAb19kZD56hGcF4ixaJ3mZQG8m7QCRvIXNCht4C4scJgBVTVBfQ5V4a36fZU6YhONlqcOi1vBEDDIN6Fo8wDftu3pmTsmhdxJ1gYGI+9PLwopsGQw/9tCZVJub27sBN9PcG0gmVjVd7lBpYWOv4S33zU72v5duvg9alWLgcI6qFGI8BkSiEV8+aShMiSEdrZh2atYvf2Ji9cMZzXgl91f2xNilZpK9MJW8CqId4Qd7mCF8XE4/l0XhZ/crv0OF5AXdYStiObPVHqF6vOuObmHvvx9m75NH5+S1I544d5FtVmHEgZ2+UcT9YWeb9hSrnrV5s054dp96XgX7UTuzCrPtP/e56OFyw+fCI8zDp174HEBER7RRHJqSV/7WveMYhF+JL0ZYrFiKbpYR/B4FbnpwdQuoTYlArk+HrekHzzYPy5ixmwezqQcGzJSkTF0/MDDzETZ2Qgs1RRqNF+noag96Ewj7fOQVx5RYjb0pxA2UiRZrRQQXaIpJtyzufZE/Ul0pdhWo9pZ5IwmiF2ceOgY6jH02VRp0qluf8HUBBmi6cy2fCAIBMy10Wr/wJZ1Wb919ILQqywxvOfj7fZ08YdDR4IJIyPMm6FaZptFVHdUhsVzIIA0yHr/l1cD0Si6dsdGFoCXnYLugkEtrx35rPscImQPIhE5ZTjFBoN2DWXAxwV1jOo/KeTLRN+ X-Forefront-Antispam-Report: CIP:216.228.117.161; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:dc6edge2.nvidia.com; CAT:NONE; SFS:(13230028)(4636009)(346002)(396003)(376002)(136003)(39860400002)(451199021)(46966006)(36840700001)(40470700004)(82310400005)(83380400001)(426003)(336012)(47076005)(36860700001)(6916009)(316002)(82740400003)(2906002)(356005)(7636003)(41300700001)(4326008)(478600001)(5660300002)(70586007)(40480700001)(2616005)(26005)(70206006)(40460700003)(8676002)(54906003)(86362001)(16526019)(8936002)(30864003)(15650500001)(186003)(1076003)(36756003)(6666004); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2023 21:49:32.0208 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: a98ffa5f-8a40-4451-bf34-08db57e9c377 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: 
TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.161]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: DS1PEPF0000E646.namprd02.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR12MB5977 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Introduce the new rte_flow_update() API allowing users to update the action list in the already existing rule. Flow rules can be updated now without the need to destroy the rule first and create a new one instead. A single API call ensures that no packets are lost by guaranteeing atomicity and flow state correctness. The rte_flow_async_update() is added as well. The matcher is not updated, only the action list is. Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- doc/guides/prog_guide/rte_flow.rst | 42 +++++++++++++++++ doc/guides/rel_notes/release_23_07.rst | 4 ++ lib/ethdev/ethdev_trace.h | 29 ++++++++++++ lib/ethdev/ethdev_trace_points.c | 6 +++ lib/ethdev/rte_flow.c | 53 ++++++++++++++++++++++ lib/ethdev/rte_flow.h | 62 ++++++++++++++++++++++++++ lib/ethdev/rte_flow_driver.h | 16 +++++++ lib/ethdev/version.map | 2 + 8 files changed, 214 insertions(+) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..0930accfea 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -3446,6 +3446,31 @@ Return values: - 0 on success, a negative errno value otherwise and ``rte_errno`` is set. +Update +~~~~~~ + +Update an existing flow rule with a new set of actions. + +.. code-block:: c + + struct rte_flow * + rte_flow_update(uint16_t port_id, + struct rte_flow *flow, + const struct rte_flow_action *actions[], + struct rte_flow_error *error); + +Arguments: + +- ``port_id``: port identifier of Ethernet device. +- ``flow``: flow rule handle to update. +- ``actions``: associated actions (list terminated by the END action). +- ``error``: perform verbose error reporting if not NULL. PMDs initialize + this structure in case of error only. + +Return values: + +- 0 on success, a negative errno value otherwise and ``rte_errno`` is set. + Flush ~~~~~ @@ -3795,6 +3820,23 @@ Enqueueing a flow rule destruction operation is similar to simple destruction. void *user_data, struct rte_flow_error *error); +Enqueue update operation +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Enqueueing a flow rule update operation to replace actions in the existing rule. + +.. code-block:: c + + int + rte_flow_async_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + void *user_data, + struct rte_flow_error *error); + Enqueue indirect action creation operation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst index a9b1293689..94e9f8b3ae 100644 --- a/doc/guides/rel_notes/release_23_07.rst +++ b/doc/guides/rel_notes/release_23_07.rst @@ -55,6 +55,10 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= + * **Added flow rule update to the Flow API.** + + * Added API for updating the action list in the already existing rule. 
+ Introduced both rte_flow_update() and rte_flow_async_update() functions. Removed Items ------------- diff --git a/lib/ethdev/ethdev_trace.h b/lib/ethdev/ethdev_trace.h index 3dc7d028b8..ba7871aa3e 100644 --- a/lib/ethdev/ethdev_trace.h +++ b/lib/ethdev/ethdev_trace.h @@ -2220,6 +2220,17 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_int(ret); ) +/* Called in loop in app/test-flow-perf */ +RTE_TRACE_POINT_FP( + rte_flow_trace_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, const struct rte_flow *flow, + const struct rte_flow_action *actions, int ret), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(flow); + rte_trace_point_emit_ptr(actions); + rte_trace_point_emit_int(ret); +) + RTE_TRACE_POINT_FP( rte_flow_trace_query, RTE_TRACE_POINT_ARGS(uint16_t port_id, const struct rte_flow *flow, @@ -2345,6 +2356,24 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_ptr(flow); ) +RTE_TRACE_POINT_FP( + rte_flow_trace_async_update, + RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + const struct rte_flow *flow, + const struct rte_flow_action *actions, + uint8_t actions_template_index, + const void *user_data, int ret), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(queue_id); + rte_trace_point_emit_ptr(op_attr); + rte_trace_point_emit_ptr(flow); + rte_trace_point_emit_ptr(actions); + rte_trace_point_emit_u8(actions_template_index); + rte_trace_point_emit_ptr(user_data); + rte_trace_point_emit_int(ret); +) + RTE_TRACE_POINT_FP( rte_flow_trace_pull, RTE_TRACE_POINT_ARGS(uint16_t port_id, uint32_t queue_id, diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c index 61010cae56..f7bc6f342b 100644 --- a/lib/ethdev/ethdev_trace_points.c +++ b/lib/ethdev/ethdev_trace_points.c @@ -490,6 +490,9 @@ RTE_TRACE_POINT_REGISTER(rte_flow_trace_create, RTE_TRACE_POINT_REGISTER(rte_flow_trace_destroy, lib.ethdev.flow.destroy) +RTE_TRACE_POINT_REGISTER(rte_flow_trace_update, + lib.ethdev.flow.update) + RTE_TRACE_POINT_REGISTER(rte_flow_trace_flush, lib.ethdev.flow.flush) @@ -580,6 +583,9 @@ RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_create, RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_destroy, lib.ethdev.flow.async_destroy) +RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_update, + lib.ethdev.flow.async_update) + RTE_TRACE_POINT_REGISTER(rte_flow_trace_push, lib.ethdev.flow.push) diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 69e6e749f7..baef9e92ea 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -441,6 +441,32 @@ rte_flow_destroy(uint16_t port_id, NULL, rte_strerror(ENOSYS)); } +int +rte_flow_update(uint16_t port_id, + struct rte_flow *flow, + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->update)) { + fts_enter(dev); + ret = ops->update(dev, flow, actions, error); + fts_exit(dev); + + rte_flow_trace_update(port_id, flow, actions, ret); + + return flow_err(port_id, ret, error); + } + return rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOSYS)); +} + /* Destroy all flow rules associated with a port. 
*/ int rte_flow_flush(uint16_t port_id, @@ -1985,6 +2011,33 @@ rte_flow_async_destroy(uint16_t port_id, return ret; } +int +rte_flow_async_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + void *user_data, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + ret = flow_err(port_id, + ops->async_update(dev, queue_id, op_attr, flow, + actions, actions_template_index, + user_data, error), + error); + + rte_flow_trace_async_update(port_id, queue_id, op_attr, flow, + actions, actions_template_index, + user_data, ret); + + return ret; +} + int rte_flow_push(uint16_t port_id, uint32_t queue_id, diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..79bfc07a1c 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4343,6 +4343,29 @@ rte_flow_destroy(uint16_t port_id, struct rte_flow *flow, struct rte_flow_error *error); +/** + * Update a flow rule with new actions on a given port. + * + * @param port_id + * Port identifier of Ethernet device. + * @param flow + * Flow rule handle to update. + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_update(uint16_t port_id, + struct rte_flow *flow, + const struct rte_flow_action actions[], + struct rte_flow_error *error); + /** * Destroy all flow rules associated with a port. * @@ -5770,6 +5793,45 @@ rte_flow_async_destroy(uint16_t port_id, void *user_data, struct rte_flow_error *error); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue rule update operation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue used to insert the rule. + * @param[in] op_attr + * Rule creation operation attributes. + * @param[in] flow + * Flow rule to be updated. + * @param[in] actions + * List of actions to be used. + * The list order should match the order in the actions template. + * @param[in] actions_template_index + * Actions template index in the table. + * @param[in] user_data + * The user data that will be returned on the completion events. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_async_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + void *user_data, + struct rte_flow_error *error); + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice. diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index a129a4605d..193b09a7d3 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -302,6 +302,22 @@ struct rte_flow_ops { const void *update, void *query, enum rte_flow_query_update_mode qu_mode, void *user_data, struct rte_flow_error *error); + /** See rte_flow_update(). 
*/ + int (*update) + (struct rte_eth_dev *dev, + struct rte_flow *flow, + const struct rte_flow_action actions[], + struct rte_flow_error *error); + /** See rte_flow_async_update() */ + int (*async_update) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + void *user_data, + struct rte_flow_error *error); }; /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 357d1a88c0..d4f49cb918 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -299,6 +299,8 @@ EXPERIMENTAL { rte_flow_action_handle_query_update; rte_flow_async_action_handle_query_update; rte_flow_async_create_by_index; + rte_flow_update; + rte_flow_async_update; }; INTERNAL { From patchwork Fri May 19 11:59:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gregory Etelson X-Patchwork-Id: 127112 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 16C2042B49; Fri, 19 May 2023 14:00:49 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 08AFC4161A; Fri, 19 May 2023 14:00:49 +0200 (CEST) Received: from NAM11-BN8-obe.outbound.protection.outlook.com (mail-bn8nam11on2041.outbound.protection.outlook.com [40.107.236.41]) by mails.dpdk.org (Postfix) with ESMTP id B6D4541148 for ; Fri, 19 May 2023 14:00:46 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=CFYDDYcKroD+aF0t3m6R3643+wr/KzIhF0zPHSM/eYyMzKRkUOuSQPZUNEU4u07zjAq8GgrY+A3BS1G9JsAAqGPaMIvv8Hk4BMvYKTfq3PNxi+6cvpwCnZJnOEVbap2OmlZObSBY9lwoSyuHOOHlwn+H9KVV7PSPWH5sSqaxLUcSGfNPQ/e783voE0POglVTvOPHQd2E1hriLjnSdRhM0F/l4g8GsY81MeYUrCHdoJbGkNqYAwO2Its7MfPltAoc2w1D9OK8cFpN74GfOSOrLNdl+4rujuc6HOIe+gnpmM4HiCp+5Svk5MzFGm1olTvP6x8cGbDnmpYGUL/ktpkbYQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=QvSu+sVVO6xS6LQawNw2TNBIJH7bee7jy8/0Hyov7Bw=; b=OC7DD+nYUiPv4eMEGhKdfJ2OGRPFRlqlLGQeolDo7qrrh2sLqFVU9Hp0xsdHjUr12wKbNooO7bWJOYFg6lOViW+tvQEZ5jR2v8B11WmJe5pzR/sKWitBm3IG0d+joTPocIXFCRTs1MbmT4D91j7X3nDMJpdj093w8vWSorDghu5tBa2iEZ5LiZFqF0F/MEefpjQu8yf2yOv2gmql1dLdHcrNaPCUOrT/l4Nq1ZGp1mFG+64zFNrunJYhwEd6R0ctB46dtB2gT6jVkVBUlGe3SY6NCPVBqDTvYkJfsXoK/phUujLHX3UpJ/fKmg0vI2nwgVeEnUr9/B3W5VMxUDX1qQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.117.160) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=QvSu+sVVO6xS6LQawNw2TNBIJH7bee7jy8/0Hyov7Bw=; 
b=VxxgUoUgUiPpFI6upGETTbRWvHG4paU8ceK7K40IiXPOFMAa50urV0qzDX6dYAdOUiRynorg6Peew/nk1mcie0Rb4QeOWS/h27a6ik66YG5olxXUrVeb/FT/gJn+G2qANE02SiGFmcVIbQulRYgdKQ3IqyEhn4hBHuhhLYj+MeDl4QRBh/HN1Aje064bEkbRMWcfiQqYsjzpw+R8g1KcrZuHgnY9HChGLmLk2ZGOJWuhe32Obegyq8Ax20DWuEH2AforilJoH/FAMkAyqx0YBIEvbIbFlnBgg6cWZfCZT5x/4OXea2LnIeS7ixwaickFg/ZztElm2Y/o3wypKlx4eg== Received: from MN2PR05CA0013.namprd05.prod.outlook.com (2603:10b6:208:c0::26) by BN9PR12MB5132.namprd12.prod.outlook.com (2603:10b6:408:119::7) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.21; Fri, 19 May 2023 12:00:43 +0000 Received: from BL02EPF000145B9.namprd05.prod.outlook.com (2603:10b6:208:c0:cafe::b4) by MN2PR05CA0013.outlook.office365.com (2603:10b6:208:c0::26) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6411.14 via Frontend Transport; Fri, 19 May 2023 12:00:43 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.117.160) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.117.160 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.117.160; helo=mail.nvidia.com; pr=C Received: from mail.nvidia.com (216.228.117.160) by BL02EPF000145B9.mail.protection.outlook.com (10.167.241.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6433.8 via Frontend Transport; Fri, 19 May 2023 12:00:42 +0000 Received: from rnnvmail201.nvidia.com (10.129.68.8) by mail.nvidia.com (10.129.200.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.5; Fri, 19 May 2023 05:00:18 -0700 Received: from nvidia.com (10.126.230.35) by rnnvmail201.nvidia.com (10.129.68.8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.37; Fri, 19 May 2023 05:00:14 -0700 From: Gregory Etelson To: CC: , , , "Ori Kam" , Aman Singh , Yuying Zhang , Ferruh Yigit , "Thomas Monjalon" , Andrew Rybchenko Subject: [PATCH v4] ethdev: add indirect list flow action Date: Fri, 19 May 2023 14:59:47 +0300 Message-ID: <20230519115947.331304-1-getelson@nvidia.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230418172144.24365-1-getelson@nvidia.com> References: <20230418172144.24365-1-getelson@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.126.230.35] X-ClientProxiedBy: rnnvmail203.nvidia.com (10.129.68.9) To rnnvmail201.nvidia.com (10.129.68.8) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: BL02EPF000145B9:EE_|BN9PR12MB5132:EE_ X-MS-Office365-Filtering-Correlation-Id: 9e85ec81-c26a-43ed-5729-08db5860ac0c X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 
7ejbSxaMhchDEBW0pSt/t0LH6dtyRN5xCb08GTNii/EoMS7X8pRwFfNoRdWB/+A7gj62r00G/azXnSIXJ7si0b754kroAv2VKsyUX3pzeK1rmRCy7WTApD7oOMcAf/BfTd6TjsQsmrt6LKUO6rGjjTbHPXVLonuUEOVzO1tT6TsXAK+du07vEZH85li6Uic/I8/soPG/PYk9wHxzEp5cPxhqtG2eWoxrRf1qBmoBHPHOvYQcLiEQDU3jBJN02iTl6F61UFjfCwPyVlu+DHdc7cGNuPcWuCQPAff+oZm0LNlEWb9sRuUuQQklps9TAvZPWS/Tgqb5slI2UyPDDN25PgC6CEpvWkHROsFGNrbMVtkDAjLPSs+VIgeQuKWeTDW5f9sve2Vy+IpxZHqjrvTb2q72VyRSY1Idv5VO76psW1XLrwK2jzwtiJU00JAoHWceLAsRO7rg8umR+1KwVW/bwymWAyIPq1AIm1b7hqAfiTe3KmQWVSeUQpP46gvlrREO/I5ClJU/pljyplR1E9OEkOkjAP0S044j430hg6fDlCf60gronJL3qB3aiZKtMQoOTq96GHfV0jk7GAr0qt+g7ExYVgae747pJT3H1LW9g0ciSkxp01TozB4+yj7QZjF+I3E14IMx07U+j/MZP0X03PIIuhGIRpl/seRBEvTbJ/WgHlQM18O+OsglZMgoVifFQuw+d7NhCB6JCAS3bpOODfV79DAq/DrwewUX38VPnka5hg1M5M/tg1uIqIbjDo+v X-Forefront-Antispam-Report: CIP:216.228.117.160; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:dc6edge1.nvidia.com; CAT:NONE; SFS:(13230028)(4636009)(376002)(39860400002)(396003)(136003)(346002)(451199021)(40470700004)(46966006)(36840700001)(54906003)(30864003)(478600001)(316002)(70206006)(6916009)(186003)(6666004)(70586007)(2906002)(4326008)(8676002)(5660300002)(8936002)(26005)(1076003)(16526019)(6286002)(2616005)(47076005)(336012)(426003)(83380400001)(36860700001)(7696005)(41300700001)(55016003)(356005)(82740400003)(7636003)(82310400005)(40460700003)(86362001)(40480700001)(36756003)(579004)(559001); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2023 12:00:42.7776 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 9e85ec81-c26a-43ed-5729-08db5860ac0c X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.160]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BL02EPF000145B9.namprd05.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN9PR12MB5132 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Indirect API creates a shared flow action with unique action handle. Flow rules can access the shared flow action and resources related to that action through the indirect action handle. In addition, the API allows to update existing shared flow action configuration. After the update completes, new action configuration is available to all flows that reference that shared action. Indirect actions list expands the indirect action API: • Indirect action list creates a handle for one or several flow actions, while legacy indirect action handle references single action only. Input flow actions arranged in END terminated list. • Flow rule can provide rule specific configuration parameters to existing shared handle. Updates of flow rule specific configuration will not change the base action configuration. Base action configuration was set during the action creation. Indirect action list handle defines 2 types of resources: • Mutable handle resource can be changed during handle lifespan. • Immutable handle resource value is set during handle creation and cannot be changed. There are 2 types of mutable indirect handle contexts: • Action mutable context is always shared between all flows that referenced indirect actions list handle. 
Action mutable context can be changed by explicit invocation of indirect handle update function. • Flow mutable context is private to a flow. Flow mutable context can be updated by indirect list handle flow rule configuration. flow 1: / indirect handle H conf C1 / | | | | | | flow 2: | | / indirect handle H conf C2 / | | | | | | | | | | | | ========================================================= ^ | | | | | | V | V | ~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~ | flow mutable flow mutable | context 1 context 2 | ~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~ indirect | | | action | | | context | V V | ----------------------------------------------------- | action mutable context | ----------------------------------------------------- v action immutable context ========================================================= Indirect action types - immutable, action / flow mutable, are mutually exclusive and depend on the action definition. For example: • Indirect METER_MARK policy is immutable action member and profile is action mutable action member. • Indirect METER_MARK flow action defines init_color as flow mutable member. • Indirect QUOTA flow action does not define flow mutable members. If indirect list handle was created from a list of actions A1 / A2 ... An / END indirect list flow action can update Ai flow mutable context in the action configuration parameter. Indirect list action configuration is and array [C1, C2, .., Cn] where Ci corresponds to Ai in the action handle source. Ci configuration element points Ai flow mutable update, or it's NULL if Ai has no flow mutable update. Indirect list action configuration can be NULL if the action has no flow mutable updates. Template API: Action template format: template .. indirect_list handle Htmpl conf Ctmpl .. mask .. indirect_list handle Hmask conf Cmask .. 1 If Htmpl was masked (Hmask != 0), it will be fixed in that template. Otherwise, indirect action value is set in a flow rule. 2 If Htmpl and Ctmpl[i] were masked (Hmask !=0 and Cmask[i] != 0), Htmpl's Ai action flow mutable context fill be updated to Ctmpl[i] values and will be fixed in that template. Flow rule format: actions .. indirect_list handle Hflow conf Cflow .. 3 If Htmpl was not masked in actions template, Hflow references an action of the same type as Htmpl. 4 Cflow[i] updates handle's Ai flow mutable configuration if the Ci was not masked in action template. 
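To make the handle/configuration split described above more concrete, a minimal C sketch follows. It is not part of the patch: it uses a single COUNT action as the list member, passes a NULL per-flow configuration (i.e. no flow mutable updates), assumes a PMD that accepts INDIRECT_LIST through the synchronous rte_flow_create() path, and treats port_id and the pattern as placeholders.

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static int
    indirect_list_example(uint16_t port_id, struct rte_flow_error *error)
    {
        /* Base (immutable / action mutable) configuration is fixed when the
         * handle is created; the list here holds a single COUNT action.
         */
        const struct rte_flow_indir_action_conf indir_conf = { .ingress = 1 };
        const struct rte_flow_action_count count_conf = { 0 };
        const struct rte_flow_action list_actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count_conf },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_action_list_handle *handle;
        struct rte_flow *flow;

        handle = rte_flow_action_list_handle_create(port_id, &indir_conf,
                                                    list_actions, error);
        if (handle == NULL)
            return -1;

        /* Reference the shared list from a rule; conf == NULL means this
         * flow carries no flow mutable updates for the list members.
         */
        struct rte_flow_action_indirect_list indlst = {
            .handle = handle,
            .conf = NULL,
        };
        const struct rte_flow_attr attr = { .ingress = 1 };
        const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST, .conf = &indlst },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        flow = rte_flow_create(port_id, &attr, pattern, actions, error);
        if (flow == NULL) {
            rte_flow_action_list_handle_destroy(port_id, handle, error);
            return -1;
        }
        return 0;
    }

The testpmd changes below add the corresponding ``indirect_list`` tokens so the same flow can be exercised from the CLI.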
Signed-off-by: Gregory Etelson --- app/test-pmd/cmdline_flow.c | 207 ++++++++++++++++++- app/test-pmd/config.c | 163 +++++++++++---- app/test-pmd/testpmd.h | 9 +- doc/guides/nics/features/default.ini | 1 + doc/guides/prog_guide/rte_flow.rst | 119 +++++++++++ doc/guides/rel_notes/release_23_07.rst | 2 + lib/ethdev/ethdev_trace.h | 88 ++++++++ lib/ethdev/ethdev_trace_points.c | 18 ++ lib/ethdev/rte_flow.c | 171 ++++++++++++++++ lib/ethdev/rte_flow.h | 269 +++++++++++++++++++++++++ lib/ethdev/rte_flow_driver.h | 41 ++++ lib/ethdev/version.map | 8 + 12 files changed, 1054 insertions(+), 42 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 58939ec321..4663b6217f 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -145,6 +145,7 @@ enum index { /* Queue indirect action arguments */ QUEUE_INDIRECT_ACTION_CREATE, + QUEUE_INDIRECT_ACTION_LIST_CREATE, QUEUE_INDIRECT_ACTION_UPDATE, QUEUE_INDIRECT_ACTION_DESTROY, QUEUE_INDIRECT_ACTION_QUERY, @@ -157,6 +158,7 @@ enum index { QUEUE_INDIRECT_ACTION_TRANSFER, QUEUE_INDIRECT_ACTION_CREATE_POSTPONE, QUEUE_INDIRECT_ACTION_SPEC, + QUEUE_INDIRECT_ACTION_LIST, /* Queue indirect action update arguments */ QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE, @@ -242,6 +244,8 @@ enum index { /* Indirect action arguments */ INDIRECT_ACTION_CREATE, + INDIRECT_ACTION_LIST_CREATE, + INDIRECT_ACTION_FLOW_CONF_CREATE, INDIRECT_ACTION_UPDATE, INDIRECT_ACTION_DESTROY, INDIRECT_ACTION_QUERY, @@ -253,6 +257,8 @@ enum index { INDIRECT_ACTION_EGRESS, INDIRECT_ACTION_TRANSFER, INDIRECT_ACTION_SPEC, + INDIRECT_ACTION_LIST, + INDIRECT_ACTION_FLOW_CONF, /* Indirect action destroy arguments */ INDIRECT_ACTION_DESTROY_ID, @@ -626,6 +632,11 @@ enum index { ACTION_SAMPLE_INDEX, ACTION_SAMPLE_INDEX_VALUE, ACTION_INDIRECT, + ACTION_INDIRECT_LIST, + ACTION_INDIRECT_LIST_HANDLE, + ACTION_INDIRECT_LIST_CONF, + INDIRECT_LIST_ACTION_ID2PTR_HANDLE, + INDIRECT_LIST_ACTION_ID2PTR_CONF, ACTION_SHARED_INDIRECT, INDIRECT_ACTION_PORT, INDIRECT_ACTION_ID2PTR, @@ -1266,6 +1277,7 @@ static const enum index next_qia_create_attr[] = { QUEUE_INDIRECT_ACTION_TRANSFER, QUEUE_INDIRECT_ACTION_CREATE_POSTPONE, QUEUE_INDIRECT_ACTION_SPEC, + QUEUE_INDIRECT_ACTION_LIST, ZERO, }; @@ -1294,6 +1306,8 @@ static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_EGRESS, INDIRECT_ACTION_TRANSFER, INDIRECT_ACTION_SPEC, + INDIRECT_ACTION_LIST, + INDIRECT_ACTION_FLOW_CONF, ZERO, }; @@ -1303,6 +1317,13 @@ static const enum index next_ia[] = { ZERO }; +static const enum index next_ial[] = { + ACTION_INDIRECT_LIST_HANDLE, + ACTION_INDIRECT_LIST_CONF, + ACTION_NEXT, + ZERO +}; + static const enum index next_qia_qu_attr[] = { QUEUE_INDIRECT_ACTION_QU_MODE, QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE, @@ -2013,6 +2034,7 @@ static const enum index next_action[] = { ACTION_AGE_UPDATE, ACTION_SAMPLE, ACTION_INDIRECT, + ACTION_INDIRECT_LIST, ACTION_SHARED_INDIRECT, ACTION_MODIFY_FIELD, ACTION_CONNTRACK, @@ -2289,6 +2311,7 @@ static const enum index next_action_sample[] = { ACTION_RAW_ENCAP, ACTION_VXLAN_ENCAP, ACTION_NVGRE_ENCAP, + ACTION_REPRESENTED_PORT, ACTION_NEXT, ZERO, }; @@ -2539,6 +2562,10 @@ static int parse_ia_destroy(struct context *ctx, const struct token *token, static int parse_ia_id2ptr(struct context *ctx, const struct token *token, const char *str, unsigned int len, void *buf, unsigned int size); + +static int parse_indlst_id2ptr(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); static int 
parse_ia_port(struct context *ctx, const struct token *token, const char *str, unsigned int len, void *buf, unsigned int size); @@ -2627,6 +2654,16 @@ static int comp_qu_mode_name(struct context *ctx, const struct token *token, unsigned int ent, char *buf, unsigned int size); +struct indlst_conf { + uint32_t id; + uint32_t conf_num; + struct rte_flow_action *actions; + const void **conf; + SLIST_ENTRY(indlst_conf) next; +}; + +static const struct indlst_conf *indirect_action_list_conf_get(uint32_t conf_id); + /** Token definitions. */ static const struct token token_list[] = { /* Special tokens. */ @@ -3426,6 +3463,12 @@ static const struct token token_list[] = { .help = "specify action to create indirect handle", .next = NEXT(next_action), }, + [QUEUE_INDIRECT_ACTION_LIST] = { + .name = "list", + .help = "specify actions for indirect handle list", + .next = NEXT(NEXT_ENTRY(ACTIONS, END)), + .call = parse_qia, + }, /* Top-level command. */ [PUSH] = { .name = "push", @@ -6775,6 +6818,37 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY_ARB(0, sizeof(uint32_t))), .call = parse_vc, }, + [ACTION_INDIRECT_LIST] = { + .name = "indirect_list", + .help = "apply indirect list action by id", + .priv = PRIV_ACTION(INDIRECT_LIST, + sizeof(struct + rte_flow_action_indirect_list)), + .next = NEXT(next_ial), + .call = parse_vc, + }, + [ACTION_INDIRECT_LIST_HANDLE] = { + .name = "handle", + .help = "indirect list handle", + .next = NEXT(next_ial, NEXT_ENTRY(INDIRECT_LIST_ACTION_ID2PTR_HANDLE)), + .args = ARGS(ARGS_ENTRY_ARB(0, sizeof(uintptr_t))), + }, + [ACTION_INDIRECT_LIST_CONF] = { + .name = "conf", + .help = "indirect list configuration", + .next = NEXT(next_ial, NEXT_ENTRY(INDIRECT_LIST_ACTION_ID2PTR_CONF)), + .args = ARGS(ARGS_ENTRY_ARB(0, sizeof(uintptr_t))), + }, + [INDIRECT_LIST_ACTION_ID2PTR_HANDLE] = { + .type = "UNSIGNED", + .help = "unsigned integer value", + .call = parse_indlst_id2ptr, + }, + [INDIRECT_LIST_ACTION_ID2PTR_CONF] = { + .type = "UNSIGNED", + .help = "unsigned integer value", + .call = parse_indlst_id2ptr, + }, [ACTION_SHARED_INDIRECT] = { .name = "shared_indirect", .help = "apply indirect action by id and port", @@ -6823,6 +6897,18 @@ static const struct token token_list[] = { .help = "specify action to create indirect handle", .next = NEXT(next_action), }, + [INDIRECT_ACTION_LIST] = { + .name = "list", + .help = "specify actions for indirect handle list", + .next = NEXT(NEXT_ENTRY(ACTIONS, END)), + .call = parse_ia, + }, + [INDIRECT_ACTION_FLOW_CONF] = { + .name = "flow_conf", + .help = "specify actions configuration for indirect handle list", + .next = NEXT(NEXT_ENTRY(ACTIONS, END)), + .call = parse_ia, + }, [ACTION_POL_G] = { .name = "g_actions", .help = "submit a list of associated actions for green", @@ -7181,6 +7267,12 @@ parse_ia(struct context *ctx, const struct token *token, return len; case INDIRECT_ACTION_QU_MODE: return len; + case INDIRECT_ACTION_LIST: + out->command = INDIRECT_ACTION_LIST_CREATE; + return len; + case INDIRECT_ACTION_FLOW_CONF: + out->command = INDIRECT_ACTION_FLOW_CONF_CREATE; + return len; default: return -1; } @@ -7278,6 +7370,9 @@ parse_qia(struct context *ctx, const struct token *token, return len; case QUEUE_INDIRECT_ACTION_QU_MODE: return len; + case QUEUE_INDIRECT_ACTION_LIST: + out->command = QUEUE_INDIRECT_ACTION_LIST_CREATE; + return len; default: return -1; } @@ -7454,10 +7549,12 @@ parse_vc(struct context *ctx, const struct token *token, return -1; break; case ACTIONS: - out->args.vc.actions = + 
out->args.vc.actions = out->args.vc.pattern ? (void *)RTE_ALIGN_CEIL((uintptr_t) (out->args.vc.pattern + out->args.vc.pattern_n), + sizeof(double)) : + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), sizeof(double)); ctx->object = out->args.vc.actions; ctx->objmask = NULL; @@ -10412,6 +10509,49 @@ parse_ia_id2ptr(struct context *ctx, const struct token *token, return ret; } +static int +parse_indlst_id2ptr(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void __rte_unused *buf, unsigned int __rte_unused size) +{ + struct rte_flow_action *action = ctx->object; + struct rte_flow_action_indirect_list *action_conf; + const struct indlst_conf *indlst_conf; + uint32_t id; + int ret; + + if (!action) + return -1; + ctx->objdata = 0; + ctx->object = &id; + ctx->objmask = NULL; + ret = parse_int(ctx, token, str, len, ctx->object, sizeof(id)); + if (ret != (int)len) + return ret; + ctx->object = action; + action_conf = (void *)(uintptr_t)action->conf; + action_conf->conf = NULL; + switch (ctx->curr) { + case INDIRECT_LIST_ACTION_ID2PTR_HANDLE: + action_conf->handle = (typeof(action_conf->handle)) + port_action_handle_get_by_id(ctx->port, id); + if (!action_conf->handle) { + printf("no indirect list handle for id %u\n", id); + return -1; + } + break; + case INDIRECT_LIST_ACTION_ID2PTR_CONF: + indlst_conf = indirect_action_list_conf_get(id); + if (!indlst_conf) + return -1; + action_conf->conf = (const void **)indlst_conf->conf; + break; + default: + break; + } + return ret; +} + static int parse_meter_profile_id2ptr(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -11453,6 +11593,64 @@ cmd_flow_tok(cmdline_parse_token_hdr_t **hdr, *hdr = &cmd_flow_token_hdr; } +static SLIST_HEAD(, indlst_conf) indlst_conf_head = + SLIST_HEAD_INITIALIZER(); + +static void +indirect_action_flow_conf_create(const struct buffer *in) +{ + int len, ret; + uint32_t i; + struct indlst_conf *indlst_conf = NULL; + size_t base = RTE_ALIGN(sizeof(*indlst_conf), 8); + struct rte_flow_action *src = in->args.vc.actions; + + if (!in->args.vc.actions_n) + goto end; + len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, src, NULL); + if (len <= 0) + goto end; + len = RTE_ALIGN(len, 16); + + indlst_conf = calloc(1, base + len + + in->args.vc.actions_n * sizeof(uintptr_t)); + if (!indlst_conf) + goto end; + indlst_conf->id = in->args.vc.attr.group; + indlst_conf->conf_num = in->args.vc.actions_n - 1; + indlst_conf->actions = RTE_PTR_ADD(indlst_conf, base); + ret = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, indlst_conf->actions, + len, src, NULL); + if (ret <= 0) { + free(indlst_conf); + indlst_conf = NULL; + goto end; + } + indlst_conf->conf = RTE_PTR_ADD(indlst_conf, base + len); + for (i = 0; i < indlst_conf->conf_num; i++) + indlst_conf->conf[i] = indlst_conf->actions[i].conf; + SLIST_INSERT_HEAD(&indlst_conf_head, indlst_conf, next); +end: + if (indlst_conf) + printf("created indirect action list configuration %u\n", + in->args.vc.attr.group); + else + printf("cannot create indirect action list configuration %u\n", + in->args.vc.attr.group); +} + +static const struct indlst_conf * +indirect_action_list_conf_get(uint32_t conf_id) +{ + const struct indlst_conf *conf; + + SLIST_FOREACH(conf, &indlst_conf_head, next) { + if (conf->id == conf_id) + return conf; + } + return NULL; +} + /** Dispatch parsed buffer to function calls. 
*/ static void cmd_flow_parsed(const struct buffer *in) @@ -11532,6 +11730,7 @@ cmd_flow_parsed(const struct buffer *in) in->args.aged.destroy); break; case QUEUE_INDIRECT_ACTION_CREATE: + case QUEUE_INDIRECT_ACTION_LIST_CREATE: port_queue_action_handle_create( in->port, in->queue, in->postpone, in->args.vc.attr.group, @@ -11567,8 +11766,10 @@ cmd_flow_parsed(const struct buffer *in) in->args.vc.actions); break; case INDIRECT_ACTION_CREATE: + case INDIRECT_ACTION_LIST_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, + in->command == INDIRECT_ACTION_LIST_CREATE, &((const struct rte_flow_indir_action_conf) { .ingress = in->args.vc.attr.ingress, .egress = in->args.vc.attr.egress, @@ -11576,6 +11777,9 @@ cmd_flow_parsed(const struct buffer *in) }), in->args.vc.actions); break; + case INDIRECT_ACTION_FLOW_CONF_CREATE: + indirect_action_flow_conf_create(in); + break; case INDIRECT_ACTION_DESTROY: port_action_handle_destroy(in->port, in->args.ia_destroy.action_id_n, @@ -11653,6 +11857,7 @@ cmd_flow_parsed(const struct buffer *in) default: break; } + fflush(stdout); } /** Token generator and output processing callback (cmdline API). */ diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 096c218c12..f35088a6e9 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1764,19 +1764,13 @@ port_flow_configure(portid_t port_id, return 0; } -/** Create indirect action */ -int -port_action_handle_create(portid_t port_id, uint32_t id, - const struct rte_flow_indir_action_conf *conf, - const struct rte_flow_action *action) +static int +action_handle_create(portid_t port_id, + struct port_indirect_action *pia, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) { - struct port_indirect_action *pia; - int ret; - struct rte_flow_error error; - - ret = action_alloc(port_id, id, &pia); - if (ret) - return ret; if (action->type == RTE_FLOW_ACTION_TYPE_AGE) { struct rte_flow_action_age *age = (struct rte_flow_action_age *)(uintptr_t)(action->conf); @@ -1785,20 +1779,52 @@ port_action_handle_create(portid_t port_id, uint32_t id, age->context = &pia->age_type; } else if (action->type == RTE_FLOW_ACTION_TYPE_CONNTRACK) { struct rte_flow_action_conntrack *ct = - (struct rte_flow_action_conntrack *)(uintptr_t)(action->conf); + (struct rte_flow_action_conntrack *)(uintptr_t)(action->conf); memcpy(ct, &conntrack_context, sizeof(*ct)); } + pia->type = action->type; + pia->handle = rte_flow_action_handle_create(port_id, conf, action, + error); + return pia->handle ? 0 : -1; +} + +static int +action_list_handle_create(portid_t port_id, + struct port_indirect_action *pia, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + struct rte_flow_error *error) +{ + pia->type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST; + pia->list_handle = + rte_flow_action_list_handle_create(port_id, conf, + actions, error); + return pia->list_handle ? 0 : -1; +} +/** Create indirect action */ +int +port_action_handle_create(portid_t port_id, uint32_t id, bool indirect_list, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action) +{ + struct port_indirect_action *pia; + int ret; + struct rte_flow_error error; + + ret = action_alloc(port_id, id, &pia); + if (ret) + return ret; /* Poisoning to make sure PMDs update it in case of error. 
*/ memset(&error, 0x22, sizeof(error)); - pia->handle = rte_flow_action_handle_create(port_id, conf, action, - &error); - if (!pia->handle) { + ret = indirect_list ? + action_list_handle_create(port_id, pia, conf, action, &error) : + action_handle_create(port_id, pia, conf, action, &error); + if (ret) { uint32_t destroy_id = pia->id; port_action_handle_destroy(port_id, 1, &destroy_id); return port_flow_complain(&error); } - pia->type = action->type; printf("Indirect action #%u created\n", pia->id); return 0; } @@ -1833,10 +1859,17 @@ port_action_handle_destroy(portid_t port_id, */ memset(&error, 0x33, sizeof(error)); - if (pia->handle && rte_flow_action_handle_destroy( - port_id, pia->handle, &error)) { - ret = port_flow_complain(&error); - continue; + if (pia->handle) { + ret = pia->type == + RTE_FLOW_ACTION_TYPE_INDIRECT_LIST ? + rte_flow_action_list_handle_destroy + (port_id, pia->list_handle, &error) : + rte_flow_action_handle_destroy + (port_id, pia->handle, &error); + if (ret) { + ret = port_flow_complain(&error); + continue; + } } *tmp = pia->next; printf("Indirect action #%u destroyed\n", pia->id); @@ -1867,11 +1900,18 @@ port_action_handle_flush(portid_t port_id) /* Poisoning to make sure PMDs update it in case of error. */ memset(&error, 0x44, sizeof(error)); - if (pia->handle != NULL && - rte_flow_action_handle_destroy - (port_id, pia->handle, &error) != 0) { - printf("Indirect action #%u not destroyed\n", pia->id); - ret = port_flow_complain(&error); + if (pia->handle != NULL) { + ret = pia->type == + RTE_FLOW_ACTION_TYPE_INDIRECT_LIST ? + rte_flow_action_list_handle_destroy + (port_id, pia->list_handle, &error) : + rte_flow_action_handle_destroy + (port_id, pia->handle, &error); + if (ret) { + printf("Indirect action #%u not destroyed\n", + pia->id); + ret = port_flow_complain(&error); + } tmp = &pia->next; } else { *tmp = pia->next; @@ -2822,6 +2862,45 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, return ret; } +static void +queue_action_handle_create(portid_t port_id, uint32_t queue_id, + struct port_indirect_action *pia, + struct queue_job *job, + const struct rte_flow_op_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + if (action->type == RTE_FLOW_ACTION_TYPE_AGE) { + struct rte_flow_action_age *age = + (struct rte_flow_action_age *)(uintptr_t)(action->conf); + + pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION; + age->context = &pia->age_type; + } + /* Poisoning to make sure PMDs update it in case of error. */ + pia->handle = rte_flow_async_action_handle_create(port_id, queue_id, + attr, conf, action, + job, error); + pia->type = action->type; +} + +static void +queue_action_list_handle_create(portid_t port_id, uint32_t queue_id, + struct port_indirect_action *pia, + struct queue_job *job, + const struct rte_flow_op_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + /* Poisoning to make sure PMDs update it in case of error. */ + pia->type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST; + pia->list_handle = rte_flow_async_action_list_handle_create + (port_id, queue_id, attr, conf, action, + job, error); +} + /** Enqueue indirect action create operation. 
*/ int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, @@ -2835,6 +2914,8 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, int ret; struct rte_flow_error error; struct queue_job *job; + bool is_indirect_list = action[1].type != RTE_FLOW_ACTION_TYPE_END; + ret = action_alloc(port_id, id, &pia); if (ret) @@ -2853,17 +2934,16 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, job->type = QUEUE_JOB_TYPE_ACTION_CREATE; job->pia = pia; - if (action->type == RTE_FLOW_ACTION_TYPE_AGE) { - struct rte_flow_action_age *age = - (struct rte_flow_action_age *)(uintptr_t)(action->conf); - - pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION; - age->context = &pia->age_type; - } /* Poisoning to make sure PMDs update it in case of error. */ memset(&error, 0x88, sizeof(error)); - pia->handle = rte_flow_async_action_handle_create(port_id, queue_id, - &attr, conf, action, job, &error); + + if (is_indirect_list) + queue_action_list_handle_create(port_id, queue_id, pia, job, + &attr, conf, action, &error); + else + queue_action_handle_create(port_id, queue_id, pia, job, &attr, + conf, action, &error); + if (!pia->handle) { uint32_t destroy_id = pia->id; port_queue_action_handle_destroy(port_id, queue_id, @@ -2871,7 +2951,6 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, free(job); return port_flow_complain(&error); } - pia->type = action->type; printf("Indirect action #%u creation queued\n", pia->id); return 0; } @@ -2920,9 +2999,15 @@ port_queue_action_handle_destroy(portid_t port_id, } job->type = QUEUE_JOB_TYPE_ACTION_DESTROY; job->pia = pia; - - if (rte_flow_async_action_handle_destroy(port_id, - queue_id, &attr, pia->handle, job, &error)) { + ret = pia->type == RTE_FLOW_ACTION_TYPE_INDIRECT_LIST ? + rte_flow_async_action_list_handle_destroy + (port_id, queue_id, + &attr, pia->list_handle, + job, &error) : + rte_flow_async_action_handle_destroy + (port_id, queue_id, &attr, pia->handle, + job, &error); + if (ret) { free(job); ret = port_flow_complain(&error); continue; diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index bdfbfd36d3..5c43b4db0b 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -228,7 +228,12 @@ struct port_indirect_action { struct port_indirect_action *next; /**< Next flow in list. */ uint32_t id; /**< Indirect action ID. */ enum rte_flow_action_type type; /**< Action type. */ - struct rte_flow_action_handle *handle; /**< Indirect action handle. */ + union { + struct rte_flow_action_handle *handle; + /**< Indirect action handle. */ + struct rte_flow_action_list_handle *list_handle; + /**< Indirect action list handle*/ + }; enum age_action_context_type age_type; /**< Age action context type. 
*/ }; @@ -921,7 +926,7 @@ void update_fwd_ports(portid_t new_pid); void set_fwd_eth_peer(portid_t port_id, char *peer_addr); void port_mtu_set(portid_t port_id, uint16_t mtu); -int port_action_handle_create(portid_t port_id, uint32_t id, +int port_action_handle_create(portid_t port_id, uint32_t id, bool indirect_list, const struct rte_flow_indir_action_conf *conf, const struct rte_flow_action *action); int port_action_handle_destroy(portid_t port_id, diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index 1a5087abad..10a1c1af77 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -158,6 +158,7 @@ drop = flag = inc_tcp_ack = inc_tcp_seq = +indirect_list = jump = mac_swap = mark = diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..25699ebaec 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -3300,6 +3300,125 @@ The ``quota`` value is reduced according to ``mode`` setting. | ``RTE_FLOW_QUOTA_MODE_L3`` | Count packet bytes starting from L3 | +------------------+----------------------------------------------------+ +Action: ``INDIRECT_LIST`` +^^^^^^^^^^^^^^^^^^^^^^^^^ + +Indirect API creates a shared flow action with unique action handle. +Flow rules can access the shared flow action and resources related to +that action through the indirect action handle. +In addition, the API allows to update existing shared flow action +configuration. After the update completes, new action configuration +is available to all flows that reference that shared action. + +Indirect actions list expands the indirect action API: + +- Indirect action list creates a handle for one or several + flow actions, while legacy indirect action handle references + single action only. + Input flow actions arranged in END terminated list. + +- Flow rule can provide rule specific configuration parameters to + existing shared handle. + Updates of flow rule specific configuration will not change the base + action configuration. + Base action configuration was set during the action creation. + +Indirect action list handle defines 2 types of resources: + +- Mutable handle resource can be changed during handle lifespan. + +- Immutable handle resource value is set during handle creation + and cannot be changed. + +There are 2 types of mutable indirect handle contexts: + +- Action mutable context is always shared between all flows + that referenced indirect actions list handle. + Action mutable context can be changed by explicit invocation + of indirect handle update function. + +- Flow mutable context is private to a flow. + Flow mutable context can be updated by indirect list handle + flow rule configuration. + +Indirect action types - immutable, action / flow mutable, are mutually +exclusive and depend on the action definition. + +If indirect list handle was created from a list of actions A1 / A2 ... An / END +indirect list flow action can update Ai flow mutable context in the +action configuration parameter. +Indirect list action configuration is and array [C1, C2, .., Cn] +where Ci corresponds to Ai in the action handle source. +Ci configuration element points Ai flow mutable update, or it's NULL +if Ai has no flow mutable update. +Indirect list action configuration is NULL if the action has no flow +mutable updates. Otherwise it points to an array of n flow mutable +configuration pointers. + +**Template API:** + +*Action template format:* + +``template .. 
+
+**Template API:**
+
+*Action template format:*
+
+``template .. indirect_list handle Htmpl conf Ctmpl ..``
+
+``mask .. indirect_list handle Hmask conf Cmask ..``
+
+- If Htmpl was masked (Hmask != 0), it will be fixed in that template.
+  Otherwise, the indirect action value is set in a flow rule.
+
+- If Htmpl and Ctmpl[i] were masked (Hmask != 0 and Cmask[i] != 0),
+  Htmpl's Ai action flow mutable context will be updated to
+  the Ctmpl[i] values and will be fixed in that template.
+
+*Flow rule format:*
+
+``actions .. indirect_list handle Hflow conf Cflow ..``
+
+- If Htmpl was not masked in the actions template, Hflow references an
+  action of the same type as Htmpl.
+
+- Cflow[i] updates the handle's Ai flow mutable configuration if
+  Ci was not masked in the action template.
+
+.. _table_rte_flow_action_indirect_list:
+
+.. table:: INDIRECT_LIST
+
+   +------------------+----------------------------------+
+   | Field            | Value                            |
+   +==================+==================================+
+   | ``handle``       | Indirect action list handle      |
+   +------------------+----------------------------------+
+   | ``conf``         | Flow mutable configuration array |
+   +------------------+----------------------------------+
+
+.. code-block:: text
+
+      flow 1:
+      / indirect handle H conf C1 /
+            |                |
+            |                |  flow 2:
+            |                |  / indirect handle H conf C2 /
+            |                |        |              |
+            |                |        |              |
+            |                |        |              |
+  =========================================================
+  ^         |                |        |              |
+  |         |                V        |              V
+  |         |        ~~~~~~~~~~~~~~   |      ~~~~~~~~~~~~~~~
+  |         |         flow mutable    |       flow mutable
+  |         |          context 1      |        context 2
+  |         |        ~~~~~~~~~~~~~~   |      ~~~~~~~~~~~~~~~
+  indirect  |                |        |
+  action    |                |        |
+  context   |                V        V
+  |         -----------------------------------------------------
+  |                       action mutable context
+  |         -----------------------------------------------------
+  v                      action immutable context
+  =========================================================
+
 Negative types
 ~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..d9643a35ef 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -55,6 +55,8 @@ New Features
      Also, make sure to start the actual text at the margin.
======================================================= + * **Added INDIRECT_LIST flow action.** + Removed Items ------------- diff --git a/lib/ethdev/ethdev_trace.h b/lib/ethdev/ethdev_trace.h index 3dc7d028b8..7577faf8ed 100644 --- a/lib/ethdev/ethdev_trace.h +++ b/lib/ethdev/ethdev_trace.h @@ -2447,6 +2447,94 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_int(ret); ) +RTE_TRACE_POINT_FP( + rte_flow_trace_action_list_handle_create, + RTE_TRACE_POINT_ARGS + (uint16_t port_id, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, int ret), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(conf); + rte_trace_point_emit_ptr(actions); + rte_trace_point_emit_int(ret); +) + +RTE_TRACE_POINT_FP( + rte_flow_trace_action_list_handle_destroy, + RTE_TRACE_POINT_ARGS + (uint16_t port_id, + const struct rte_flow_action_list_handle *handle, int ret), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(handle); + rte_trace_point_emit_int(ret); +) + +RTE_TRACE_POINT_FP( + rte_flow_trace_async_action_list_handle_create, + RTE_TRACE_POINT_ARGS + (uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action, + const void *user_data, int ret), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(queue_id); + rte_trace_point_emit_ptr(op_attr); + rte_trace_point_emit_ptr(conf); + rte_trace_point_emit_ptr(action); + rte_trace_point_emit_ptr(user_data); + rte_trace_point_emit_int(ret); +) + +RTE_TRACE_POINT_FP( + rte_flow_trace_async_action_list_handle_destroy, + RTE_TRACE_POINT_ARGS + (uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + const struct rte_flow_action_list_handle *handle, + const void *user_data, int ret), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(queue_id); + rte_trace_point_emit_ptr(op_attr); + rte_trace_point_emit_ptr(handle); + rte_trace_point_emit_ptr(user_data); + rte_trace_point_emit_int(ret); +) + +RTE_TRACE_POINT_FP( + rte_flow_trace_action_list_handle_query_update, + RTE_TRACE_POINT_ARGS + (uint16_t port_id, + const struct rte_flow_action_list_handle *handle, + const void **update, void **query, + enum rte_flow_query_update_mode mode, int ret), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_ptr(handle); + rte_trace_point_emit_ptr(update); + rte_trace_point_emit_ptr(query); + rte_trace_point_emit_int(mode); + rte_trace_point_emit_int(ret); +) + +RTE_TRACE_POINT_FP( + rte_flow_trace_async_action_list_handle_query_update, + RTE_TRACE_POINT_ARGS + (uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct rte_flow_action_list_handle *handle, + const void **update, void **query, + enum rte_flow_query_update_mode mode, + void *user_data, int ret), + rte_trace_point_emit_u16(port_id); + rte_trace_point_emit_u32(queue_id); + rte_trace_point_emit_ptr(attr); + rte_trace_point_emit_ptr(handle); + rte_trace_point_emit_ptr(update); + rte_trace_point_emit_ptr(query); + rte_trace_point_emit_int(mode); + rte_trace_point_emit_ptr(user_data); + rte_trace_point_emit_int(ret); +) #ifdef __cplusplus } #endif diff --git a/lib/ethdev/ethdev_trace_points.c b/lib/ethdev/ethdev_trace_points.c index 61010cae56..5e1462a6fb 100644 --- a/lib/ethdev/ethdev_trace_points.c +++ b/lib/ethdev/ethdev_trace_points.c @@ -750,3 +750,21 @@ RTE_TRACE_POINT_REGISTER(rte_tm_trace_wred_profile_add, RTE_TRACE_POINT_REGISTER(rte_tm_trace_wred_profile_delete, 
lib.ethdev.tm.wred_profile_delete) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_action_list_handle_create, + lib.ethdev.flow.action_list_handle_create) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_action_list_handle_destroy, + lib.ethdev.flow.action_list_handle_destroy) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_action_list_handle_query_update, + lib.ethdev.flow.action_list_handle_query_update) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_list_handle_create, + lib.ethdev.flow.async_action_list_handle_create) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_list_handle_destroy, + lib.ethdev.flow.async_action_list_handle_destroy) + +RTE_TRACE_POINT_REGISTER(rte_flow_trace_async_action_list_handle_query_update, + lib.ethdev.flow.async_action_list_handle_query_update) diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 69e6e749f7..71c9b6cb84 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -259,6 +259,8 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = { MK_FLOW_ACTION(METER_MARK, sizeof(struct rte_flow_action_meter_mark)), MK_FLOW_ACTION(SEND_TO_KERNEL, 0), MK_FLOW_ACTION(QUOTA, sizeof(struct rte_flow_action_quota)), + MK_FLOW_ACTION(INDIRECT_LIST, + sizeof(struct rte_flow_action_indirect_list)), }; int @@ -2171,3 +2173,172 @@ rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id, user_data, error); return flow_err(port_id, ret, error); } + +struct rte_flow_action_list_handle * +rte_flow_action_list_handle_create(uint16_t port_id, + const + struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + struct rte_flow_error *error) +{ + int ret; + struct rte_eth_dev *dev; + const struct rte_flow_ops *ops; + struct rte_flow_action_list_handle *handle; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL); + ops = rte_flow_ops_get(port_id, error); + if (!ops || !ops->action_list_handle_create) { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "action_list handle not supported"); + return NULL; + } + dev = &rte_eth_devices[port_id]; + handle = ops->action_list_handle_create(dev, conf, actions, error); + ret = flow_err(port_id, -rte_errno, error); + rte_flow_trace_action_list_handle_create(port_id, conf, actions, ret); + return handle; +} + +int +rte_flow_action_list_handle_destroy(uint16_t port_id, + struct rte_flow_action_list_handle *handle, + struct rte_flow_error *error) +{ + int ret; + struct rte_eth_dev *dev; + const struct rte_flow_ops *ops; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + ops = rte_flow_ops_get(port_id, error); + if (!ops || !ops->action_list_handle_destroy) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "action_list handle not supported"); + dev = &rte_eth_devices[port_id]; + ret = ops->action_list_handle_destroy(dev, handle, error); + ret = flow_err(port_id, ret, error); + rte_flow_trace_action_list_handle_destroy(port_id, handle, ret); + return ret; +} + +struct rte_flow_action_list_handle * +rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct + rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + void *user_data, + struct rte_flow_error *error) +{ + int ret; + struct rte_eth_dev *dev; + const struct rte_flow_ops *ops; + struct rte_flow_action_list_handle *handle; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL); + ops = rte_flow_ops_get(port_id, error); + if (!ops || 
!ops->async_action_list_handle_create) { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "action_list handle not supported"); + return NULL; + } + dev = &rte_eth_devices[port_id]; + handle = ops->async_action_list_handle_create(dev, queue_id, attr, conf, + actions, user_data, + error); + ret = flow_err(port_id, -rte_errno, error); + rte_flow_trace_async_action_list_handle_create(port_id, queue_id, attr, + conf, actions, user_data, + ret); + return handle; +} + +int +rte_flow_async_action_list_handle_destroy + (uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_list_handle *handle, + void *user_data, struct rte_flow_error *error) +{ + int ret; + struct rte_eth_dev *dev; + const struct rte_flow_ops *ops; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + ops = rte_flow_ops_get(port_id, error); + if (!ops || !ops->async_action_list_handle_destroy) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "async action_list handle not supported"); + dev = &rte_eth_devices[port_id]; + ret = ops->async_action_list_handle_destroy(dev, queue_id, op_attr, + handle, user_data, error); + ret = flow_err(port_id, ret, error); + rte_flow_trace_async_action_list_handle_destroy(port_id, queue_id, + op_attr, handle, + user_data, ret); + return ret; +} + +int +rte_flow_action_list_handle_query_update + (uint16_t port_id, + const struct rte_flow_action_list_handle *handle, + const void **update, void **query, + enum rte_flow_query_update_mode mode, + struct rte_flow_error *error) +{ + int ret; + struct rte_eth_dev *dev; + const struct rte_flow_ops *ops; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + ops = rte_flow_ops_get(port_id, error); + if (!ops || !ops->action_list_handle_query_update) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "action_list query_update not supported"); + dev = &rte_eth_devices[port_id]; + ret = ops->action_list_handle_query_update(dev, handle, update, query, + mode, error); + ret = flow_err(port_id, ret, error); + rte_flow_trace_action_list_handle_query_update(port_id, handle, update, + query, mode, ret); + return ret; +} + +int +rte_flow_async_action_list_handle_query_update + (uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct rte_flow_action_list_handle *handle, + const void **update, void **query, + enum rte_flow_query_update_mode mode, + void *user_data, struct rte_flow_error *error) +{ + int ret; + struct rte_eth_dev *dev; + const struct rte_flow_ops *ops; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + ops = rte_flow_ops_get(port_id, error); + if (!ops || !ops->async_action_list_handle_query_update) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "action_list async query_update not supported"); + dev = &rte_eth_devices[port_id]; + ret = ops->async_action_list_handle_query_update(dev, queue_id, attr, + handle, update, query, + mode, user_data, + error); + ret = flow_err(port_id, ret, error); + rte_flow_trace_async_action_list_handle_query_update(port_id, queue_id, + attr, handle, + update, query, + mode, user_data, + ret); + return ret; +} diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..b65096b3b7 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -2912,6 +2912,13 @@ enum rte_flow_action_type { * applied to the given ethdev Rx queue. 
*/ RTE_FLOW_ACTION_TYPE_SKIP_CMAN, + + /** + * Action handle to reference flow actions list. + * + * @see struct rte_flow_action_indirect_list + */ + RTE_FLOW_ACTION_TYPE_INDIRECT_LIST, }; /** @@ -6118,6 +6125,268 @@ rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id, void *user_data, struct rte_flow_error *error); +struct rte_flow_action_list_handle; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Configure INDIRECT_LIST flow action. + * + * @see RTE_FLOW_ACTION_TYPE_INDIRECT_LIST + */ +struct rte_flow_action_indirect_list { + /** Indirect action list handle */ + struct rte_flow_action_list_handle *handle; + /** + * Flow mutable configuration array. + * NULL if the handle has no flow mutable configuration update. + * Otherwise, if the handle was created with list A1 / A2 .. An / END + * size of conf is n. + * conf[i] points to flow mutable update of Ai in the handle + * actions list or NULL if Ai has no update. + */ + const void **conf; +}; + + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create an indirect flow action object from flow actions list. + * The object is identified by a unique handle. + * The handle has single state and configuration + * across all the flow rules using it. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] conf + * Action configuration for the indirect action list creation. + * @param[in] actions + * Specific configuration of the indirect action lists. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * @return + * A valid handle in case of success, NULL otherwise and rte_errno is set + * to one of the error codes defined: + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-EINVAL) if *actions* list invalid. + * - (-ENOTSUP) if *action* list element valid but unsupported. + */ +__rte_experimental +struct rte_flow_action_list_handle * +rte_flow_action_list_handle_create(uint16_t port_id, + const + struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Async function call to create an indirect flow action object + * from flow actions list. + * The object is identified by a unique handle. + * The handle has single state and configuration + * across all the flow rules using it. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] queue_id + * Flow queue which is used to update the rule. + * @param[in] attr + * Indirect action update operation attributes. + * @param[in] conf + * Action configuration for the indirect action list creation. + * @param[in] actions + * Specific configuration of the indirect action list. + * @param[in] user_data + * The user data that will be returned on async completion event. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * @return + * A valid handle in case of success, NULL otherwise and rte_errno is set + * to one of the error codes defined: + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. 
+ * - (-EINVAL) if *actions* list invalid. + * - (-ENOTSUP) if *action* list element valid but unsupported. + */ +__rte_experimental +struct rte_flow_action_list_handle * +rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct + rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + void *user_data, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy indirect actions list by handle. + * + * @param[in] port_id + * The port identifier of the Ethernet device. + * @param[in] handle + * Handle for the indirect actions list to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if actions list pointed by *action* handle was not found. + * - (-EBUSY) if actions list pointed by *action* handle still used + */ +__rte_experimental +int +rte_flow_action_list_handle_destroy(uint16_t port_id, + struct rte_flow_action_list_handle *handle, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action list destruction operation. + * The destroy queue must be the same + * as the queue on which the action was created. + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to destroy the rule. + * @param[in] op_attr + * Indirect action destruction operation attributes. + * @param[in] handle + * Handle for the indirect action object to be destroyed. + * @param[in] user_data + * The user data that will be returned on the completion events. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if actions list pointed by *action* handle was not found. + * - (-EBUSY) if actions list pointed by *action* handle still used + */ +__rte_experimental +int +rte_flow_async_action_list_handle_destroy + (uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_list_handle *handle, + void *user_data, struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Query and/or update indirect flow actions list. + * If both query and update not NULL, the function atomically + * queries and updates indirect action. Query and update are carried in order + * specified in the mode parameter. + * If ether query or update is NULL, the function executes + * complementing operation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param handle + * Handle for the indirect actions list object to be updated. + * @param update + * If not NULL, update profile specification used to modify the action + * pointed by handle. + * @see struct rte_flow_action_indirect_list + * @param query + * If not NULL pointer to storage for the associated query data type. 
+ * @see struct rte_flow_action_indirect_list + * @param mode + * Operational mode. + * @param error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOTSUP) if underlying device does not support this functionality. + * - (-EINVAL) if *handle* or *mode* invalid or + * both *query* and *update* are NULL. + */ +__rte_experimental +int +rte_flow_action_list_handle_query_update(uint16_t port_id, + const struct + rte_flow_action_list_handle *handle, + const void **update, void **query, + enum rte_flow_query_update_mode mode, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue async indirect flow actions list query and/or update + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue which is used to update the rule. + * @param attr + * Indirect action update operation attributes. + * @param handle + * Handle for the indirect actions list object to be updated. + * @param update + * If not NULL, update profile specification used to modify the action + * pointed by handle. + * @see struct rte_flow_action_indirect_list + * @param query + * If not NULL, pointer to storage for the associated query data type. + * Query result returned on async completion event. + * @see struct rte_flow_action_indirect_list + * @param mode + * Operational mode. + * @param user_data + * The user data that will be returned on async completion event. + * @param error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOTSUP) if underlying device does not support this functionality. + * - (-EINVAL) if *handle* or *mode* invalid or + * both *update* and *query* are NULL. 
+ */ +__rte_experimental +int +rte_flow_async_action_list_handle_query_update(uint16_t port_id, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct + rte_flow_action_list_handle *handle, + const void **update, void **query, + enum rte_flow_query_update_mode mode, + void *user_data, + struct rte_flow_error *error); + + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index a129a4605d..af63ef9b5c 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -121,6 +121,17 @@ struct rte_flow_ops { const void *update, void *query, enum rte_flow_query_update_mode qu_mode, struct rte_flow_error *error); + /** @see rte_flow_action_list_handle_create() */ + struct rte_flow_action_list_handle *(*action_list_handle_create) + (struct rte_eth_dev *dev, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action actions[], + struct rte_flow_error *error); + /** @see rte_flow_action_list_handle_destroy() */ + int (*action_list_handle_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_action_list_handle *handle, + struct rte_flow_error *error); /** See rte_flow_tunnel_decap_set() */ int (*tunnel_decap_set) (struct rte_eth_dev *dev, @@ -302,6 +313,36 @@ struct rte_flow_ops { const void *update, void *query, enum rte_flow_query_update_mode qu_mode, void *user_data, struct rte_flow_error *error); + /** @see rte_flow_async_action_list_handle_create() */ + struct rte_flow_action_list_handle * + (*async_action_list_handle_create) + (struct rte_eth_dev *dev, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *actions, + void *user_data, struct rte_flow_error *error); + /** @see rte_flow_async_action_list_handle_destroy() */ + int (*async_action_list_handle_destroy) + (struct rte_eth_dev *dev, uint32_t queue_id, + const struct rte_flow_op_attr *op_attr, + struct rte_flow_action_list_handle *action_handle, + void *user_data, struct rte_flow_error *error); + /** @see rte_flow_action_list_handle_query_update() */ + int (*action_list_handle_query_update) + (struct rte_eth_dev *dev, + const struct rte_flow_action_list_handle *handle, + const void **update, void **query, + enum rte_flow_query_update_mode mode, + struct rte_flow_error *error); + /** @see rte_flow_async_action_list_handle_query_update() */ + int (*async_action_list_handle_query_update) + (struct rte_eth_dev *dev, uint32_t queue_id, + const struct rte_flow_op_attr *attr, + const struct rte_flow_action_list_handle *handle, + const void **update, void **query, + enum rte_flow_query_update_mode mode, + void *user_data, struct rte_flow_error *error); + }; /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 357d1a88c0..02372b3a7e 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -299,6 +299,14 @@ EXPERIMENTAL { rte_flow_action_handle_query_update; rte_flow_async_action_handle_query_update; rte_flow_async_create_by_index; + + # added in 23.07 + rte_flow_action_list_handle_create; + rte_flow_action_list_handle_destroy; + rte_flow_action_list_handle_query_update; + rte_flow_async_action_list_handle_create; + rte_flow_async_action_list_handle_destroy; + rte_flow_async_action_list_handle_query_update; }; INTERNAL { From patchwork Tue May 30 03:06:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dong Zhou X-Patchwork-Id: 127681 X-Patchwork-Delegate: 
ferruh.yigit@amd.com
Return-Path:
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
 by inbox.dpdk.org (Postfix) with ESMTP id A0B6F42BDA;
 Tue, 30 May 2023 05:06:56 +0200 (CEST)
Received: from mails.dpdk.org (localhost [127.0.0.1])
 by mails.dpdk.org (Postfix) with ESMTP id 29B8F40F18;
 Tue, 30 May 2023 05:06:56 +0200 (CEST)
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on2057.outbound.protection.outlook.com [40.107.243.57])
 by mails.dpdk.org (Postfix) with ESMTP id CB432406BC
 for ; Tue, 30 May 2023 05:06:54 +0200 (CEST)
From: Dong Zhou
To: , , Aman Singh , Yuying Zhang , "Ferruh Yigit" ,
 Andrew Rybchenko , Olivier Matz
CC:
Subject: [PATCH v4] ethdev: add flow item for RoCE infiniband BTH
Date: Tue, 30 May 2023 06:06:19 +0300
Message-ID: <20230530030620.2935042-1-dongzhou@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230525074041.2370704-1-dongzhou@nvidia.com>
References: <20230525074041.2370704-1-dongzhou@nvidia.com>
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT115.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: CYYPR12MB8750 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org IB(InfiniBand) is one type of networking used in high-performance computing with high throughput and low latency. Like Ethernet, IB defines a layered protocol (Physical, Link, Network, Transport Layers). IB provides native support for RDMA(Remote DMA), an extension of the DMA that allows direct access to remote host memory without CPU intervention. IB network requires NICs and switches to support the IB protocol. RoCE(RDMA over Converged Ethernet) is a network protocol that allows RDMA to run on Ethernet. RoCE encapsulates IB packets on Ethernet and has two versions, RoCEv1 and RoCEv2. RoCEv1 is an Ethernet link layer protocol, IB packets are encapsulated in the Ethernet layer and use Ethernet type 0x8915. RoCEv2 is an internet layer protocol, IB packets are encapsulated in UDP payload and use a destination port 4791, The format of the RoCEv2 packet is as follows: ETH + IP + UDP(dport 4791) + IB(BTH + ExtHDR + PAYLOAD + CRC) BTH(Base Transport Header) is the IB transport layer header, RoCEv1 and RoCEv2 both contain this header. This patch introduces a new RTE item to match the IB BTH in RoCE packets. One use of this match is that the user can monitor RoCEv2's CNP(Congestion Notification Packet) by matching BTH opcode 0x81. This patch also adds the testpmd command line to match the RoCEv2 BTH. Usage example: testpmd> flow create 0 group 1 ingress pattern eth / ipv4 / udp dst is 4791 / ib_bth opcode is 0x81 dst_qp is 0xd3 / end actions queue index 0 / end Signed-off-by: Dong Zhou Acked-by: Ori Kam Acked-by: Andrew Rybchenko v2: - Change "ethernet" name to "Ethernet" in the commit log. - Add "RoCE" and "IB" 2 words to words-case.txt. - Add "rte_byteorder.h" header file in "rte_ib.h" to fix compile errors. - Add "Acked-by" labels in the first ethdev patch. v3: - Do rebase to fix the patch apply failure. - Add "Acked-by" label in the second net/mlx5 patch. v4: - Split this series of patches, only keep the first ethdev patch. --- app/test-pmd/cmdline_flow.c | 58 +++++++++++++++++ devtools/words-case.txt | 2 + doc/guides/nics/features/default.ini | 1 + doc/guides/prog_guide/rte_flow.rst | 7 +++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 +++ lib/ethdev/rte_flow.c | 1 + lib/ethdev/rte_flow.h | 27 ++++++++ lib/net/meson.build | 1 + lib/net/rte_ib.h | 70 +++++++++++++++++++++ 9 files changed, 174 insertions(+) create mode 100644 lib/net/rte_ib.h diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 58939ec321..3ade229ffc 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -496,6 +496,11 @@ enum index { ITEM_QUOTA_STATE_NAME, ITEM_AGGR_AFFINITY, ITEM_AGGR_AFFINITY_VALUE, + ITEM_IB_BTH, + ITEM_IB_BTH_OPCODE, + ITEM_IB_BTH_PKEY, + ITEM_IB_BTH_DST_QPN, + ITEM_IB_BTH_PSN, /* Validate/create actions. 
*/ ACTIONS, @@ -1452,6 +1457,7 @@ static const enum index next_item[] = { ITEM_METER, ITEM_QUOTA, ITEM_AGGR_AFFINITY, + ITEM_IB_BTH, END_SET, ZERO, }; @@ -1953,6 +1959,15 @@ static const enum index item_aggr_affinity[] = { ZERO, }; +static const enum index item_ib_bth[] = { + ITEM_IB_BTH_OPCODE, + ITEM_IB_BTH_PKEY, + ITEM_IB_BTH_DST_QPN, + ITEM_IB_BTH_PSN, + ITEM_NEXT, + ZERO, +}; + static const enum index next_action[] = { ACTION_END, ACTION_VOID, @@ -5523,6 +5538,46 @@ static const struct token token_list[] = { .call = parse_quota_state_name, .comp = comp_quota_state_name }, + [ITEM_IB_BTH] = { + .name = "ib_bth", + .help = "match ib bth fields", + .priv = PRIV_ITEM(IB_BTH, + sizeof(struct rte_flow_item_ib_bth)), + .next = NEXT(item_ib_bth), + .call = parse_vc, + }, + [ITEM_IB_BTH_OPCODE] = { + .name = "opcode", + .help = "match ib bth opcode", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.opcode)), + }, + [ITEM_IB_BTH_PKEY] = { + .name = "pkey", + .help = "partition key", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.pkey)), + }, + [ITEM_IB_BTH_DST_QPN] = { + .name = "dst_qp", + .help = "destination qp", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.dst_qp)), + }, + [ITEM_IB_BTH_PSN] = { + .name = "psn", + .help = "packet sequence number", + .next = NEXT(item_ib_bth, NEXT_ENTRY(COMMON_UNSIGNED), + item_param), + .args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_ib_bth, + hdr.psn)), + }, /* Validate/create actions. */ [ACTIONS] = { .name = "actions", @@ -11849,6 +11904,9 @@ flow_item_default_mask(const struct rte_flow_item *item) case RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY: mask = &rte_flow_item_aggr_affinity_mask; break; + case RTE_FLOW_ITEM_TYPE_IB_BTH: + mask = &rte_flow_item_ib_bth_mask; + break; default: break; } diff --git a/devtools/words-case.txt b/devtools/words-case.txt index 42c7861b68..5bd34e8b88 100644 --- a/devtools/words-case.txt +++ b/devtools/words-case.txt @@ -27,6 +27,7 @@ GENEVE GTPU GUID HW +IB ICMP ID IO @@ -74,6 +75,7 @@ QinQ RDMA RETA ROC +RoCE RQ RSS RVU diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index 1a5087abad..1738715e26 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -104,6 +104,7 @@ gtpc = gtpu = gtp_psc = higig2 = +ib_bth = icmp = icmp6 = icmp6_echo_request = diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 32fc45516a..e2957df71c 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -1551,6 +1551,13 @@ Matches flow quota state set by quota action. - ``state``: Flow quota state +Item: ``IB_BTH`` +^^^^^^^^^^^^^^^^ + +Matches an InfiniBand base transport header in RoCE packet. + +- ``hdr``: InfiniBand base transport header definition (``rte_ib.h``). + Actions ~~~~~~~ diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 8f23847859..4bad244029 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3781,6 +3781,13 @@ This section lists supported pattern items and their attributes, if any. - ``send_to_kernel``: send packets to kernel. +- ``ib_bth``: match InfiniBand BTH(base transport header). 
+ + - ``opcode {unsigned}``: Opcode. + - ``pkey {unsigned}``: Partition key. + - ``dst_qp {unsigned}``: Destination Queue Pair. + - ``psn {unsigned}``: Packet Sequence Number. + Actions list ^^^^^^^^^^^^ diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 69e6e749f7..6e099deca3 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -164,6 +164,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = { MK_FLOW_ITEM(IPV6_ROUTING_EXT, sizeof(struct rte_flow_item_ipv6_routing_ext)), MK_FLOW_ITEM(QUOTA, sizeof(struct rte_flow_item_quota)), MK_FLOW_ITEM(AGGR_AFFINITY, sizeof(struct rte_flow_item_aggr_affinity)), + MK_FLOW_ITEM(IB_BTH, sizeof(struct rte_flow_item_ib_bth)), }; /** Generate flow_action[] entry. */ diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 713ba8b65c..2b7f144c27 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -38,6 +38,7 @@ #include #include #include +#include #ifdef __cplusplus extern "C" { @@ -672,6 +673,13 @@ enum rte_flow_item_type { * @see struct rte_flow_item_aggr_affinity. */ RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY, + + /** + * Matches an InfiniBand base transport header in RoCE packet. + * + * See struct rte_flow_item_ib_bth. + */ + RTE_FLOW_ITEM_TYPE_IB_BTH, }; /** @@ -2260,6 +2268,25 @@ rte_flow_item_aggr_affinity_mask = { }; #endif +/** + * RTE_FLOW_ITEM_TYPE_IB_BTH. + * + * Matches an InfiniBand base transport header in RoCE packet. + */ +struct rte_flow_item_ib_bth { + struct rte_ib_bth hdr; /**< InfiniBand base transport header definition. */ +}; + +/** Default mask for RTE_FLOW_ITEM_TYPE_IB_BTH. */ +#ifndef __cplusplus +static const struct rte_flow_item_ib_bth rte_flow_item_ib_bth_mask = { + .hdr = { + .opcode = 0xff, + .dst_qp = "\xff\xff\xff", + }, +}; +#endif + /** * Action types. * diff --git a/lib/net/meson.build b/lib/net/meson.build index 379d161ee0..b7a0684101 100644 --- a/lib/net/meson.build +++ b/lib/net/meson.build @@ -22,6 +22,7 @@ headers = files( 'rte_geneve.h', 'rte_l2tpv2.h', 'rte_ppp.h', + 'rte_ib.h', ) sources = files( diff --git a/lib/net/rte_ib.h b/lib/net/rte_ib.h new file mode 100644 index 0000000000..9eab5f9e15 --- /dev/null +++ b/lib/net/rte_ib.h @@ -0,0 +1,70 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 NVIDIA Corporation & Affiliates + */ + +#ifndef RTE_IB_H +#define RTE_IB_H + +/** + * @file + * + * InfiniBand headers definitions + * + * The infiniBand headers are used by RoCE (RDMA over Converged Ethernet). + */ + +#include + +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * InfiniBand Base Transport Header according to + * IB Specification Vol 1-Release-1.4. + */ +__extension__ +struct rte_ib_bth { + uint8_t opcode; /**< Opcode. */ +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t tver:4; /**< Transport Header Version. */ + uint8_t padcnt:2; /**< Pad Count. */ + uint8_t m:1; /**< MigReq. */ + uint8_t se:1; /**< Solicited Event. */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t se:1; /**< Solicited Event. */ + uint8_t m:1; /**< MigReq. */ + uint8_t padcnt:2; /**< Pad Count. */ + uint8_t tver:4; /**< Transport Header Version. */ +#endif + rte_be16_t pkey; /**< Partition key. */ +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t rsvd0:6; /**< Reserved. */ + uint8_t b:1; /**< BECN. */ + uint8_t f:1; /**< FECN. */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t f:1; /**< FECN. */ + uint8_t b:1; /**< BECN. */ + uint8_t rsvd0:6; /**< Reserved. 
*/ +#endif + uint8_t dst_qp[3]; /**< Destination QP */ +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t rsvd1:7; /**< Reserved. */ + uint8_t a:1; /**< Acknowledge Request. */ +#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN + uint8_t a:1; /**< Acknowledge Request. */ + uint8_t rsvd1:7; /**< Reserved. */ +#endif + uint8_t psn[3]; /**< Packet Sequence Number */ +} __rte_packed; + +/** RoCEv2 default port. */ +#define RTE_ROCEV2_DEFAULT_PORT 4791 + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_IB_H */
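
For reference, a minimal C sketch (an editorial illustration, not part of the patch) of the rte_flow equivalent of the testpmd command in the commit message: match RoCEv2 CNP traffic by BTH opcode 0x81 and destination QP 0xd3 behind UDP destination port 4791, and steer it to Rx queue 0. The port is assumed to be configured and started, and error handling is omitted.

#include <rte_flow.h>
#include <rte_ib.h>

static struct rte_flow *
create_cnp_rule(uint16_t port_id)
{
	struct rte_flow_error error;
	const struct rte_flow_attr attr = { .group = 1, .ingress = 1 };
	const struct rte_flow_item_udp udp_spec = {
		.hdr.dst_port = RTE_BE16(RTE_ROCEV2_DEFAULT_PORT),
	};
	const struct rte_flow_item_udp udp_mask = {
		.hdr.dst_port = RTE_BE16(0xffff),
	};
	/* dst_qp is a 24-bit big-endian field: 0xd3 -> {0x00, 0x00, 0xd3}. */
	const struct rte_flow_item_ib_bth bth_spec = {
		.hdr = { .opcode = 0x81, .dst_qp = "\x00\x00\xd3" },
	};
	const struct rte_flow_item_ib_bth bth_mask = {
		.hdr = { .opcode = 0xff, .dst_qp = "\xff\xff\xff" },
	};
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,
		  .spec = &udp_spec, .mask = &udp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_IB_BTH,
		  .spec = &bth_spec, .mask = &bth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action_queue queue = { .index = 0 };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}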