From patchwork Sun Feb 6 03:25:17 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106894
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints
Date: Sun, 6 Feb 2022 05:25:17 +0200
Message-ID: <20220206032526.816079-2-akozyrev@nvidia.com>
In-Reply-To: <20220206032526.816079-1-akozyrev@nvidia.com>
References: <20220118153027.3947448-1-akozyrev@nvidia.com> <20220206032526.816079-1-akozyrev@nvidia.com>
List-Id: DPDK patches and discussions
The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, the PMD may use hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows all the needed resources to be pre-allocated beforehand;
these resources can then be used at a later stage without costly
allocations. Every PMD may use only a subset of the hints and ignore
unused ones, or fail in case the requested configuration is not supported.

The rte_flow_info_get() function is available to retrieve information
about the supported pre-configurable resources. Both of these functions
must be called before any other usage of the flow API engine.

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 doc/guides/prog_guide/rte_flow.rst     | 37 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |  4 ++
 lib/ethdev/rte_flow.c                  | 40 +++++++++++++
 lib/ethdev/rte_flow.h                  | 82 ++++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           | 10 ++++
 lib/ethdev/version.map                 |  4 ++
 6 files changed, 177 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b4aa9c47c2..5b4c5dd609 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3589,6 +3589,43 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow engine configuration
+-------------------------
+
+Configure flow API management.
+
+An application may provide some hints at the initialization phase about
+rules engine configuration and/or expected flow rules characteristics.
+These hints may be used by PMD to pre-allocate resources and configure NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow API management configuration and
+pre-allocates needed resources beforehand to avoid costly allocations later.
+Hints about the expected number of counters or meters in an application,
+for example, allow PMD to prepare and optimize NIC memory layout in advance.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                      const struct rte_flow_port_attr *port_attr,
+                      struct rte_flow_error *error);
+
+Information about resources that can benefit from pre-allocation can be
+retrieved via ``rte_flow_info_get()`` API. It returns the maximum number
+of pre-configurable resources for a given port on a system.
+
+.. code-block:: c
+
+   int
+   rte_flow_info_get(uint16_t port_id,
+                     struct rte_flow_port_attr *port_attr,
+                     struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode

diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index bf2e3f78a9..8593db3f6a 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,10 @@ New Features
   Also, make sure to start the actual text at the margin.
   =======================================================
 
+* ethdev: Added ``rte_flow_configure`` API to configure Flow Management
+  engine, allowing to pre-allocate some resources for better performance.
+  Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
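[Editor's note] The query-then-configure contract described above (report maximums via ``rte_flow_info_get()``, accept or reject a request in ``rte_flow_configure()``) can be illustrated with a minimal self-contained sketch. The ``sketch_*`` names and the mock limit values are hypothetical stand-ins, not the real DPDK implementation; only the ``rte_flow_port_attr`` field names follow the patch.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the struct added by this patch (field names follow the
 * patch; this is not the real DPDK definition). */
struct rte_flow_port_attr_sketch {
	uint32_t version;        /* struct layout version, should be 0 */
	uint32_t nb_counters;    /* COUNT actions to pre-configure */
	uint32_t nb_aging_flows; /* AGE actions to pre-configure */
	uint32_t nb_meters;      /* METER actions to pre-configure */
};

/* Mock of rte_flow_info_get(): reports the maximum pre-configurable
 * resources; a real PMD would fill these from device capabilities. */
static int
sketch_info_get(uint16_t port_id, struct rte_flow_port_attr_sketch *attr)
{
	(void)port_id;
	attr->version = 0;
	attr->nb_counters = 1u << 16;   /* made-up hardware limits */
	attr->nb_aging_flows = 1u << 10;
	attr->nb_meters = 1u << 8;
	return 0;
}

/* Mock of rte_flow_configure(): succeeds only within the reported limits,
 * mirroring the "fail in case the requested configuration is not
 * supported" contract. Zero fields mean on-demand allocation only. */
static int
sketch_configure(uint16_t port_id, const struct rte_flow_port_attr_sketch *req)
{
	struct rte_flow_port_attr_sketch max;

	sketch_info_get(port_id, &max);
	if (req->nb_counters > max.nb_counters ||
	    req->nb_aging_flows > max.nb_aging_flows ||
	    req->nb_meters > max.nb_meters)
		return -1; /* a real PMD would set rte_errno to ENOTSUP */
	return 0;
}
```

A real application would call the configure step once, after ``rte_eth_dev_configure()`` and before creating any flow rule.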
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index a93f68abbc..e7e6478bed 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_attr *port_attr,
+		  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->info_get)) {
+		return flow_err(port_id,
+				ops->info_get(dev, port_attr, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->configure)) {
+		return flow_err(port_id,
+				ops->configure(dev, port_attr, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 1031fb246b..f3c7159484 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4853,6 +4853,88 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Resource pre-allocation settings.
+ * The zero value means on demand resource allocations only.
+ *
+ */
+struct rte_flow_port_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Number of counter actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging flows actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_flows;
+	/**
+	 * Number of traffic metering actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Retrieve configuration attributes supported by the port.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_attr *port_attr,
+		  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pre-configure the port's flow API engine.
+ *
+ * This API can only be invoked before the application
+ * starts using the rest of the flow library functions.
+ *
+ * The API can be invoked multiple times to change the
+ * settings. The port, however, may reject the changes.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..503700aec4 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,16 @@ struct rte_flow_ops {
		(struct rte_eth_dev *dev,
		 const struct rte_flow_item_flex_handle *handle,
		 struct rte_flow_error *error);
+	/** See rte_flow_info_get() */
+	int (*info_get)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 1f7359c846..59785c3634 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -256,6 +256,10 @@ EXPERIMENTAL {
	rte_flow_flex_item_create;
	rte_flow_flex_item_release;
	rte_flow_pick_transfer_proxy;
+
+	# added in 22.03
+	rte_flow_info_get;
+	rte_flow_configure;
 };
 
 INTERNAL {

From patchwork Sun Feb 6 03:25:18 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106895
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v3 02/10] ethdev: add flow item/action templates
Date: Sun, 6 Feb 2022 05:25:18 +0200
Message-ID: <20220206032526.816079-3-akozyrev@nvidia.com>
In-Reply-To: <20220206032526.816079-1-akozyrev@nvidia.com>
References: <20220118153027.3947448-1-akozyrev@nvidia.com> <20220206032526.816079-1-akozyrev@nvidia.com>
List-Id: DPDK patches and discussions
Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask)
without values. The actions template holds a list of action types that
will be used together in the same rule. The specific values for items
and actions will be given only during the rule creation.
A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 doc/guides/prog_guide/rte_flow.rst     | 124 +++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 141 +++++++++++++
 lib/ethdev/rte_flow.h                  | 274 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 590 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 5b4c5dd609..b7799c5abe 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3626,6 +3626,130 @@ of pre-configurable resources for a given port on a system.
 
       struct rte_flow_port_attr *port_attr,
       struct rte_flow_error *error);
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, a pattern template
+and an actions template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Pattern templates
+^^^^^^^^^^^^^^^^^
+
+The pattern template defines a common pattern (the item mask) without values.
+The mask value is used to select a field to match on, spec/last are ignored.
+The pattern template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+   struct rte_flow_pattern_template *
+   rte_flow_pattern_template_create(uint16_t port_id,
+       const struct rte_flow_pattern_template_attr *template_attr,
+       const struct rte_flow_item pattern[],
+       struct rte_flow_error *error);
+
+For example, to create a pattern template to match on the destination MAC:
+
+.. code-block:: c
+
+   struct rte_flow_item pattern[2] = {{0}};
+   struct rte_flow_item_eth eth_m = {0};
+   pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
+   eth_m.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff";
+   pattern[0].mask = &eth_m;
+   pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+   struct rte_flow_pattern_template *pattern_template =
+       rte_flow_pattern_template_create(port, &itr, &pattern, &error);
+
+The concrete value to match on will be provided at the rule creation.
+
+Actions templates
+^^^^^^^^^^^^^^^^^
+
+The actions template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The actions template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+   struct rte_flow_actions_template *
+   rte_flow_actions_template_create(uint16_t port_id,
+       const struct rte_flow_actions_template_attr *template_attr,
+       const struct rte_flow_action actions[],
+       const struct rte_flow_action masks[],
+       struct rte_flow_error *error);
+
+For example, to create an actions template with the same Mark ID
+but different Queue Index for every rule:
+
+.. code-block:: c
+
+   struct rte_flow_action actions[] = {
+       /* Mark ID is constant (4) for every rule, Queue Index is unique */
+       [0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+              .conf = &(struct rte_flow_action_mark){.id = 4}},
+       [1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+       [2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+   };
+   struct rte_flow_action masks[] = {
+       /* Assign to MARK mask any non-zero value to make it constant */
+       [0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+              .conf = &(struct rte_flow_action_mark){.id = 1}},
+       [1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+       [2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+   };
+
+   struct rte_flow_actions_template *at =
+       rte_flow_actions_template_create(port, &atr, &actions, &masks, &error);
+
+The concrete value for Queue Index will be provided at the rule creation.
+
+Flow table
+^^^^^^^^^^
+
+A table combines a number of pattern and actions templates along with shared flow
+rule attributes (group ID, priority and traffic direction). This way a PMD/HW
+can prepare all the resources needed for efficient flow rules creation in
+the datapath. To avoid any hiccups due to memory reallocation, the maximum
+number of flow rules is defined at table creation time. Any flow rule
+creation beyond the maximum table size is rejected. Application may create
+another table to accommodate more rules in this case.
+
+.. code-block:: c
+
+   struct rte_flow_table *
+   rte_flow_table_create(uint16_t port_id,
+       const struct rte_flow_table_attr *table_attr,
+       struct rte_flow_pattern_template *pattern_templates[],
+       uint8_t nb_pattern_templates,
+       struct rte_flow_actions_template *actions_templates[],
+       uint8_t nb_actions_templates,
+       struct rte_flow_error *error);
+
+A table can be created only after the Flow Rules management is configured
+and pattern and actions templates are created.
+
+.. code-block:: c
+
+   rte_flow_configure(port, *port_attr, *error);
+
+   struct rte_flow_pattern_template *pattern_templates[0] =
+       rte_flow_pattern_template_create(port, &itr, &pattern, &error);
+   struct rte_flow_actions_template *actions_templates[0] =
+       rte_flow_actions_template_create(port, &atr, &actions, &masks, &error);
+
+   struct rte_flow_table *table =
+       rte_flow_table_create(port, *table_attr,
+           *pattern_templates, nb_pattern_templates,
+           *actions_templates, nb_actions_templates,
+           *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode

diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 8593db3f6a..d23d1591df 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -59,6 +59,14 @@ New Features
   engine, allowing to pre-allocate some resources for better performance.
   Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
 
+* ethdev: Added ``rte_flow_table_create`` API to group flow rules with
+  the same flow attributes and common matching patterns and actions
+  defined by ``rte_flow_pattern_template_create`` and
+  ``rte_flow_actions_template_create`` respectively.
+  Corresponding functions to destroy these entities are:
+  ``rte_flow_table_destroy``, ``rte_flow_pattern_template_destroy``
+  and ``rte_flow_actions_template_destroy``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
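[Editor's note] The lifetime rule documented above (a template must not be destroyed until every table using it is destroyed first) can be modeled with a tiny reference-counting sketch. The ``sketch_*`` types and functions below are hypothetical stand-ins for illustration only; the real PMD bookkeeping is driver-specific and not part of this patch.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature model of the template/table dependency:
 * a table pins the pattern template it was created from. */
struct sketch_pattern_template { int refcnt; };
struct sketch_table { struct sketch_pattern_template *pt; };

/* Table creation takes a reference on the pattern template. */
static int
sketch_table_create(struct sketch_table *tbl,
		    struct sketch_pattern_template *pt)
{
	tbl->pt = pt;
	pt->refcnt++;
	return 0;
}

/* Table destruction releases the reference. */
static int
sketch_table_destroy(struct sketch_table *tbl)
{
	if (tbl->pt == NULL)
		return -1; /* already destroyed */
	tbl->pt->refcnt--;
	tbl->pt = NULL;
	return 0;
}

/* A template still referenced by any table cannot be destroyed,
 * mirroring the ordering requirement in the documentation above. */
static int
sketch_pattern_template_destroy(struct sketch_pattern_template *pt)
{
	return pt->refcnt > 0 ? -1 : 0;
}
```

The same ordering applies to actions templates: destroy all tables first, then the templates they were built from.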
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index e7e6478bed..ab942117d0 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1431,3 +1431,144 @@ rte_flow_configure(uint16_t port_id,
				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_pattern_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->pattern_template_create)) {
+		template = ops->pattern_template_create(dev, template_attr,
+							pattern, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->pattern_template_destroy)) {
+		return flow_err(port_id,
+				ops->pattern_template_destroy(dev, pattern_template, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_actions_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->actions_template_create)) {
+		template = ops->actions_template_create(dev, template_attr,
+							actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+		struct rte_flow_actions_template *actions_template,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->actions_template_destroy)) {
+		return flow_err(port_id,
+				ops->actions_template_destroy(dev, actions_template, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_table *
+rte_flow_table_create(uint16_t port_id,
+		const struct rte_flow_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_table *table;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->table_create)) {
+		table = ops->table_create(dev, table_attr,
+					  pattern_templates, nb_pattern_templates,
+					  actions_templates, nb_actions_templates,
+					  error);
+		if (table == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return table;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_table_destroy(uint16_t port_id,
+		struct rte_flow_table *table,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->table_destroy)) {
+		return flow_err(port_id,
+				ops->table_destroy(dev, table, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index f3c7159484..a65f5d4e6a 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4935,6 +4935,280 @@ rte_flow_configure(uint16_t port_id,
		   const struct rte_flow_port_attr *port_attr,
		   struct rte_flow_error *error);
 
+/**
+ * Opaque type returned after successful creation of pattern template.
+ * This handle can be used to manage the created pattern template.
+ */
+struct rte_flow_pattern_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow pattern template attributes.
+ */
+__extension__
+struct rte_flow_pattern_template_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Relaxed matching policy.
+	 * - PMD may match only on items with mask member set and skip
+	 *   matching on protocol layers specified without any masks.
+	 * - If not set, PMD will match on protocol layers
+	 *   specified without any masks as well.
+	 * - Packet data must be stacked in the same order as the
+	 *   protocol layers to match inside packets, starting from the lowest.
+	 */
+	uint32_t relaxed_matching:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create pattern template.
+ *
+ * The pattern template defines common matching fields without values.
+ * For example, matching on 5 tuple TCP flow, the template will be + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port), + * while values for each rule will be set during the flow rule creation. + * The number and order of items in the template must be the same + * at the rule creation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] template_attr + * Pattern template attributes. + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * The spec member of an item is not used unless the end member is used. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. + */ +__rte_experimental +struct rte_flow_pattern_template * +rte_flow_pattern_template_create(uint16_t port_id, + const struct rte_flow_pattern_template_attr *template_attr, + const struct rte_flow_item pattern[], + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy pattern template. + * + * This function may be called only when + * there are no more tables referencing this template. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] pattern_template + * Handle of the template to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_pattern_template_destroy(uint16_t port_id, + struct rte_flow_pattern_template *pattern_template, + struct rte_flow_error *error); + +/** + * Opaque type returned after successful creation of actions template. + * This handle can be used to manage the created actions template. 
+ */ +struct rte_flow_actions_template; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Flow actions template attributes. + */ +struct rte_flow_actions_template_attr { + /** + * Version of the struct layout, should be 0. + */ + uint32_t version; + /* No attributes so far. */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create actions template. + * + * The actions template holds a list of action types without values. + * For example, the template to change TCP ports is TCP(s_port + d_port), + * while values for each rule will be set during the flow rule creation. + * The number and order of actions in the template must be the same + * at the rule creation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] template_attr + * Template attributes. + * @param[in] actions + * Associated actions (list terminated by the END action). + * The spec member is only used if @p masks spec is non-zero. + * @param[in] masks + * List of actions that marks which of the action's member is constant. + * A mask has the same format as the corresponding action. + * If the action field in @p masks is not 0, + * the corresponding value in an action from @p actions will be the part + * of the template and used in all flow rules. + * The order of actions in @p masks is the same as in @p actions. + * In case of indirect actions present in @p actions, + * the actual action type should be present in @p mask. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. 
+ */ +__rte_experimental +struct rte_flow_actions_template * +rte_flow_actions_template_create(uint16_t port_id, + const struct rte_flow_actions_template_attr *template_attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy actions template. + * + * This function may be called only when + * there are no more tables referencing this template. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] actions_template + * Handle to the template to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_actions_template_destroy(uint16_t port_id, + struct rte_flow_actions_template *actions_template, + struct rte_flow_error *error); + +/** + * Opaque type returned after successful creation of table. + * This handle can be used to manage the created table. + */ +struct rte_flow_table; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Table attributes. + */ +struct rte_flow_table_attr { + /** + * Version of the struct layout, should be 0. + */ + uint32_t version; + /** + * Flow attributes to be used in each rule generated from this table. + */ + struct rte_flow_attr flow_attr; + /** + * Maximum number of flow rules that this table holds. + */ + uint32_t nb_flows; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create table. + * + * A template table consists of multiple pattern templates and actions + * templates associated with a single set of rule attributes (group ID, + * priority and traffic direction). 
+ * + * Each rule is free to use any combination of pattern and actions templates + * and specify particular values for items and actions it would like to change. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] table_attr + * Table attributes. + * @param[in] pattern_templates + * Array of pattern templates to be used in this table. + * @param[in] nb_pattern_templates + * The number of pattern templates in the pattern_templates array. + * @param[in] actions_templates + * Array of actions templates to be used in this table. + * @param[in] nb_actions_templates + * The number of actions templates in the actions_templates array. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. + */ +__rte_experimental +struct rte_flow_table * +rte_flow_table_create(uint16_t port_id, + const struct rte_flow_table_attr *table_attr, + struct rte_flow_pattern_template *pattern_templates[], + uint8_t nb_pattern_templates, + struct rte_flow_actions_template *actions_templates[], + uint8_t nb_actions_templates, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy table. + * + * This function may be called only when + * there are no more flow rules referencing this table. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] table + * Handle to the table to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +__rte_experimental +int +rte_flow_table_destroy(uint16_t port_id, + struct rte_flow_table *table, + struct rte_flow_error *error); + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 503700aec4..04b0960825 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -162,6 +162,43 @@ struct rte_flow_ops { (struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr, struct rte_flow_error *err); + /** See rte_flow_pattern_template_create() */ + struct rte_flow_pattern_template *(*pattern_template_create) + (struct rte_eth_dev *dev, + const struct rte_flow_pattern_template_attr *template_attr, + const struct rte_flow_item pattern[], + struct rte_flow_error *err); + /** See rte_flow_pattern_template_destroy() */ + int (*pattern_template_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_pattern_template *pattern_template, + struct rte_flow_error *err); + /** See rte_flow_actions_template_create() */ + struct rte_flow_actions_template *(*actions_template_create) + (struct rte_eth_dev *dev, + const struct rte_flow_actions_template_attr *template_attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *err); + /** See rte_flow_actions_template_destroy() */ + int (*actions_template_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_actions_template *actions_template, + struct rte_flow_error *err); + /** See rte_flow_table_create() */ + struct rte_flow_table *(*table_create) + (struct rte_eth_dev *dev, + const struct rte_flow_table_attr *table_attr, + struct rte_flow_pattern_template *pattern_templates[], + uint8_t nb_pattern_templates, + struct rte_flow_actions_template *actions_templates[], + uint8_t nb_actions_templates, + struct rte_flow_error *err); + /** See rte_flow_table_destroy() */ + int (*table_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_table *table, + struct rte_flow_error *err); }; /** diff --git 
a/lib/ethdev/version.map b/lib/ethdev/version.map index 59785c3634..01c004d491 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -260,6 +260,12 @@ EXPERIMENTAL { # added in 22.03 rte_flow_info_get; rte_flow_configure; + rte_flow_pattern_template_create; + rte_flow_pattern_template_destroy; + rte_flow_actions_template_create; + rte_flow_actions_template_destroy; + rte_flow_table_create; + rte_flow_table_destroy; }; INTERNAL {

From patchwork Sun Feb 6 03:25:19 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106902
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations
Date: Sun, 6 Feb 2022 05:25:19 +0200
Message-ID: <20220206032526.816079-4-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20220206032526.816079-1-akozyrev@nvidia.com>
References: <20220118153027.3947448-1-akozyrev@nvidia.com> <20220206032526.816079-1-akozyrev@nvidia.com>
A new, faster, queue-based flow rules management mechanism is needed for applications offloading rules inside the datapath. This asynchronous and lockless mechanism frees the CPU for further packet processing and reduces the performance impact of the flow rules creation/destruction on the datapath.

Note that queues are not thread-safe and the queue should be accessed from the same thread for all queue operations. It is the responsibility of the app to sync the queue functions in case of multi-threaded access to the same queue.

The rte_flow_q_flow_create() function enqueues a flow creation to the requested queue. It benefits from already configured resources and sets unique values on top of item and action templates. A flow rule is enqueued on the specified flow queue and offloaded asynchronously to the hardware. The function returns immediately to spare CPU for further packet processing. The application must invoke the rte_flow_q_pull() function to complete the flow rule operation offloading, to clear the queue, and to receive the operation status. The rte_flow_q_flow_destroy() function enqueues a flow destruction to the requested queue.
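The enqueue/push/pull mechanism described in the commit message can be sketched in C. This is an illustrative fragment, not part of the patch: queue 0, `MAX_RES`, and the `handle_result()` callback are hypothetical, `struct rte_flow_q_ops_attr` is zero-initialized rather than showing individual fields, and the return value of `rte_flow_q_pull()` is assumed to be the number of pulled results on success.

```c
#include <stdint.h>
#include <rte_flow.h>

/* Hypothetical application callback inspecting one pulled result. */
static void handle_result(const struct rte_flow_q_op_res *res);

#define MAX_RES 32 /* illustrative result batch size */

/* Illustrative datapath fragment (not part of the patch): enqueue one
 * rule creation on queue 0, push it to HW, then pull completions. */
static void
offload_one_rule(uint16_t port_id, struct rte_flow_table *table,
		 const struct rte_flow_item pattern[],
		 const struct rte_flow_action actions[])
{
	struct rte_flow_error error;
	const struct rte_flow_q_ops_attr attr = { 0 }; /* not postponed */
	struct rte_flow_q_op_res res[MAX_RES];
	struct rte_flow *flow;
	int n, i;

	/* Returns immediately; the rule is offloaded asynchronously.
	 * The handle is valid once enqueued and must be destroyed later
	 * even if the hardware rejects the rule. */
	flow = rte_flow_q_flow_create(port_id, 0, &attr, table,
				      pattern, 0, actions, 0, &error);
	if (flow == NULL)
		return;
	/* Flush any operations still batched in the queue. */
	rte_flow_q_push(port_id, 0, &error);
	/* Pull results in time to avoid queue overflow. */
	n = rte_flow_q_pull(port_id, 0, res, MAX_RES, &error);
	for (i = 0; i < n; i++)
		handle_result(&res[i]);
}
```

User data supplied in the operation attributes comes back in each result, which is how the application matches completions to the operations it enqueued.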
Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 doc/guides/prog_guide/img/rte_flow_q_init.svg |  71 ++++
 .../prog_guide/img/rte_flow_q_usage.svg       |  60 +++
 doc/guides/prog_guide/rte_flow.rst            | 159 +++++++-
 doc/guides/rel_notes/release_22_03.rst        |   8 +
 lib/ethdev/rte_flow.c                         | 173 ++++++++-
 lib/ethdev/rte_flow.h                         | 342 ++++++++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  55 +++
 lib/ethdev/version.map                        |   7 +
 8 files changed, 873 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg new file mode 100644 index 0000000000..2080bf4c04 --- /dev/null +++ b/doc/guides/prog_guide/img/rte_flow_q_init.svg @@ -0,0 +1,71 @@ [SVG markup not reproduced; the figure shows the initialization sequence: rte_eal_init() -> rte_eth_dev_configure() -> rte_flow_configure() -> rte_flow_pattern_template_create() -> rte_flow_actions_template_create() -> rte_flow_table_create() -> rte_eth_dev_start()]

diff --git a/doc/guides/prog_guide/img/rte_flow_q_usage.svg b/doc/guides/prog_guide/img/rte_flow_q_usage.svg new file mode 100644 index 0000000000..113da764ba --- /dev/null +++ b/doc/guides/prog_guide/img/rte_flow_q_usage.svg @-0,0 +1,60 @@ [SVG markup not reproduced; the figure shows the datapath loop: rte_eth_rx_burst() -> analyze packet -> add new rule? -> rte_flow_q_create_flow() -> destroy the rule? -> rte_flow_q_destroy_flow() -> rte_flow_q_push()/rte_flow_q_pull() -> more packets?]

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index b7799c5abe..734294e65d 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -3607,12 +3607,16 @@ Hints about the expected number of counters or meters in an application, for example, allow PMD to prepare and optimize NIC memory layout in advance. ``rte_flow_configure()`` must be called before any flow rule is created, but after an Ethernet device is configured. +It also creates flow queues for asynchronous flow rules operations via +queue-based API, see `Asynchronous operations`_ section. .. code-block:: c int rte_flow_configure(uint16_t port_id, const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error); Information about resources that can benefit from pre-allocation can be @@ -3737,7 +3741,7 @@ and pattern and actions templates are created. .. code-block:: c - rte_flow_configure(port, *port_attr, *error); + rte_flow_configure(port, *port_attr, nb_queue, *queue_attr, *error); struct rte_flow_pattern_template *pattern_templates[1] = { rte_flow_pattern_template_create(port, &itr, &pattern, &error)}; @@ -3750,6 +3754,159 @@ and pattern and actions templates are created. *actions_templates, nb_actions_templates, *error); +Asynchronous operations +----------------------- + +Flow rules management can be done via special lockless flow management queues. +- Queue operations are asynchronous and not thread-safe. +- Operations can thus be invoked by the app's datapath, +packet processing can continue while queue operations are processed by NIC. +- The queue number is configured at initialization stage.
+- Available operation types: rule creation, rule destruction, +indirect rule creation, indirect rule destruction, indirect rule update. +- Operations may be reordered within a queue. +- Operations can be postponed and pushed to NIC in batches. +- Results pulling must be done on time to avoid queue overflows. +- User data is returned as part of the result to identify an operation. +- Flow handle is valid once the creation operation is enqueued and must be +destroyed even if the operation is not successful and the rule is not inserted. + +The asynchronous flow rule insertion logic can be broken into two phases. + +1. Initialization stage as shown here: + +.. _figure_rte_flow_q_init: + +.. figure:: img/rte_flow_q_init.* + +2. Main loop as presented on a datapath application example: + +.. _figure_rte_flow_q_usage: + +.. figure:: img/rte_flow_q_usage.* + +Enqueue creation operation +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Enqueueing a flow rule creation operation is similar to simple creation. + +.. code-block:: c + + struct rte_flow * + rte_flow_q_flow_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_table *table, + const struct rte_flow_item pattern[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + struct rte_flow_error *error); + +A valid handle in case of success is returned. It must be destroyed later +by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by HW. + +Enqueue destruction operation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Enqueueing a flow rule destruction operation is similar to simple destruction. + +.. code-block:: c + + int + rte_flow_q_flow_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow *flow, + struct rte_flow_error *error); + +Push enqueued operations +~~~~~~~~~~~~~~~~~~~~~~~~ + +Pushing all internally stored rules from a queue to the NIC. + +.. 
code-block:: c + + int + rte_flow_q_push(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_error *error); + +There is the postpone attribute in the queue operation attributes. +When it is set, multiple operations can be bulked together and not sent to HW +right away to save SW/HW interactions and prioritize throughput over latency. +The application must invoke this function to actually push all outstanding +operations to HW in this case. + +Pull enqueued operations +~~~~~~~~~~~~~~~~~~~~~~~~ + +Pulling asynchronous operations results. + +The application must invoke this function in order to complete asynchronous +flow rule operations and to receive flow rule operations statuses. + +.. code-block:: c + + int + rte_flow_q_pull(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error); + +Multiple outstanding operation results can be pulled simultaneously. +User data may be provided during a flow creation/destruction in order +to distinguish between multiple operations. User data is returned as part +of the result to provide a method to detect which operation is completed. + +Enqueue indirect action creation operation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Asynchronous version of indirect action creation API. + +.. code-block:: c + + struct rte_flow_action_handle * + rte_flow_q_action_handle_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + struct rte_flow_error *error); + +A valid handle in case of success is returned. It must be destroyed later by +calling ``rte_flow_q_action_handle_destroy()`` even if the rule is rejected. + +Enqueue indirect action destruction operation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Asynchronous version of indirect action destruction API. + +.. 
code-block:: c + + int + rte_flow_q_action_handle_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + struct rte_flow_error *error); + +Enqueue indirect action update operation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Asynchronous version of indirect action update API. + +.. code-block:: c + + int + rte_flow_q_action_handle_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + struct rte_flow_error *error); + .. _flow_isolated_mode: Flow isolated mode diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst index d23d1591df..80a85124e6 100644 --- a/doc/guides/rel_notes/release_22_03.rst +++ b/doc/guides/rel_notes/release_22_03.rst @@ -67,6 +67,14 @@ New Features ``rte_flow_table_destroy``, ``rte_flow_pattern_template_destroy`` and ``rte_flow_actions_template_destroy``. +* ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy`` API + to enqueue flow creation/destruction operations asynchronously as well as + ``rte_flow_q_pull`` to poll and retrieve results of these operations and + ``rte_flow_q_push`` to push all the in-flight operations to the NIC. + Introduced asynchronous API for indirect actions management as well: + ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy`` and + ``rte_flow_q_action_handle_update``. + * **Updated AF_XDP PMD** * Added support for libxdp >=v1.2.2.
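The postpone attribute described in the "Push enqueued operations" section can be sketched as a small batching helper. This is illustrative only, not part of the patch: the `.postpone` bit-field name is an assumption based on the documentation text, and the helper itself (`destroy_flows_batched`) is hypothetical.

```c
#include <stdint.h>
#include <rte_flow.h>

/* Illustrative batching sketch (not part of the patch): enqueue several
 * postponed destructions and flush them with a single push. The
 * .postpone field name is assumed from the queue operation attributes
 * described in the documentation; check the final header for the layout. */
static void
destroy_flows_batched(uint16_t port_id, uint32_t queue_id,
		      struct rte_flow *flows[], int nb_flows)
{
	struct rte_flow_error error;
	const struct rte_flow_q_ops_attr attr = {
		.postpone = 1, /* keep operations batched in SW until pushed */
	};
	int i;

	for (i = 0; i < nb_flows; i++)
		rte_flow_q_flow_destroy(port_id, queue_id, &attr,
					flows[i], &error);
	/* One SW/HW interaction for the whole batch: throughput over
	 * latency, as the documentation above describes. */
	rte_flow_q_push(port_id, queue_id, &error);
}
```

The completions for the batched destructions still arrive through `rte_flow_q_pull()` on the same queue.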
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index ab942117d0..127dbb13cb 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -1415,6 +1415,8 @@ rte_flow_info_get(uint16_t port_id, int rte_flow_configure(uint16_t port_id, const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; @@ -1424,7 +1426,7 @@ rte_flow_configure(uint16_t port_id, return -rte_errno; if (likely(!!ops->configure)) { return flow_err(port_id, - ops->configure(dev, port_attr, error), + ops->configure(dev, port_attr, nb_queue, queue_attr, error), error); } return rte_flow_error_set(error, ENOTSUP, @@ -1572,3 +1574,172 @@ rte_flow_table_destroy(uint16_t port_id, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, rte_strerror(ENOTSUP)); } + +struct rte_flow * +rte_flow_q_flow_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_table *table, + const struct rte_flow_item pattern[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow *flow; + + if (unlikely(!ops)) + return NULL; + if (likely(!!ops->q_flow_create)) { + flow = ops->q_flow_create(dev, queue_id, q_ops_attr, table, + pattern, pattern_template_index, + actions, actions_template_index, + error); + if (flow == NULL) + flow_err(port_id, -rte_errno, error); + return flow; + } + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); + return NULL; +} + +int +rte_flow_q_flow_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow *flow, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = 
&rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->q_flow_destroy)) { + return flow_err(port_id, + ops->q_flow_destroy(dev, queue_id, + q_ops_attr, flow, error), + error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} + +struct rte_flow_action_handle * +rte_flow_q_action_handle_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow_action_handle *handle; + + if (unlikely(!ops)) + return NULL; + if (unlikely(!ops->q_action_handle_create)) { + rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); + return NULL; + } + handle = ops->q_action_handle_create(dev, queue_id, q_ops_attr, + indir_action_conf, action, error); + if (handle == NULL) + flow_err(port_id, -rte_errno, error); + return handle; +} + +int +rte_flow_q_action_handle_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (unlikely(!ops->q_action_handle_destroy)) + return rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOSYS)); + ret = ops->q_action_handle_destroy(dev, queue_id, q_ops_attr, + action_handle, error); + return flow_err(port_id, ret, error); +} + +int +rte_flow_q_action_handle_update(uint16_t port_id, + uint32_t 
queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (unlikely(!ops->q_action_handle_update)) + return rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOSYS)); + ret = ops->q_action_handle_update(dev, queue_id, q_ops_attr, + action_handle, update, error); + return flow_err(port_id, ret, error); +} + +int +rte_flow_q_push(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->q_push)) { + return flow_err(port_id, + ops->q_push(dev, queue_id, error), + error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} + +int +rte_flow_q_pull(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->q_pull)) { + ret = ops->q_pull(dev, queue_id, res, n_res, error); + return ret ? ret : flow_err(port_id, ret, error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index a65f5d4e6a..25a6ad5b64 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4883,6 +4883,21 @@ struct rte_flow_port_attr { uint32_t nb_meters; }; +/** + * Flow engine queue configuration. 
+ */ +__extension__ +struct rte_flow_queue_attr { + /** + * Version of the struct layout, should be 0. + */ + uint32_t version; + /** + * Number of flow rule operations a queue can hold. + */ + uint32_t size; +}; + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice. @@ -4922,6 +4937,11 @@ rte_flow_info_get(uint16_t port_id, * Port identifier of Ethernet device. * @param[in] port_attr * Port configuration attributes. + * @param[in] nb_queue + * Number of flow queues to be configured. + * @param[in] queue_attr + * Array that holds attributes for each flow queue. + * The number of elements is given by @p nb_queue. * @param[out] error * Perform verbose error reporting if not NULL. * PMDs initialize this structure in case of error only. @@ -4933,6 +4953,8 @@ __rte_experimental int rte_flow_configure(uint16_t port_id, const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error); /** @@ -5209,6 +5231,326 @@ rte_flow_table_destroy(uint16_t port_id, struct rte_flow_table *table, struct rte_flow_error *error); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Queue operation attributes. + */ +struct rte_flow_q_ops_attr { + /** + * Version of the struct layout, should be 0. + */ + uint32_t version; + /** + * The user data that will be returned on the completion events. + */ + void *user_data; + /** + * When set, the requested action will not be sent to the HW immediately. + * The application must call rte_flow_q_push() to actually send it. + */ + uint32_t postpone:1; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue rule creation operation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue used to insert the rule. + * @param[in] q_ops_attr + * Rule creation operation attributes.
+ * @param[in] table + * Table to select templates from. + * @param[in] pattern + * List of pattern items to be used. + * The list order should match the order in the pattern template. + * The spec is the only relevant member of the item that is being used. + * @param[in] pattern_template_index + * Pattern template index in the table. + * @param[in] actions + * List of actions to be used. + * The list order should match the order in the actions template. + * @param[in] actions_template_index + * Actions template index in the table. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. + * The rule handle doesn't mean that the rule was offloaded. + * Only completion result indicates that the rule was offloaded. + */ +__rte_experimental +struct rte_flow * +rte_flow_q_flow_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_table *table, + const struct rte_flow_item pattern[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue rule destruction operation. + * + * This function enqueues a destruction operation on the queue. + * Application should assume that after calling this function + * the rule handle is not valid anymore. + * Completion indicates the full removal of the rule from the HW. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue which is used to destroy the rule. + * This must match the queue on which the rule was created. + * @param[in] q_ops_attr + * Rule destroy operation attributes. + * @param[in] flow + * Flow handle to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. 
+ * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_q_flow_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow *flow, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action creation operation. + * @see rte_flow_action_handle_create + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to create the indirect action. + * @param[in] q_ops_attr + * Queue operation attributes. + * @param[in] indir_action_conf + * Action configuration for the indirect action object creation. + * @param[in] action + * Specific configuration of the indirect action object. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * A valid handle in case of success, NULL otherwise and rte_errno is set. + */ +__rte_experimental +struct rte_flow_action_handle * +rte_flow_q_action_handle_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action destruction operation. + * The destroy queue must be the same + * as the queue on which the action was created.
+ * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to destroy the indirect action. + * @param[in] q_ops_attr + * Queue operation attributes. + * @param[in] action_handle + * Handle for the indirect action object to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if action pointed by *action* handle was not found. + * - (-EBUSY) if action pointed by *action* handle still used by some rules + * rte_errno is also set. + */ +__rte_experimental +int +rte_flow_q_action_handle_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action update operation. + * @see rte_flow_action_handle_create + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to update the indirect action. + * @param[in] q_ops_attr + * Queue operation attributes. + * @param[in] action_handle + * Handle for the indirect action object to be updated. + * @param[in] update + * Update profile specification used to modify the action pointed by handle. + * *update* can either be of the same type as the immediate action used when + * the *handle* was created, or a wrapper structure that includes the action + * configuration to be updated and bit fields indicating which fields of the + * action to update. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only.
+ * + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if action pointed by *action* handle was not found. + * - (-EBUSY) if action pointed by *action* handle still used by some rules + * rte_errno is also set. + */ +__rte_experimental +int +rte_flow_q_action_handle_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Push all internally stored rules to the HW. + * Postponed rules are rules that were inserted with the postpone flag set. + * Can be used to notify the HW about a batch of rules prepared by the SW to + * reduce the number of communications between the HW and SW. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue to be pushed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_q_push(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Queue operation status. + */ +enum rte_flow_q_op_status { + /** + * The operation was completed successfully. + */ + RTE_FLOW_Q_OP_SUCCESS, + /** + * The operation was not completed successfully. + */ + RTE_FLOW_Q_OP_ERROR, +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Queue operation results. + */ +__extension__ +struct rte_flow_q_op_res { + /** + * Version of the struct layout, should be 0.
+ */ + uint32_t version; + /** + * Returns the status of the operation that this completion signals. + */ + enum rte_flow_q_op_status status; + /** + * The user data that will be returned on the completion events. + */ + void *user_data; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Pull a rte flow operation. + * The application must invoke this function in order to complete + * the flow rule offloading and to retrieve the flow rule operation status. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue which is used to pull the operation. + * @param[out] res + * Array of results that will be set. + * @param[in] n_res + * Maximum number of results that can be returned. + * This value is equal to the size of the res array. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Number of results that were pulled, + * a negative errno value otherwise and rte_errno is set. 
+ */ +__rte_experimental +int +rte_flow_q_pull(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error); + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 04b0960825..0edd933bf3 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -161,6 +161,8 @@ struct rte_flow_ops { int (*configure) (struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *err); /** See rte_flow_pattern_template_create() */ struct rte_flow_pattern_template *(*pattern_template_create) @@ -199,6 +201,59 @@ struct rte_flow_ops { (struct rte_eth_dev *dev, struct rte_flow_table *table, struct rte_flow_error *err); + /** See rte_flow_q_flow_create() */ + struct rte_flow *(*q_flow_create) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_table *table, + const struct rte_flow_item pattern[], + uint8_t pattern_template_index, + const struct rte_flow_action actions[], + uint8_t actions_template_index, + struct rte_flow_error *err); + /** See rte_flow_q_flow_destroy() */ + int (*q_flow_destroy) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow *flow, + struct rte_flow_error *err); + /** See rte_flow_q_action_handle_create() */ + struct rte_flow_action_handle *(*q_action_handle_create) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + struct rte_flow_error *err); + /** See rte_flow_q_action_handle_destroy() */ + int (*q_action_handle_destroy) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle 
*action_handle, + struct rte_flow_error *error); + /** See rte_flow_q_action_handle_update() */ + int (*q_action_handle_update) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + struct rte_flow_error *error); + /** See rte_flow_q_push() */ + int (*q_push) + (struct rte_eth_dev *dev, + uint32_t queue_id, + struct rte_flow_error *err); + /** See rte_flow_q_pull() */ + int (*q_pull) + (struct rte_eth_dev *dev, + uint32_t queue_id, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error); }; /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 01c004d491..f431ef0a5d 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -266,6 +266,13 @@ EXPERIMENTAL { rte_flow_actions_template_destroy; rte_flow_table_create; rte_flow_table_destroy; + rte_flow_q_flow_create; + rte_flow_q_flow_destroy; + rte_flow_q_action_handle_create; + rte_flow_q_action_handle_destroy; + rte_flow_q_action_handle_update; + rte_flow_q_push; + rte_flow_q_pull; }; INTERNAL {

From patchwork Sun Feb 6 03:25:20 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106896
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v3 04/10] app/testpmd: implement rte flow configuration
Date: Sun, 6 Feb 2022 05:25:20 +0200
Message-ID: <20220206032526.816079-5-akozyrev@nvidia.com>
In-Reply-To: <20220206032526.816079-1-akozyrev@nvidia.com>

Add testpmd support for the rte_flow_configure API. Provide the command line interface for the Flow management.
Usage example: flow configure 0 queues_number 8 queues_size 256 Implement rte_flow_info_get API to get available resources: Usage example: flow info 0 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 126 +++++++++++++++++++- app/test-pmd/config.c | 53 ++++++++ app/test-pmd/testpmd.h | 7 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 59 ++++++++- 4 files changed, 242 insertions(+), 3 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index bbaf18d76e..bbf9f137a0 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -72,6 +72,8 @@ enum index { /* Top-level command. */ FLOW, /* Sub-level commands. */ + INFO, + CONFIGURE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -122,6 +124,13 @@ enum index { DUMP_ALL, DUMP_ONE, + /* Configure arguments */ + CONFIG_QUEUES_NUMBER, + CONFIG_QUEUES_SIZE, + CONFIG_COUNTERS_NUMBER, + CONFIG_AGING_COUNTERS_NUMBER, + CONFIG_METERS_NUMBER, + /* Indirect action arguments */ INDIRECT_ACTION_CREATE, INDIRECT_ACTION_UPDATE, @@ -846,6 +855,11 @@ struct buffer { enum index command; /**< Flow command. */ portid_t port; /**< Affected port ID. */ union { + struct { + struct rte_flow_port_attr port_attr; + uint32_t nb_queue; + struct rte_flow_queue_attr queue_attr; + } configure; /**< Configuration arguments. 
*/ struct { uint32_t *action_id; uint32_t action_id_n; @@ -927,6 +941,16 @@ static const enum index next_flex_item[] = { ZERO, }; +static const enum index next_config_attr[] = { + CONFIG_QUEUES_NUMBER, + CONFIG_QUEUES_SIZE, + CONFIG_COUNTERS_NUMBER, + CONFIG_AGING_COUNTERS_NUMBER, + CONFIG_METERS_NUMBER, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -1962,6 +1986,9 @@ static int parse_aged(struct context *, const struct token *, static int parse_isolate(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_configure(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2187,7 +2214,9 @@ static const struct token token_list[] = { .type = "{command} {port_id} [{arg} [...]]", .help = "manage ingress/egress flow rules", .next = NEXT(NEXT_ENTRY - (INDIRECT_ACTION, + (INFO, + CONFIGURE, + INDIRECT_ACTION, VALIDATE, CREATE, DESTROY, @@ -2202,6 +2231,65 @@ static const struct token token_list[] = { .call = parse_init, }, /* Top-level command. */ + [INFO] = { + .name = "info", + .help = "get information about flow engine", + .next = NEXT(NEXT_ENTRY(END), + NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_configure, + }, + /* Top-level command. */ + [CONFIGURE] = { + .name = "configure", + .help = "configure flow engine", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_configure, + }, + /* Configure arguments. 
*/ + [CONFIG_QUEUES_NUMBER] = { + .name = "queues_number", + .help = "number of queues", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.nb_queue)), + }, + [CONFIG_QUEUES_SIZE] = { + .name = "queues_size", + .help = "number of elements in queues", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.queue_attr.size)), + }, + [CONFIG_COUNTERS_NUMBER] = { + .name = "counters_number", + .help = "number of counters", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_counters)), + }, + [CONFIG_AGING_COUNTERS_NUMBER] = { + .name = "aging_counters_number", + .help = "number of aging flows", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_aging_flows)), + }, + [CONFIG_METERS_NUMBER] = { + .name = "meters_number", + .help = "number of meters", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_meters)), + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -7465,6 +7553,33 @@ parse_isolate(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for info/configure command. */ +static int +parse_configure(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != INFO && ctx->curr != CONFIGURE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + } + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -8693,6 +8808,15 @@ static void cmd_flow_parsed(const struct buffer *in) { switch (in->command) { + case INFO: + port_flow_get_info(in->port); + break; + case CONFIGURE: + port_flow_configure(in->port, + &in->args.configure.port_attr, + in->args.configure.nb_queue, + &in->args.configure.queue_attr); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 1722d6c8f8..eb3fa8a8cc 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1595,6 +1595,59 @@ action_alloc(portid_t port_id, uint32_t id, return 0; } +/** Get info about flow management resources. */ +int +port_flow_get_info(portid_t port_id) +{ + struct rte_flow_port_attr port_attr = { 0 }; + struct rte_flow_error error; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x99, sizeof(error)); + if (rte_flow_info_get(port_id, &port_attr, &error)) + return port_flow_complain(&error); + printf("Pre-configurable resources on port %u:\n" + "Number of counters: %d\n" + "Number of aging flows: %d\n" + "Number of meters: %d\n", + port_id, port_attr.nb_counters, + port_attr.nb_aging_flows, port_attr.nb_meters); + return 0; +} + +/** Configure flow management resources. 
*/ +int +port_flow_configure(portid_t port_id, + const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr) +{ + struct rte_port *port; + struct rte_flow_error error; + const struct rte_flow_queue_attr *attr_list[nb_queue]; + int std_queue; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + port->queue_nb = nb_queue; + port->queue_sz = queue_attr->size; + for (std_queue = 0; std_queue < nb_queue; std_queue++) + attr_list[std_queue] = queue_attr; + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x66, sizeof(error)); + if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error)) + return port_flow_complain(&error); + printf("Configure flows on port %u: " + "number of queues %d with %d elements\n", + port_id, nb_queue, queue_attr->size); + return 0; +} + /** Create indirect action */ int port_action_handle_create(portid_t port_id, uint32_t id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 9967825044..096b6825eb 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -243,6 +243,8 @@ struct rte_port { struct rte_eth_txconf tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */ struct rte_ether_addr *mc_addr_pool; /**< pool of multicast addrs */ uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */ + queueid_t queue_nb; /**< nb. of queues for flow rules */ + uint32_t queue_sz; /**< size of a queue for flow rules */ uint8_t slave_flag; /**< bonding slave port */ struct port_flow *flow_list; /**< Associated flows. 
*/ struct port_indirect_action *actions_list; @@ -885,6 +887,11 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id, uint32_t id); int port_action_handle_update(portid_t port_id, uint32_t id, const struct rte_flow_action *action); +int port_flow_get_info(portid_t port_id); +int port_flow_configure(portid_t port_id, + const struct rte_flow_port_attr *port_attr, + uint16_t nb_queue, + const struct rte_flow_queue_attr *queue_attr); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 94792d88cc..d452fcfce3 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3285,8 +3285,8 @@ Flow rules management --------------------- Control of the generic flow API (*rte_flow*) is fully exposed through the -``flow`` command (validation, creation, destruction, queries and operation -modes). +``flow`` command (configuration, validation, creation, destruction, queries +and operation modes). Considering *rte_flow* overlaps with all `Filter Functions`_, using both features simultaneously may cause undefined side-effects and is therefore @@ -3309,6 +3309,18 @@ The first parameter stands for the operation mode. Possible operations and their general syntax are described below. They are covered in detail in the following sections. +- Get info about flow engine:: + + flow info {port_id} + +- Configure flow engine:: + + flow configure {port_id} + [queues_number {number}] [queues_size {size}] + [counters_number {number}] + [aging_counters_number {number}] + [meters_number {number}] + - Check whether a flow rule can be created:: flow validate {port_id} @@ -3368,6 +3380,49 @@ following sections. 
flow tunnel list {port_id} +Retrieving info about flow management engine +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow info`` retrieves info on pre-configurable resources in the underlying +device to give a hint of possible values for flow engine configuration. + +``rte_flow_info_get()``:: + + flow info {port_id} + +If successful, it will show:: + + Pre-configurable resources on port #[...]: + Number of counters: #[...] + Number of aging flows: #[...] + Number of meters: #[...] + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +Configuring flow management engine +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow configure`` pre-allocates all the needed resources in the underlying +device to be used later at the flow creation. Flow queues are allocated as well +for asynchronous flow creation/destruction operations. It is bound to +``rte_flow_configure()``:: + + flow configure {port_id} + [queues_number {number}] [queues_size {size}] + [counters_number {number}] + [aging_counters_number {number}] + [meters_number {number}] + +If successful, it will show:: + + Configure flows on port #[...]: number of queues #[...] with #[...] elements + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] 
+ Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From patchwork Sun Feb 6 03:25:21 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106897
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v3 05/10] app/testpmd: implement rte flow template management
Date: Sun, 6 Feb 2022 05:25:21 +0200
Message-ID: <20220206032526.816079-6-akozyrev@nvidia.com>
In-Reply-To: <20220206032526.816079-1-akozyrev@nvidia.com>
References: <20220118153027.3947448-1-akozyrev@nvidia.com> <20220206032526.816079-1-akozyrev@nvidia.com>

Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for the template creation/destruction. Usage example:

  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 | 376 +++++++++++++++++++-
 app/test-pmd/config.c                       | 204 +++++++++++
 app/test-pmd/testpmd.h                      |  23 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  97 +++++
 4 files changed, 698 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index bbf9f137a0..3f0e73743a 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -56,6 +56,8 @@ enum index { COMMON_POLICY_ID, COMMON_FLEX_HANDLE, COMMON_FLEX_TOKEN, + COMMON_PATTERN_TEMPLATE_ID, + COMMON_ACTIONS_TEMPLATE_ID, /* TOP-level command. */ ADD, @@ -74,6 +76,8 @@ enum index { /* Sub-level commands. */ INFO, CONFIGURE, + PATTERN_TEMPLATE, + ACTIONS_TEMPLATE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -92,6 +96,22 @@ enum index { FLEX_ITEM_CREATE, FLEX_ITEM_DESTROY, + /* Pattern template arguments. */ + PATTERN_TEMPLATE_CREATE, + PATTERN_TEMPLATE_DESTROY, + PATTERN_TEMPLATE_CREATE_ID, + PATTERN_TEMPLATE_DESTROY_ID, + PATTERN_TEMPLATE_RELAXED_MATCHING, + PATTERN_TEMPLATE_SPEC, + + /* Actions template arguments.
*/ + ACTIONS_TEMPLATE_CREATE, + ACTIONS_TEMPLATE_DESTROY, + ACTIONS_TEMPLATE_CREATE_ID, + ACTIONS_TEMPLATE_DESTROY_ID, + ACTIONS_TEMPLATE_SPEC, + ACTIONS_TEMPLATE_MASK, + /* Tunnel arguments. */ TUNNEL_CREATE, TUNNEL_CREATE_TYPE, @@ -860,6 +880,10 @@ struct buffer { uint32_t nb_queue; struct rte_flow_queue_attr queue_attr; } configure; /**< Configuration arguments. */ + struct { + uint32_t *template_id; + uint32_t template_id_n; + } templ_destroy; /**< Template destroy arguments. */ struct { uint32_t *action_id; uint32_t action_id_n; @@ -868,10 +892,13 @@ struct buffer { uint32_t action_id; } ia; /* Indirect action query arguments */ struct { + uint32_t pat_templ_id; + uint32_t act_templ_id; struct rte_flow_attr attr; struct tunnel_ops tunnel_ops; struct rte_flow_item *pattern; struct rte_flow_action *actions; + struct rte_flow_action *masks; uint32_t pattern_n; uint32_t actions_n; uint8_t *data; @@ -951,6 +978,43 @@ static const enum index next_config_attr[] = { ZERO, }; +static const enum index next_pt_subcmd[] = { + PATTERN_TEMPLATE_CREATE, + PATTERN_TEMPLATE_DESTROY, + ZERO, +}; + +static const enum index next_pt_attr[] = { + PATTERN_TEMPLATE_CREATE_ID, + PATTERN_TEMPLATE_RELAXED_MATCHING, + PATTERN_TEMPLATE_SPEC, + ZERO, +}; + +static const enum index next_pt_destroy_attr[] = { + PATTERN_TEMPLATE_DESTROY_ID, + END, + ZERO, +}; + +static const enum index next_at_subcmd[] = { + ACTIONS_TEMPLATE_CREATE, + ACTIONS_TEMPLATE_DESTROY, + ZERO, +}; + +static const enum index next_at_attr[] = { + ACTIONS_TEMPLATE_CREATE_ID, + ACTIONS_TEMPLATE_SPEC, + ZERO, +}; + +static const enum index next_at_destroy_attr[] = { + ACTIONS_TEMPLATE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -1989,6 +2053,12 @@ static int parse_isolate(struct context *, const struct token *, static int parse_configure(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); 
+static int parse_template(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); +static int parse_template_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2058,6 +2128,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_modify_field_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_pattern_template_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); +static int comp_actions_template_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. */ static const struct token token_list[] = { @@ -2208,6 +2282,20 @@ static const struct token token_list[] = { .call = parse_flex_handle, .comp = comp_none, }, + [COMMON_PATTERN_TEMPLATE_ID] = { + .name = "{pattern_template_id}", + .type = "PATTERN_TEMPLATE_ID", + .help = "pattern template id", + .call = parse_int, + .comp = comp_pattern_template_id, + }, + [COMMON_ACTIONS_TEMPLATE_ID] = { + .name = "{actions_template_id}", + .type = "ACTIONS_TEMPLATE_ID", + .help = "actions template id", + .call = parse_int, + .comp = comp_actions_template_id, + }, /* Top-level command. */ [FLOW] = { .name = "flow", @@ -2216,6 +2304,8 @@ static const struct token token_list[] = { .next = NEXT(NEXT_ENTRY (INFO, CONFIGURE, + PATTERN_TEMPLATE, + ACTIONS_TEMPLATE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -2290,6 +2380,112 @@ static const struct token token_list[] = { args.configure.port_attr.nb_meters)), }, /* Top-level command. 
*/ + [PATTERN_TEMPLATE] = { + .name = "pattern_template", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage pattern templates", + .next = NEXT(next_pt_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template, + }, + /* Sub-level commands. */ + [PATTERN_TEMPLATE_CREATE] = { + .name = "create", + .help = "create pattern template", + .next = NEXT(next_pt_attr), + .call = parse_template, + }, + [PATTERN_TEMPLATE_DESTROY] = { + .name = "destroy", + .help = "destroy pattern template", + .next = NEXT(NEXT_ENTRY(PATTERN_TEMPLATE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template_destroy, + }, + /* Pattern template arguments. */ + [PATTERN_TEMPLATE_CREATE_ID] = { + .name = "pattern_template_id", + .help = "specify a pattern template id to create", + .next = NEXT(next_pt_attr, + NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.pat_templ_id)), + }, + [PATTERN_TEMPLATE_DESTROY_ID] = { + .name = "pattern_template", + .help = "specify a pattern template id to destroy", + .next = NEXT(next_pt_destroy_attr, + NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.templ_destroy.template_id)), + .call = parse_template_destroy, + }, + [PATTERN_TEMPLATE_RELAXED_MATCHING] = { + .name = "relaxed", + .help = "is matching relaxed", + .next = NEXT(next_pt_attr, + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY_BF(struct buffer, + args.vc.attr.reserved, 1)), + }, + [PATTERN_TEMPLATE_SPEC] = { + .name = "template", + .help = "specify item to create pattern template", + .next = NEXT(next_item), + }, + /* Top-level command. 
*/ + [ACTIONS_TEMPLATE] = { + .name = "actions_template", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage actions templates", + .next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template, + }, + /* Sub-level commands. */ + [ACTIONS_TEMPLATE_CREATE] = { + .name = "create", + .help = "create actions template", + .next = NEXT(next_at_attr), + .call = parse_template, + }, + [ACTIONS_TEMPLATE_DESTROY] = { + .name = "destroy", + .help = "destroy actions template", + .next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template_destroy, + }, + /* Actions template arguments. */ + [ACTIONS_TEMPLATE_CREATE_ID] = { + .name = "actions_template_id", + .help = "specify an actions template id to create", + .next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_MASK), + NEXT_ENTRY(ACTIONS_TEMPLATE_SPEC), + NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.act_templ_id)), + }, + [ACTIONS_TEMPLATE_DESTROY_ID] = { + .name = "actions_template", + .help = "specify an actions template id to destroy", + .next = NEXT(next_at_destroy_attr, + NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.templ_destroy.template_id)), + .call = parse_template_destroy, + }, + [ACTIONS_TEMPLATE_SPEC] = { + .name = "template", + .help = "specify action to create actions template", + .next = NEXT(next_action), + .call = parse_template, + }, + [ACTIONS_TEMPLATE_MASK] = { + .name = "mask", + .help = "specify action mask to create actions template", + .next = NEXT(next_action), + .call = parse_template, + }, + /* Top-level command. 
*/ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -2612,7 +2808,7 @@ static const struct token token_list[] = { .name = "end", .help = "end list of pattern items", .priv = PRIV_ITEM(END, 0), - .next = NEXT(NEXT_ENTRY(ACTIONS)), + .next = NEXT(NEXT_ENTRY(ACTIONS, END)), .call = parse_vc, }, [ITEM_VOID] = { @@ -5716,7 +5912,9 @@ parse_vc(struct context *ctx, const struct token *token, if (!out) return len; if (!out->command) { - if (ctx->curr != VALIDATE && ctx->curr != CREATE) + if (ctx->curr != VALIDATE && ctx->curr != CREATE && + ctx->curr != PATTERN_TEMPLATE_CREATE && + ctx->curr != ACTIONS_TEMPLATE_CREATE) return -1; if (sizeof(*out) > size) return -1; @@ -7580,6 +7778,114 @@ parse_configure(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for template create command. */ +static int +parse_template(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != PATTERN_TEMPLATE && + ctx->curr != ACTIONS_TEMPLATE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + return len; + } + switch (ctx->curr) { + case PATTERN_TEMPLATE_CREATE: + out->args.vc.pattern = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + out->args.vc.pat_templ_id = UINT32_MAX; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case ACTIONS_TEMPLATE_CREATE: + out->args.vc.act_templ_id = UINT32_MAX; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case ACTIONS_TEMPLATE_SPEC: + out->args.vc.actions = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + ctx->object = out->args.vc.actions; + ctx->objmask = NULL; + return len; + case ACTIONS_TEMPLATE_MASK: + out->args.vc.masks = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.vc.actions + + out->args.vc.actions_n), + sizeof(double)); + ctx->object = out->args.vc.masks; + ctx->objmask = NULL; + return len; + default: + return -1; + } +} + +/** Parse tokens for template destroy command. */ +static int +parse_template_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *template_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command || + out->command == PATTERN_TEMPLATE || + out->command == ACTIONS_TEMPLATE) { + if (ctx->curr != PATTERN_TEMPLATE_DESTROY && + ctx->curr != ACTIONS_TEMPLATE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.templ_destroy.template_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + template_id = out->args.templ_destroy.template_id + + out->args.templ_destroy.template_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -8549,6 +8855,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token, return -1; } +/** Complete available pattern template IDs. */ +static int +comp_pattern_template_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_template *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->pattern_templ_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + +/** Complete available actions template IDs. 
*/ +static int +comp_actions_template_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_template *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->actions_templ_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + /** Internal context. */ static struct context cmd_flow_context; @@ -8817,6 +9171,24 @@ cmd_flow_parsed(const struct buffer *in) in->args.configure.nb_queue, &in->args.configure.queue_attr); break; + case PATTERN_TEMPLATE_CREATE: + port_flow_pattern_template_create(in->port, in->args.vc.pat_templ_id, + in->args.vc.attr.reserved, in->args.vc.pattern); + break; + case PATTERN_TEMPLATE_DESTROY: + port_flow_pattern_template_destroy(in->port, + in->args.templ_destroy.template_id_n, + in->args.templ_destroy.template_id); + break; + case ACTIONS_TEMPLATE_CREATE: + port_flow_actions_template_create(in->port, in->args.vc.act_templ_id, + in->args.vc.actions, in->args.vc.masks); + break; + case ACTIONS_TEMPLATE_DESTROY: + port_flow_actions_template_destroy(in->port, + in->args.templ_destroy.template_id_n, + in->args.templ_destroy.template_id); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index eb3fa8a8cc..adc77169af 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1595,6 +1595,49 @@ action_alloc(portid_t port_id, uint32_t id, return 0; } +static int +template_alloc(uint32_t id, struct port_template **template, + struct port_template **list) +{ + struct port_template *lst = *list; + struct port_template **ppt; + struct port_template *pt = NULL; + + *template = NULL; + if (id == UINT32_MAX) { + /* taking 
first available ID */ + if (lst) { + if (lst->id == UINT32_MAX - 1) { + printf("Highest template ID is already" + " assigned, delete it first\n"); + return -ENOMEM; + } + id = lst->id + 1; + } else { + id = 0; + } + } + pt = calloc(1, sizeof(*pt)); + if (!pt) { + printf("Allocation of port template failed\n"); + return -ENOMEM; + } + ppt = list; + while (*ppt && (*ppt)->id > id) + ppt = &(*ppt)->next; + if (*ppt && (*ppt)->id == id) { + printf("Template #%u is already assigned," + " delete it first\n", id); + free(pt); + return -EINVAL; + } + pt->next = *ppt; + pt->id = id; + *ppt = pt; + *template = pt; + return 0; +} + /** Get info about flow management resources. */ int port_flow_get_info(portid_t port_id) @@ -2063,6 +2106,167 @@ age_action_get(const struct rte_flow_action *actions) return NULL; } +/** Create pattern template */ +int +port_flow_pattern_template_create(portid_t port_id, uint32_t id, bool relaxed, + const struct rte_flow_item *pattern) +{ + struct rte_port *port; + struct port_template *pit; + int ret; + struct rte_flow_pattern_template_attr attr = { + .relaxed_matching = relaxed }; + struct rte_flow_error error; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + ret = template_alloc(id, &pit, &port->pattern_templ_list); + if (ret) + return ret; + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x22, sizeof(error)); + pit->template.pattern_template = rte_flow_pattern_template_create(port_id, + &attr, pattern, &error); + if (!pit->template.pattern_template) { + uint32_t destroy_id = pit->id; + port_flow_pattern_template_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + printf("Pattern template #%u created\n", pit->id); + return 0; +} + +/** Destroy pattern template */ +int +port_flow_pattern_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template) +{ + struct rte_port *port; + struct port_template **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->pattern_templ_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_template *pit = *tmp; + + if (template[i] != pit->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. 
+ */ + memset(&error, 0x33, sizeof(error)); + + if (pit->template.pattern_template && + rte_flow_pattern_template_destroy(port_id, + pit->template.pattern_template, + &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pit->next; + printf("Pattern template #%u destroyed\n", pit->id); + free(pit); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + +/** Create actions template */ +int +port_flow_actions_template_create(portid_t port_id, uint32_t id, + const struct rte_flow_action *actions, + const struct rte_flow_action *masks) +{ + struct rte_port *port; + struct port_template *pat; + int ret; + struct rte_flow_actions_template_attr attr = { 0 }; + struct rte_flow_error error; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + ret = template_alloc(id, &pat, &port->actions_templ_list); + if (ret) + return ret; + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x22, sizeof(error)); + pat->template.actions_template = rte_flow_actions_template_create(port_id, + &attr, actions, masks, &error); + if (!pat->template.actions_template) { + uint32_t destroy_id = pat->id; + port_flow_actions_template_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + printf("Actions template #%u created\n", pat->id); + return 0; +} + +/** Destroy actions template */ +int +port_flow_actions_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template) +{ + struct rte_port *port; + struct port_template **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->actions_templ_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_template *pat = *tmp; + + if (template[i] != pat->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. + */ + memset(&error, 0x33, sizeof(error)); + + if (pat->template.actions_template && + rte_flow_actions_template_destroy(port_id, + pat->template.actions_template, &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pat->next; + printf("Actions template #%u destroyed\n", pat->id); + free(pat); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 096b6825eb..c70b1fa4e8 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -166,6 +166,17 @@ enum age_action_context_type { ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION, }; +/** Descriptor for a template. */ +struct port_template { + struct port_template *next; /**< Next template in list. */ + struct port_template *tmp; /**< Temporary linking. */ + uint32_t id; /**< Template ID. 
*/ + union { + struct rte_flow_pattern_template *pattern_template; + struct rte_flow_actions_template *actions_template; + } template; /**< PMD opaque template object */ +}; + /** Descriptor for a single flow. */ struct port_flow { struct port_flow *next; /**< Next flow in list. */ @@ -246,6 +257,8 @@ struct rte_port { queueid_t queue_nb; /**< nb. of queues for flow rules */ uint32_t queue_sz; /**< size of a queue for flow rules */ uint8_t slave_flag; /**< bonding slave port */ + struct port_template *pattern_templ_list; /**< Pattern templates. */ + struct port_template *actions_templ_list; /**< Actions templates. */ struct port_flow *flow_list; /**< Associated flows. */ struct port_indirect_action *actions_list; /**< Associated indirect actions. */ @@ -892,6 +905,16 @@ int port_flow_configure(portid_t port_id, const struct rte_flow_port_attr *port_attr, uint16_t nb_queue, const struct rte_flow_queue_attr *queue_attr); +int port_flow_pattern_template_create(portid_t port_id, uint32_t id, + bool relaxed, + const struct rte_flow_item *pattern); +int port_flow_pattern_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template); +int port_flow_actions_template_create(portid_t port_id, uint32_t id, + const struct rte_flow_action *actions, + const struct rte_flow_action *masks); +int port_flow_actions_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index d452fcfce3..56e821ec5c 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3321,6 +3321,24 @@ following sections. 
[aging_counters_number {number}] [meters_number {number}] +- Create a pattern template:: + flow pattern_template {port_id} create [pattern_template_id {id}] + [relaxed {boolean}] template {item} [/ {item} [...]] / end + +- Destroy a pattern template:: + + flow pattern_template {port_id} destroy pattern_template {id} [...] + +- Create an actions template:: + + flow actions_template {port_id} create [actions_template_id {id}] + template {action} [/ {action} [...]] / end + mask {action} [/ {action} [...]] / end + +- Destroy an actions template:: + + flow actions_template {port_id} destroy actions_template {id} [...] + - Check whether a flow rule can be created:: flow validate {port_id} @@ -3423,6 +3441,85 @@ Otherwise it will show an error message of the form:: Caught error type [...] ([...]): [...] +Creating pattern templates +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow pattern_template create`` creates the specified pattern template. +It is bound to ``rte_flow_pattern_template_create()``:: + + flow pattern_template {port_id} create [pattern_template_id {id}] + [relaxed {boolean}] template {item} [/ {item} [...]] / end + +If successful, it will show:: + + Pattern template #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same pattern items as ``flow create``, +their format is described in `Creating flow rules`_. + +Destroying pattern templates +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow pattern_template destroy`` destroys one or more pattern templates +from their template ID (as returned by ``flow pattern_template create``), +this command calls ``rte_flow_pattern_template_destroy()`` as many +times as necessary:: + + flow pattern_template {port_id} destroy pattern_template {id} [...] + +If successful, it will show:: + + Pattern template #[...] destroyed + +It does not report anything for pattern template IDs that do not exist. 
+The usual error message is shown when a pattern template cannot be destroyed:: + + Caught error type [...] ([...]): [...] + +Creating actions templates +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow actions_template create`` creates the specified actions template. +It is bound to ``rte_flow_actions_template_create()``:: + + flow actions_template {port_id} create [actions_template_id {id}] + template {action} [/ {action} [...]] / end + mask {action} [/ {action} [...]] / end + +If successful, it will show:: + + Actions template #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same actions as ``flow create``, +their format is described in `Creating flow rules`_. + +Destroying actions templates +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow actions_template destroy`` destroys one or more actions templates +from their template ID (as returned by ``flow actions_template create``), +this command calls ``rte_flow_actions_template_destroy()`` as many +times as necessary:: + + flow actions_template {port_id} destroy actions_template {id} [...] + +If successful, it will show:: + + Actions template #[...] destroyed + +It does not report anything for actions template IDs that do not exist. +The usual error message is shown when an actions template cannot be destroyed:: + + Caught error type [...] ([...]): [...] 
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From patchwork Sun Feb 6 03:25:22 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106898
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v3 06/10] app/testpmd: implement rte flow table management
Date: Sun, 6 Feb 2022 05:25:22 +0200
Message-ID: <20220206032526.816079-7-akozyrev@nvidia.com>
In-Reply-To: <20220206032526.816079-1-akozyrev@nvidia.com>
References: <20220118153027.3947448-1-akozyrev@nvidia.com> <20220206032526.816079-1-akozyrev@nvidia.com>
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

Add testpmd support for the rte_flow_table API.
Provide the command line interface for the flow table creation/destruction.
Usage example:
  testpmd> flow table 0 create table_id 6 group 9 priority 4
           ingress mode 1 rules_number 64
           pattern_template 2 actions_template 4
  testpmd> flow table 0 destroy table 6

Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 app/test-pmd/cmdline_flow.c                 | 315 ++++++++++++++++++++
 app/test-pmd/config.c                       | 170 +++++++++++
 app/test-pmd/testpmd.h                      |  17 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  53 ++++
 4 files changed, 555 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 3f0e73743a..75bd128e68 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -58,6 +58,7 @@ enum index { COMMON_FLEX_TOKEN, COMMON_PATTERN_TEMPLATE_ID, COMMON_ACTIONS_TEMPLATE_ID, + COMMON_TABLE_ID, /* TOP-level command. */ ADD, @@ -78,6 +79,7 @@ enum index { CONFIGURE, PATTERN_TEMPLATE, ACTIONS_TEMPLATE, + TABLE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -112,6 +114,20 @@ enum index { ACTIONS_TEMPLATE_SPEC, ACTIONS_TEMPLATE_MASK, + /* Table arguments. */ + TABLE_CREATE, + TABLE_DESTROY, + TABLE_CREATE_ID, + TABLE_DESTROY_ID, + TABLE_GROUP, + TABLE_PRIORITY, + TABLE_INGRESS, + TABLE_EGRESS, + TABLE_TRANSFER, + TABLE_RULES_NUMBER, + TABLE_PATTERN_TEMPLATE, + TABLE_ACTIONS_TEMPLATE, + /* Tunnel arguments. */ TUNNEL_CREATE, TUNNEL_CREATE_TYPE, @@ -884,6 +900,18 @@ struct buffer { uint32_t *template_id; uint32_t template_id_n; } templ_destroy; /**< Template destroy arguments. 
*/ + struct { + uint32_t id; + struct rte_flow_table_attr attr; + uint32_t *pat_templ_id; + uint32_t pat_templ_id_n; + uint32_t *act_templ_id; + uint32_t act_templ_id_n; + } table; /**< Table arguments. */ + struct { + uint32_t *table_id; + uint32_t table_id_n; + } table_destroy; /**< Template destroy arguments. */ struct { uint32_t *action_id; uint32_t action_id_n; @@ -1015,6 +1043,32 @@ static const enum index next_at_destroy_attr[] = { ZERO, }; +static const enum index next_table_subcmd[] = { + TABLE_CREATE, + TABLE_DESTROY, + ZERO, +}; + +static const enum index next_table_attr[] = { + TABLE_CREATE_ID, + TABLE_GROUP, + TABLE_PRIORITY, + TABLE_INGRESS, + TABLE_EGRESS, + TABLE_TRANSFER, + TABLE_RULES_NUMBER, + TABLE_PATTERN_TEMPLATE, + TABLE_ACTIONS_TEMPLATE, + END, + ZERO, +}; + +static const enum index next_table_destroy_attr[] = { + TABLE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2059,6 +2113,11 @@ static int parse_template(struct context *, const struct token *, static int parse_template_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_table(struct context *, const struct token *, + const char *, unsigned int, void *, unsigned int); +static int parse_table_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2132,6 +2191,8 @@ static int comp_pattern_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_actions_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_table_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. 
*/ static const struct token token_list[] = { @@ -2296,6 +2357,13 @@ static const struct token token_list[] = { .call = parse_int, .comp = comp_actions_template_id, }, + [COMMON_TABLE_ID] = { + .name = "{table_id}", + .type = "TABLE_ID", + .help = "table id", + .call = parse_int, + .comp = comp_table_id, + }, /* Top-level command. */ [FLOW] = { .name = "flow", @@ -2306,6 +2374,7 @@ static const struct token token_list[] = { CONFIGURE, PATTERN_TEMPLATE, ACTIONS_TEMPLATE, + TABLE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -2486,6 +2555,104 @@ static const struct token token_list[] = { .call = parse_template, }, /* Top-level command. */ + [TABLE] = { + .name = "table", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage tables", + .next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_table, + }, + /* Sub-level commands. */ + [TABLE_CREATE] = { + .name = "create", + .help = "create table", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_DESTROY] = { + .name = "destroy", + .help = "destroy table", + .next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_table_destroy, + }, + /* Table arguments. 
*/ + [TABLE_CREATE_ID] = { + .name = "table_id", + .help = "specify table id to create", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)), + }, + [TABLE_DESTROY_ID] = { + .name = "table", + .help = "specify table id to destroy", + .next = NEXT(next_table_destroy_attr, + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table_destroy.table_id)), + .call = parse_table_destroy, + }, + [TABLE_GROUP] = { + .name = "group", + .help = "specify a group", + .next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.flow_attr.group)), + }, + [TABLE_PRIORITY] = { + .name = "priority", + .help = "specify a priority level", + .next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.flow_attr.priority)), + }, + [TABLE_EGRESS] = { + .name = "egress", + .help = "affect rule to egress", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_INGRESS] = { + .name = "ingress", + .help = "affect rule to ingress", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_TRANSFER] = { + .name = "transfer", + .help = "affect rule to transfer", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_RULES_NUMBER] = { + .name = "rules_number", + .help = "number of rules in table", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.nb_flows)), + }, + [TABLE_PATTERN_TEMPLATE] = { + .name = "pattern_template", + .help = "specify pattern template id", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table.pat_templ_id)), + .call = parse_table, + }, + [TABLE_ACTIONS_TEMPLATE] = { + .name = "actions_template", + .help = "specify actions template id", + .next = 
NEXT(next_table_attr, + NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table.act_templ_id)), + .call = parse_table, + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -7886,6 +8053,119 @@ parse_template_destroy(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for table create command. */ +static int +parse_table(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *template_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != TABLE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + } + switch (ctx->curr) { + case TABLE_CREATE: + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.table.id = UINT32_MAX; + return len; + case TABLE_PATTERN_TEMPLATE: + out->args.table.pat_templ_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + template_id = out->args.table.pat_templ_id + + out->args.table.pat_templ_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + return len; + case TABLE_ACTIONS_TEMPLATE: + out->args.table.act_templ_id = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.table.pat_templ_id + + out->args.table.pat_templ_id_n), + sizeof(double)); + template_id = out->args.table.act_templ_id + + out->args.table.act_templ_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + 
return len; + case TABLE_INGRESS: + out->args.table.attr.flow_attr.ingress = 1; + return len; + case TABLE_EGRESS: + out->args.table.attr.flow_attr.egress = 1; + return len; + case TABLE_TRANSFER: + out->args.table.attr.flow_attr.transfer = 1; + return len; + default: + return -1; + } +} + +/** Parse tokens for table destroy command. */ +static int +parse_table_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *table_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command || out->command == TABLE) { + if (ctx->curr != TABLE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.table_destroy.table_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + table_id = out->args.table_destroy.table_id + + out->args.table_destroy.table_id_n++; + if ((uint8_t *)table_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = table_id; + ctx->objmask = NULL; + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -8903,6 +9183,30 @@ comp_actions_template_id(struct context *ctx, const struct token *token, return i; } +/** Complete available table IDs. 
*/ +static int +comp_table_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_table *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->table_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + /** Internal context. */ static struct context cmd_flow_context; @@ -9189,6 +9493,17 @@ cmd_flow_parsed(const struct buffer *in) in->args.templ_destroy.template_id_n, in->args.templ_destroy.template_id); break; + case TABLE_CREATE: + port_flow_table_create(in->port, in->args.table.id, + &in->args.table.attr, in->args.table.pat_templ_id_n, + in->args.table.pat_templ_id, in->args.table.act_templ_id_n, + in->args.table.act_templ_id); + break; + case TABLE_DESTROY: + port_flow_table_destroy(in->port, + in->args.table_destroy.table_id_n, + in->args.table_destroy.table_id); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index adc77169af..126bead03e 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1638,6 +1638,49 @@ template_alloc(uint32_t id, struct port_template **template, return 0; } +static int +table_alloc(uint32_t id, struct port_table **table, + struct port_table **list) +{ + struct port_table *lst = *list; + struct port_table **ppt; + struct port_table *pt = NULL; + + *table = NULL; + if (id == UINT32_MAX) { + /* taking first available ID */ + if (lst) { + if (lst->id == UINT32_MAX - 1) { + printf("Highest table ID is already" + " assigned, delete it first\n"); + return -ENOMEM; + } + id = lst->id + 1; + } else { + id = 0; + } + } + pt = calloc(1, sizeof(*pt)); + if (!pt) { + printf("Allocation of table failed\n"); 
+ return -ENOMEM; + } + ppt = list; + while (*ppt && (*ppt)->id > id) + ppt = &(*ppt)->next; + if (*ppt && (*ppt)->id == id) { + printf("Table #%u is already assigned," + " delete it first\n", id); + free(pt); + return -EINVAL; + } + pt->next = *ppt; + pt->id = id; + *ppt = pt; + *table = pt; + return 0; +} + /** Get info about flow management resources. */ int port_flow_get_info(portid_t port_id) @@ -2267,6 +2310,133 @@ port_flow_actions_template_destroy(portid_t port_id, uint32_t n, return ret; } +/** Create table */ +int +port_flow_table_create(portid_t port_id, uint32_t id, + const struct rte_flow_table_attr *table_attr, + uint32_t nb_pattern_templates, uint32_t *pattern_templates, + uint32_t nb_actions_templates, uint32_t *actions_templates) +{ + struct rte_port *port; + struct port_table *pt; + struct port_template *temp = NULL; + int ret; + uint32_t i; + struct rte_flow_error error; + struct rte_flow_pattern_template + *flow_pattern_templates[nb_pattern_templates]; + struct rte_flow_actions_template + *flow_actions_templates[nb_actions_templates]; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + for (i = 0; i < nb_pattern_templates; ++i) { + bool found = false; + temp = port->pattern_templ_list; + while (temp) { + if (pattern_templates[i] == temp->id) { + flow_pattern_templates[i] = temp->template.pattern_template; + found = true; + break; + } + temp = temp->next; + } + if (!found) { + printf("Pattern template #%u is invalid\n", + pattern_templates[i]); + return -EINVAL; + } + } + for (i = 0; i < nb_actions_templates; ++i) { + bool found = false; + temp = port->actions_templ_list; + while (temp) { + if (actions_templates[i] == temp->id) { + flow_actions_templates[i] = + temp->template.actions_template; + found = true; + break; + } + temp = temp->next; + } + if (!found) { + printf("Actions template #%u is invalid\n", + actions_templates[i]); + return -EINVAL; + } + } + ret 
= table_alloc(id, &pt, &port->table_list); + if (ret) + return ret; + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x22, sizeof(error)); + pt->table = rte_flow_table_create(port_id, table_attr, + flow_pattern_templates, nb_pattern_templates, + flow_actions_templates, nb_actions_templates, + &error); + + if (!pt->table) { + uint32_t destroy_id = pt->id; + port_flow_table_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + pt->nb_pattern_templates = nb_pattern_templates; + pt->nb_actions_templates = nb_actions_templates; + printf("Table #%u created\n", pt->id); + return 0; +} + +/** Destroy table */ +int +port_flow_table_destroy(portid_t port_id, + uint32_t n, const uint32_t *table) +{ + struct rte_port *port; + struct port_table **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->table_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_table *pt = *tmp; + + if (table[i] != pt->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. + */ + memset(&error, 0x33, sizeof(error)); + + if (pt->table && + rte_flow_table_destroy(port_id, + pt->table, + &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pt->next; + printf("Table #%u destroyed\n", pt->id); + free(pt); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index c70b1fa4e8..4d85dfdaf6 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -177,6 +177,16 @@ struct port_template { } template; /**< PMD opaque template object */ }; +/** Descriptor for a flow table. */ +struct port_table { + struct port_table *next; /**< Next table in list. 
*/ + struct port_table *tmp; /**< Temporary linking. */ + uint32_t id; /**< Table ID. */ + uint32_t nb_pattern_templates; /**< Number of pattern templates. */ + uint32_t nb_actions_templates; /**< Number of actions templates. */ + struct rte_flow_table *table; /**< PMD opaque template object */ +}; + /** Descriptor for a single flow. */ struct port_flow { struct port_flow *next; /**< Next flow in list. */ @@ -259,6 +269,7 @@ struct rte_port { uint8_t slave_flag; /**< bonding slave port */ struct port_template *pattern_templ_list; /**< Pattern templates. */ struct port_template *actions_templ_list; /**< Actions templates. */ + struct port_table *table_list; /**< Flow tables. */ struct port_flow *flow_list; /**< Associated flows. */ struct port_indirect_action *actions_list; /**< Associated indirect actions. */ @@ -915,6 +926,12 @@ int port_flow_actions_template_create(portid_t port_id, uint32_t id, const struct rte_flow_action *masks); int port_flow_actions_template_destroy(portid_t port_id, uint32_t n, const uint32_t *template); +int port_flow_table_create(portid_t port_id, uint32_t id, + const struct rte_flow_table_attr *table_attr, + uint32_t nb_pattern_templates, uint32_t *pattern_templates, + uint32_t nb_actions_templates, uint32_t *actions_templates); +int port_flow_table_destroy(portid_t port_id, + uint32_t n, const uint32_t *table); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 56e821ec5c..cfa9aecdba 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3339,6 +3339,19 @@ following sections. flow actions_template {port_id} destroy actions_template {id} [...] 
+- Create a table:: + + flow table {port_id} create + [table_id {id}] + [group {group_id}] [priority {level}] [ingress] [egress] [transfer] + rules_number {number} + pattern_template {pattern_template_id} + actions_template {actions_template_id} + +- Destroy a table:: + + flow table {port_id} destroy table {id} [...] + - Check whether a flow rule can be created:: flow validate {port_id} @@ -3520,6 +3533,46 @@ The usual error message is shown when an actions template cannot be destroyed:: Caught error type [...] ([...]): [...] +Creating flow table +~~~~~~~~~~~~~~~~~~~ + +``flow table create`` creates the specified flow table. +It is bound to ``rte_flow_table_create()``:: + + flow table {port_id} create + [table_id {id}] [group {group_id}] + [priority {level}] [ingress] [egress] [transfer] + rules_number {number} + pattern_template {pattern_template_id} + actions_template {actions_template_id} + +If successful, it will show:: + + Table #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +Destroying flow table +~~~~~~~~~~~~~~~~~~~~~ + +``flow table destroy`` destroys one or more flow tables +from their table ID (as returned by ``flow table create``), +this command calls ``rte_flow_table_destroy()`` as many +times as necessary:: + + flow table {port_id} destroy table {id} [...] + +If successful, it will show:: + + Table #[...] destroyed + +It does not report anything for table IDs that do not exist. +The usual error message is shown when a table cannot be destroyed:: + + Caught error type [...] ([...]): [...] 
+ Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From patchwork Sun Feb 6 03:25:23 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106899
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v3 07/10] app/testpmd: implement rte flow queue flow operations
Date: Sun, 6 Feb 2022 05:25:23 +0200
Message-ID: <20220206032526.816079-8-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20220206032526.816079-1-akozyrev@nvidia.com>
References: <20220118153027.3947448-1-akozyrev@nvidia.com> <20220206032526.816079-1-akozyrev@nvidia.com>
List-Id: DPDK patches and discussions

Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API. Provide the command line interface for enqueueing flow creation/destruction operations. Usage example: testpmd> flow queue 0 create 0 postpone no table 6 pattern_template 0 actions_template 0 pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end testpmd> flow queue 0 destroy 0 postpone yes rule 0 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 266 +++++++++++++++++++- app/test-pmd/config.c | 166 ++++++++++++ app/test-pmd/testpmd.h | 7 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 55 ++++ 4 files changed, 493 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 75bd128e68..d4c7f9542f 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -59,6 +59,7 @@ enum index { COMMON_PATTERN_TEMPLATE_ID, COMMON_ACTIONS_TEMPLATE_ID, COMMON_TABLE_ID, + COMMON_QUEUE_ID, /* TOP-level command. */ ADD, @@ -92,6 +93,7 @@ enum index { ISOLATE, TUNNEL, FLEX, + QUEUE, /* Flex arguments */ FLEX_ITEM_INIT, @@ -114,6 +116,22 @@ enum index { ACTIONS_TEMPLATE_SPEC, ACTIONS_TEMPLATE_MASK, + /* Queue arguments. */ + QUEUE_CREATE, + QUEUE_DESTROY, + + /* Queue create arguments. */ + QUEUE_CREATE_ID, + QUEUE_CREATE_POSTPONE, + QUEUE_TABLE, + QUEUE_PATTERN_TEMPLATE, + QUEUE_ACTIONS_TEMPLATE, + QUEUE_SPEC, + + /* Queue destroy arguments. */ + QUEUE_DESTROY_ID, + QUEUE_DESTROY_POSTPONE, + /* Table arguments. */ TABLE_CREATE, TABLE_DESTROY, @@ -890,6 +908,8 @@ struct token { struct buffer { enum index command; /**< Flow command. */ portid_t port; /**< Affected port ID. */ + queueid_t queue; /** Async queue ID.
*/ + bool postpone; /** Postpone async operation */ union { struct { struct rte_flow_port_attr port_attr; @@ -920,6 +940,7 @@ struct buffer { uint32_t action_id; } ia; /* Indirect action query arguments */ struct { + uint32_t table_id; uint32_t pat_templ_id; uint32_t act_templ_id; struct rte_flow_attr attr; @@ -1069,6 +1090,18 @@ static const enum index next_table_destroy_attr[] = { ZERO, }; +static const enum index next_queue_subcmd[] = { + QUEUE_CREATE, + QUEUE_DESTROY, + ZERO, +}; + +static const enum index next_queue_destroy_attr[] = { + QUEUE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2118,6 +2151,12 @@ static int parse_table(struct context *, const struct token *, static int parse_table_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_qo(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); +static int parse_qo_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2193,6 +2232,8 @@ static int comp_actions_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_table_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_queue_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. */ static const struct token token_list[] = { @@ -2364,6 +2405,13 @@ static const struct token token_list[] = { .call = parse_int, .comp = comp_table_id, }, + [COMMON_QUEUE_ID] = { + .name = "{queue_id}", + .type = "QUEUE_ID", + .help = "queue id", + .call = parse_int, + .comp = comp_queue_id, + }, /* Top-level command. 
*/ [FLOW] = { .name = "flow", @@ -2386,7 +2434,8 @@ static const struct token token_list[] = { QUERY, ISOLATE, TUNNEL, - FLEX)), + FLEX, + QUEUE)), .call = parse_init, }, /* Top-level command. */ @@ -2653,6 +2702,83 @@ static const struct token token_list[] = { .call = parse_table, }, /* Top-level command. */ + [QUEUE] = { + .name = "queue", + .help = "queue a flow rule operation", + .next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_qo, + }, + /* Sub-level commands. */ + [QUEUE_CREATE] = { + .name = "create", + .help = "create a flow rule", + .next = NEXT(NEXT_ENTRY(QUEUE_TABLE), NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qo, + }, + [QUEUE_DESTROY] = { + .name = "destroy", + .help = "destroy a flow rule", + .next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID), + NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qo_destroy, + }, + /* Queue arguments. 
*/ + [QUEUE_TABLE] = { + .name = "table", + .help = "specify table id", + .next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPLATE), + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.table_id)), + .call = parse_qo, + }, + [QUEUE_PATTERN_TEMPLATE] = { + .name = "pattern_template", + .help = "specify pattern template index", + .next = NEXT(NEXT_ENTRY(QUEUE_ACTIONS_TEMPLATE), + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.pat_templ_id)), + .call = parse_qo, + }, + [QUEUE_ACTIONS_TEMPLATE] = { + .name = "actions_template", + .help = "specify actions template index", + .next = NEXT(NEXT_ENTRY(QUEUE_CREATE_POSTPONE), + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.act_templ_id)), + .call = parse_qo, + }, + [QUEUE_CREATE_POSTPONE] = { + .name = "postpone", + .help = "postpone create operation", + .next = NEXT(NEXT_ENTRY(ITEM_PATTERN), + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, postpone)), + .call = parse_qo, + }, + [QUEUE_DESTROY_POSTPONE] = { + .name = "postpone", + .help = "postpone destroy operation", + .next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID), + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, postpone)), + .call = parse_qo_destroy, + }, + [QUEUE_DESTROY_ID] = { + .name = "rule", + .help = "specify rule id to destroy", + .next = NEXT(next_queue_destroy_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.destroy.rule)), + .call = parse_qo_destroy, + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8166,6 +8292,111 @@ parse_table_destroy(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for queue create commands. 
*/ +static int +parse_qo(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != QUEUE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + return len; + } + switch (ctx->curr) { + case QUEUE_CREATE: + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case QUEUE_TABLE: + case QUEUE_PATTERN_TEMPLATE: + case QUEUE_ACTIONS_TEMPLATE: + case QUEUE_CREATE_POSTPONE: + return len; + case ITEM_PATTERN: + out->args.vc.pattern = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + ctx->object = out->args.vc.pattern; + ctx->objmask = NULL; + return len; + case ACTIONS: + out->args.vc.actions = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.vc.pattern + + out->args.vc.pattern_n), + sizeof(double)); + ctx->object = out->args.vc.actions; + ctx->objmask = NULL; + return len; + default: + return -1; + } +} + +/** Parse tokens for queue destroy command. */ +static int +parse_qo_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *flow_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command || out->command == QUEUE) { + if (ctx->curr != QUEUE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.destroy.rule = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + switch (ctx->curr) { + case QUEUE_DESTROY_ID: + flow_id = out->args.destroy.rule + + out->args.destroy.rule_n++; + if ((uint8_t *)flow_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = flow_id; + ctx->objmask = NULL; + return len; + case QUEUE_DESTROY_POSTPONE: + return len; + default: + return -1; + } +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -9207,6 +9438,28 @@ comp_table_id(struct context *ctx, const struct token *token, return i; } +/** Complete available queue IDs. */ +static int +comp_queue_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (i = 0; i < port->queue_nb; i++) { + if (buf && i == ent) + return snprintf(buf, size, "%u", i); + } + if (buf) + return -1; + return i; +} + /** Internal context. 
*/ static struct context cmd_flow_context; @@ -9504,6 +9757,17 @@ cmd_flow_parsed(const struct buffer *in) in->args.table_destroy.table_id_n, in->args.table_destroy.table_id); break; + case QUEUE_CREATE: + port_queue_flow_create(in->port, in->queue, in->postpone, + in->args.vc.table_id, in->args.vc.pat_templ_id, + in->args.vc.act_templ_id, in->args.vc.pattern, + in->args.vc.actions); + break; + case QUEUE_DESTROY: + port_queue_flow_destroy(in->port, in->queue, in->postpone, + in->args.destroy.rule_n, + in->args.destroy.rule); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 126bead03e..1013c4b252 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2437,6 +2437,172 @@ port_flow_table_destroy(portid_t port_id, return ret; } +/** Enqueue create flow rule operation. */ +int +port_queue_flow_create(portid_t port_id, queueid_t queue_id, + bool postpone, uint32_t table_id, + uint32_t pattern_idx, uint32_t actions_idx, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions) +{ + struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone }; + struct rte_flow_q_op_res comp = { 0 }; + struct rte_flow *flow; + struct rte_port *port; + struct port_flow *pf; + struct port_table *pt; + uint32_t id = 0; + bool found; + int ret = 0; + struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL }; + struct rte_flow_action_age *age = age_action_get(actions); + + port = &ports[port_id]; + if (port->flow_list) { + if (port->flow_list->id == UINT32_MAX) { + printf("Highest rule ID is already assigned," + " delete it first"); + return -ENOMEM; + } + id = port->flow_list->id + 1; + } + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + found = false; + pt = port->table_list; + while (pt) { + if (table_id == pt->id) { + found = true; + break; + } + pt = pt->next; + } + if 
(!found) { + printf("Table #%u is invalid\n", table_id); + return -EINVAL; + } + + if (pattern_idx >= pt->nb_pattern_templates) { + printf("Pattern template index #%u is invalid," + " %u templates present in the table\n", + pattern_idx, pt->nb_pattern_templates); + return -EINVAL; + } + if (actions_idx >= pt->nb_actions_templates) { + printf("Actions template index #%u is invalid," + " %u templates present in the table\n", + actions_idx, pt->nb_actions_templates); + return -EINVAL; + } + + pf = port_flow_new(NULL, pattern, actions, &error); + if (!pf) + return port_flow_complain(&error); + if (age) { + pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW; + age->context = &pf->age_type; + } + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x11, sizeof(error)); + flow = rte_flow_q_flow_create(port_id, queue_id, &ops_attr, + pt->table, pattern, pattern_idx, actions, actions_idx, &error); + if (!flow) { + uint32_t flow_id = pf->id; + port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id); + return port_flow_complain(&error); + } + + while (ret == 0) { + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x22, sizeof(error)); + ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error); + if (ret < 0) { + printf("Failed to pull queue\n"); + return -EINVAL; + } + } + + pf->next = port->flow_list; + pf->id = id; + pf->flow = flow; + port->flow_list = pf; + printf("Flow rule #%u creation enqueued\n", pf->id); + return 0; +} + +/** Enqueue number of destroy flow rules operations. 
*/ +int +port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, + bool postpone, uint32_t n, const uint32_t *rule) +{ + struct rte_flow_q_ops_attr op_attr = { .postpone = postpone }; + struct rte_flow_q_op_res comp = { 0 }; + struct rte_port *port; + struct port_flow **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + tmp = &port->flow_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_flow *pf = *tmp; + + if (rule[i] != pf->id) + continue; + /* + * Poisoning to make sure PMD + * update it in case of error. + */ + memset(&error, 0x33, sizeof(error)); + if (rte_flow_q_flow_destroy(port_id, queue_id, &op_attr, + pf->flow, &error)) { + ret = port_flow_complain(&error); + continue; + } + + while (ret == 0) { + /* + * Poisoning to make sure PMD + * update it in case of error. + */ + memset(&error, 0x44, sizeof(error)); + ret = rte_flow_q_pull(port_id, queue_id, + &comp, 1, &error); + if (ret < 0) { + printf("Failed to pull queue\n"); + return -EINVAL; + } + } + + printf("Flow rule #%u destruction enqueued\n", pf->id); + *tmp = pf->next; + free(pf); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. 
*/ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 4d85dfdaf6..f574fd77ba 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -932,6 +932,13 @@ int port_flow_table_create(portid_t port_id, uint32_t id, uint32_t nb_actions_templates, uint32_t *actions_templates); int port_flow_table_destroy(portid_t port_id, uint32_t n, const uint32_t *table); +int port_queue_flow_create(portid_t port_id, queueid_t queue_id, + bool postpone, uint32_t table_id, + uint32_t pattern_idx, uint32_t actions_idx, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions); +int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, + bool postpone, uint32_t n, const uint32_t *rule); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index cfa9aecdba..de46bd00d5 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3359,6 +3359,19 @@ following sections. pattern {item} [/ {item} [...]] / end actions {action} [/ {action} [...]] / end +- Enqueue creation of a flow rule:: + + flow queue {port_id} create {queue_id} [postpone {boolean}] + table {table_id} pattern_template {pattern_template_index} + actions_template {actions_template_index} + pattern {item} [/ {item} [...]] / end + actions {action} [/ {action} [...]] / end + +- Enqueue destruction of specific flow rules:: + + flow queue {port_id} destroy {queue_id} + [postpone {boolean}] rule {rule_id} [...] + - Create a flow rule:: flow create {port_id} @@ -3679,6 +3692,29 @@ one. **All unspecified object values are automatically initialized to 0.** +Enqueueing creation of flow rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue create`` adds creation operation of a flow rule to a queue. 
+It is bound to ``rte_flow_q_flow_create()``:: + + flow queue {port_id} create {queue_id} [postpone {boolean}] + table {table_id} pattern_template {pattern_template_index} + actions_template {actions_template_index} + pattern {item} [/ {item} [...]] / end + actions {action} [/ {action} [...]] / end + +If successful, it will return a flow rule ID usable with other commands:: + + Flow rule #[...] creation enqueued + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same pattern items and actions as ``flow create``, +their format is described in `Creating flow rules`_. + Attributes ^^^^^^^^^^ @@ -4393,6 +4429,25 @@ Non-existent rule IDs are ignored:: Flow rule #0 destroyed testpmd> +Enqueueing destruction of flow rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue destroy`` adds destruction operations to destroy one or more rules +from their rule ID (as returned by ``flow queue create``) to a queue, +this command calls ``rte_flow_q_flow_destroy()`` as many times as necessary:: + + flow queue {port_id} destroy {queue_id} + [postpone {boolean}] rule {rule_id} [...] + +If successful, it will show:: + + Flow rule #[...] destruction enqueued + +It does not report anything for rule IDs that do not exist. The usual error +message is shown when a rule cannot be destroyed:: + + Caught error type [...] ([...]): [...]
+ Querying flow rules ~~~~~~~~~~~~~~~~~~~ From patchwork Sun Feb 6 03:25:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 106900 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 954CCA034E; Sun, 6 Feb 2022 04:26:45 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2908C41174; Sun, 6 Feb 2022 04:26:13 +0100 (CET) Received: from NAM02-BN1-obe.outbound.protection.outlook.com (mail-bn1nam07on2066.outbound.protection.outlook.com [40.107.212.66]) by mails.dpdk.org (Postfix) with ESMTP id E962841174 for ; Sun, 6 Feb 2022 04:26:11 +0100 (CET) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=DJiIhKYiu3vUnHGtOrf4Ardk6gDF/ypMYkTA2TQedAM5vJhdY/ndIWAWxWSeh4QcX5Roz7SZW9zhf15KRw2wDT8c7LaYSOjActbFfqDxHFJkTH27zwItmFl/BHXPsBPfp2H78ModoUiG9veYrc+44wRmxkPN4DMM+DPYwV7Wv0kIKUsbwyZGXBxJ33WFhRuOe7ha4OegTRAi5FgqAlui3AfoiF27FnAZz9Y/8MkKjtRu8ImVTsacVSHag+XFanu05EU9C8/aE1iUO78RTJ8yeTEL+S8V4F6VekCe9IBUaGp1SYuLh7n/RojTzbb59+CzB/nduHjGdmnNfIsWXaZUvw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=4ENgBP8UB+CMBcx7nYK9rg77648+4MK5WMssPZAlJYQ=; b=HDBF6/o0poOYzXkF77GzYr2Sj7QH2aZPzC5ugn7Js4HchD4S8Z4S7lR+WpT1h+H0+JbJpHa5cstJMJknJIznUzFchtMLxOlptDh1xaY+92LbAy2HdhhaOKd5sKZ4AqsWXq9vWa7XMAvC8wg9rFwpt63WbzP+gLl3gktVmx1JtzS6LZeIB9TaZTWvpCohywCQW/QUor6pi5ezWn/rz2AODnE0LvUS5dU3WWFIx8yXyfVbDX+x4FVKuSVAfTwVn4XgvW9CFBMTzYzQWof6VcV+Z1Rcnqv1DWBWtfPo3Zl5A7D5g7FIgVbIPCGJFY2ACoyFN9H5VwLY0bJXfac0RVp4IA== 
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 12.22.5.238) smtp.rcpttodomain=intel.com smtp.mailfrom=nvidia.com; dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=4ENgBP8UB+CMBcx7nYK9rg77648+4MK5WMssPZAlJYQ=; b=lhkphNKLp9kDbpQkT/mRkjsKuMEjHVSrPxjbBYm1JNx85Jb+2/nTi/l2/qDJMS5qimborZCqGNIFutAVq+Y2TtAeExvcotpAG45VPHUlVNripMBCGiIltLJ/3hhTMaomtfSpCeKi/dp1LxOE9/EBVKiuqdoLM2SBoOkeXjtzKZdDH0Cen9cP7xou4ab5FinMR+LrFfwgwEMNF4ZNiCEJVap92hSSwnjrfZ8y9d73XEKRqCL03Brt/XYAcqQWWuemyGQ/ZifHkTW0LEeLZlZ7L3tdmpNUXH33RIMDBwABmfN90W+CNG9xH1PdvI8Tn0unPjTGmtVWwfEevMO3BxqyRA== Received: from BYAPR12MB5702.namprd12.prod.outlook.com (2603:10b6:a03:9a::21) by PH0PR12MB5484.namprd12.prod.outlook.com (2603:10b6:510:eb::14) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4951.12; Sun, 6 Feb 2022 03:26:10 +0000 Received: from BN6PR19CA0078.namprd19.prod.outlook.com (2603:10b6:404:133::16) by BYAPR12MB5702.namprd12.prod.outlook.com (2603:10b6:a03:9a::21) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4951.12; Sun, 6 Feb 2022 03:26:08 +0000 Received: from BN8NAM11FT016.eop-nam11.prod.protection.outlook.com (2603:10b6:404:133:cafe::13) by BN6PR19CA0078.outlook.office365.com (2603:10b6:404:133::16) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4951.18 via Frontend Transport; Sun, 6 Feb 2022 03:26:08 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 12.22.5.238) smtp.mailfrom=nvidia.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 12.22.5.238 as 
permitted sender) receiver=protection.outlook.com; client-ip=12.22.5.238; helo=mail.nvidia.com; Received: from mail.nvidia.com (12.22.5.238) by BN8NAM11FT016.mail.protection.outlook.com (10.13.176.97) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4951.12 via Frontend Transport; Sun, 6 Feb 2022 03:26:07 +0000 Received: from rnnvmail201.nvidia.com (10.129.68.8) by DRHQMAIL105.nvidia.com (10.27.9.14) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Sun, 6 Feb 2022 03:26:07 +0000 Received: from pegasus01.mtr.labs.mlnx (10.126.231.35) by rnnvmail201.nvidia.com (10.129.68.8) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.986.9; Sat, 5 Feb 2022 19:26:04 -0800 From: Alexander Kozyrev To: CC: , , , , , , , , Subject: [PATCH v3 08/10] app/testpmd: implement rte flow push operations Date: Sun, 6 Feb 2022 05:25:24 +0200 Message-ID: <20220206032526.816079-9-akozyrev@nvidia.com> X-Mailer: git-send-email 2.18.2 In-Reply-To: <20220206032526.816079-1-akozyrev@nvidia.com> References: <20220118153027.3947448-1-akozyrev@nvidia.com> <20220206032526.816079-1-akozyrev@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.126.231.35] X-ClientProxiedBy: rnnvmail202.nvidia.com (10.129.68.7) To rnnvmail201.nvidia.com (10.129.68.8) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: 465b47cc-8ec1-44ae-9c57-08d9e9206a39 X-MS-TrafficTypeDiagnostic: BYAPR12MB5702:EE_|PH0PR12MB5484:EE_ X-LD-Processed: 43083d15-7273-40c1-b7db-39efd9ccc17a,ExtAddr,ExtAddr X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:4714; X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 
EZTQOZy9kRTznKuIqH580fKjT2KL+wO2hxsnosP8cuW2L2NEi79D+sB9xEbqQAXgA0UjQYTJxLlxoE66LrH2Y9wLckcjYicVFDPhXaleSay3hN5Lk9zsJnP/cQ2VhZPNGWf/A0F2Rmkj6FyXQIbqVbFALwGHQ0gj/VqwYkg852cWdZhqHTmlzfHKk9OWL52BT7c7faZTyKw9YgNrmiwnizYMaj3MVjW3uxWZlf9nBz0guyMepbcpIepKcS8VC1rlqBYmBeHoqKyVbSnftaiIrHxt8GjRjRBfKq7ciuh1kf8jle18pJK1iarVH6p5ANqFDYenIOqNI1vBU4kcLWqr2uKr25kkYAGMNcCMxoaosunq3PPbWwXYoprpN97t4JkZvUtDN+bUE2XMZs4AOqscGGQuxoAR0Yau86eZibYrZif939USve39rhZ6EyKgTcS1FDxSWr/D63eRpQOZTvcPvGTHCy3wiGk7dmz9hMw9qhzTO1jEPi8TQR+il8iQKDlAvcLTqVm76e1PZfgALHRlwBihia+s+ZH5/dYoBBJ0wL3Z9fyKXzn1o7gM0AK2J1bScL4vqrdBcc9Pdb+dh4XXxu2qkSe9TL3bDmq8V8W5K0tg2bCsgmlBJ+wbTC0bn2xVDBIcdt2NyF2WbxPUv7Qtqm5gtp34ljqkIiz7o/eIa+FGUzWULh0XfbDQIDjRiaKzGTQO/yaVmtrfeeg/Y/janA== X-Forefront-Antispam-Report: CIP:12.22.5.238; CTRY:US; LANG:en; SCL:1; SRV:; IPV:CAL; SFV:NSPM; H:mail.nvidia.com; PTR:InfoNoRecords; CAT:NONE; SFS:(13230001)(4636009)(46966006)(36840700001)(40470700004)(86362001)(70206006)(316002)(83380400001)(2906002)(6666004)(36860700001)(47076005)(36756003)(4326008)(40460700003)(70586007)(8936002)(508600001)(5660300002)(8676002)(6916009)(81166007)(356005)(426003)(54906003)(1076003)(186003)(16526019)(26005)(336012)(2616005)(82310400004)(36900700001); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Feb 2022 03:26:07.8302 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 465b47cc-8ec1-44ae-9c57-08d9e9206a39 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[12.22.5.238]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT016.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB5484 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and 
Add testpmd support for the rte_flow_q_push API. Provide the command line interface for pushing operations. Usage example: flow push 0 queue 0 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 56 ++++++++++++++++++++- app/test-pmd/config.c | 28 +++++++++++ app/test-pmd/testpmd.h | 1 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++ 4 files changed, 105 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index d4c7f9542f..773bf57a14 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -94,6 +94,7 @@ enum index { TUNNEL, FLEX, QUEUE, + PUSH, /* Flex arguments */ FLEX_ITEM_INIT, @@ -132,6 +133,9 @@ enum index { QUEUE_DESTROY_ID, QUEUE_DESTROY_POSTPONE, + /* Push arguments. */ + PUSH_QUEUE, + /* Table arguments. */ TABLE_CREATE, TABLE_DESTROY, @@ -2157,6 +2161,9 @@ static int parse_qo(struct context *, const struct token *, static int parse_qo_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_push(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2435,7 +2442,8 @@ static const struct token token_list[] = { ISOLATE, TUNNEL, FLEX, - QUEUE)), + QUEUE, + PUSH)), .call = parse_init, }, /* Top-level command. */ @@ -2779,6 +2787,21 @@ static const struct token token_list[] = { .call = parse_qo_destroy, }, /* Top-level command. */ + [PUSH] = { + .name = "push", + .help = "push enqueued operations", + .next = NEXT(NEXT_ENTRY(PUSH_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_push, + }, + /* Sub-level commands. 
*/ + [PUSH_QUEUE] = { + .name = "queue", + .help = "specify queue id", + .next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8397,6 +8420,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token, } } +/** Parse tokens for push queue command. */ +static int +parse_push(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != PUSH) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + } + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -9768,6 +9819,9 @@ cmd_flow_parsed(const struct buffer *in) in->args.destroy.rule_n, in->args.destroy.rule); break; + case PUSH: + port_queue_flow_push(in->port, in->queue); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 1013c4b252..2e6343972b 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2603,6 +2603,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, return ret; } +/** Push all the queue operations in the queue to the NIC. 
*/ +int +port_queue_flow_push(portid_t port_id, queueid_t queue_id) +{ + struct rte_port *port; + struct rte_flow_error error; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + memset(&error, 0x55, sizeof(error)); + ret = rte_flow_q_push(port_id, queue_id, &error); + if (ret < 0) { + printf("Failed to push operations in the queue\n"); + return -EINVAL; + } + printf("Queue #%u operations pushed\n", queue_id); + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index f574fd77ba..28c6680987 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -939,6 +939,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id, const struct rte_flow_action *actions); int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, bool postpone, uint32_t n, const uint32_t *rule); +int port_queue_flow_push(portid_t port_id, queueid_t queue_id); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index de46bd00d5..dd49e4d1bc 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3372,6 +3372,10 @@ following sections. flow queue {port_id} destroy {queue_id} [postpone {boolean}] rule {rule_id} [...] +- Push enqueued operations:: + + flow push {port_id} queue {queue_id} + - Create a flow rule:: flow create {port_id} @@ -3586,6 +3590,23 @@ The usual error message is shown when a table cannot be destroyed:: Caught error type [...] ([...]): [...] 
+Pushing enqueued operations +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow push`` pushes all the outstanding enqueued operations +to the underlying device immediately. +It is bound to ``rte_flow_q_push()``:: + + flow push {port_id} queue {queue_id} + +If successful, it will show:: + + Queue #[...] operations pushed + +The usual error message is shown when operations cannot be pushed:: + + Caught error type [...] ([...]): [...] + Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From patchwork Sun Feb 6 03:25:25 2022 X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 106901 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev Subject: [PATCH v3 09/10] app/testpmd: implement rte flow pull operations Date: Sun, 6 Feb 2022 05:25:25 +0200 Message-ID: <20220206032526.816079-10-akozyrev@nvidia.com> X-Mailer: git-send-email 2.18.2 In-Reply-To: <20220206032526.816079-1-akozyrev@nvidia.com> References: <20220118153027.3947448-1-akozyrev@nvidia.com> <20220206032526.816079-1-akozyrev@nvidia.com> MIME-Version: 1.0
Add testpmd support for the rte_flow_q_pull API. Provide the command line interface for pulling operations results. Usage example: flow pull 0 queue 0 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 56 +++++++++++++++- app/test-pmd/config.c | 74 +++++++++++++-------- app/test-pmd/testpmd.h | 1 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++ 4 files changed, 127 insertions(+), 29 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 773bf57a14..35eb2a0997 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -95,6 +95,7 @@ enum index { FLEX, QUEUE, PUSH, + PULL, /* Flex arguments */ FLEX_ITEM_INIT, @@ -136,6 +137,9 @@ enum index { /* Push arguments. */ PUSH_QUEUE, + /* Pull arguments. */ + PULL_QUEUE, + /* Table arguments. 
*/ TABLE_CREATE, TABLE_DESTROY, @@ -2164,6 +2168,9 @@ static int parse_qo_destroy(struct context *, const struct token *, static int parse_push(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_pull(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2443,7 +2450,8 @@ static const struct token token_list[] = { TUNNEL, FLEX, QUEUE, - PUSH)), + PUSH, + PULL)), .call = parse_init, }, /* Top-level command. */ @@ -2802,6 +2810,21 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct buffer, queue)), }, /* Top-level command. */ + [PULL] = { + .name = "pull", + .help = "pull flow operations results", + .next = NEXT(NEXT_ENTRY(PULL_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_pull, + }, + /* Sub-level commands. */ + [PULL_QUEUE] = { + .name = "queue", + .help = "specify queue id", + .next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8448,6 +8471,34 @@ parse_push(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for pull command. */ +static int +parse_pull(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != PULL) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + } + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -9822,6 +9873,9 @@ cmd_flow_parsed(const struct buffer *in) case PUSH: port_queue_flow_push(in->port, in->queue); break; + case PULL: + port_queue_flow_pull(in->port, in->queue); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 2e6343972b..6cc2c8527e 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2446,14 +2446,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, const struct rte_flow_action *actions) { struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone }; - struct rte_flow_q_op_res comp = { 0 }; struct rte_flow *flow; struct rte_port *port; struct port_flow *pf; struct port_table *pt; uint32_t id = 0; bool found; - int ret = 0; struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL }; struct rte_flow_action_age *age = age_action_get(actions); @@ -2516,16 +2514,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, return port_flow_complain(&error); } - while (ret == 0) { - /* Poisoning to make sure PMDs update it in case of error. 
*/ - memset(&error, 0x22, sizeof(error)); - ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error); - if (ret < 0) { - printf("Failed to pull queue\n"); - return -EINVAL; - } - } - pf->next = port->flow_list; pf->id = id; pf->flow = flow; @@ -2540,7 +2528,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, bool postpone, uint32_t n, const uint32_t *rule) { struct rte_flow_q_ops_attr op_attr = { .postpone = postpone }; - struct rte_flow_q_op_res comp = { 0 }; struct rte_port *port; struct port_flow **tmp; uint32_t c = 0; @@ -2576,21 +2563,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, ret = port_flow_complain(&error); continue; } - - while (ret == 0) { - /* - * Poisoning to make sure PMD - * update it in case of error. - */ - memset(&error, 0x44, sizeof(error)); - ret = rte_flow_q_pull(port_id, queue_id, - &comp, 1, &error); - if (ret < 0) { - printf("Failed to pull queue\n"); - return -EINVAL; - } - } - printf("Flow rule #%u destruction enqueued\n", pf->id); *tmp = pf->next; free(pf); @@ -2631,6 +2603,52 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id) return ret; } +/** Pull queue operation results from the queue. 
*/ +int +port_queue_flow_pull(portid_t port_id, queueid_t queue_id) +{ + struct rte_port *port; + struct rte_flow_q_op_res *res; + struct rte_flow_error error; + int ret = 0; + int success = 0; + int i; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + res = calloc(port->queue_sz, sizeof(struct rte_flow_q_op_res)); + if (!res) { + printf("Failed to allocate memory for pulled results\n"); + return -ENOMEM; + } + + memset(&error, 0x66, sizeof(error)); + ret = rte_flow_q_pull(port_id, queue_id, res, + port->queue_sz, &error); + if (ret < 0) { + printf("Failed to pull operation results\n"); + free(res); + return -EINVAL; + } + + for (i = 0; i < ret; i++) { + if (res[i].status == RTE_FLOW_Q_OP_SUCCESS) + success++; + } + printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n", + queue_id, ret, ret - success, success); + free(res); + return ret; +} + /** Create flow rule. 
*/ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 28c6680987..8526db6766 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -940,6 +940,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id, int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, bool postpone, uint32_t n, const uint32_t *rule); int port_queue_flow_push(portid_t port_id, queueid_t queue_id); +int port_queue_flow_pull(portid_t port_id, queueid_t queue_id); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index dd49e4d1bc..419e5805e8 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3376,6 +3376,10 @@ following sections. flow push {port_id} queue {queue_id} +- Pull all operations results from a queue:: + + flow pull {port_id} queue {queue_id} + - Create a flow rule:: flow create {port_id} @@ -3607,6 +3611,23 @@ The usual error message is shown when operations cannot be pushed:: Caught error type [...] ([...]): [...] +Pulling flow operations results +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow pull`` asks the underlying device about flow queue operations +results and returns all the processed (successfully or not) operations. +It is bound to ``rte_flow_q_pull()``:: + + flow pull {port_id} queue {queue_id} + +If successful, it will show:: + + Queue #[...] pulled #[...] operations (#[...] failed, #[...] succeeded) + +The usual error message is shown when operations results cannot be pulled:: + + Caught error type [...] ([...]): [...] 
+ Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -3736,6 +3757,8 @@ Otherwise it will show an error message of the form:: This command uses the same pattern items and actions as ``flow create``, their format is described in `Creating flow rules`_. +``flow pull`` must be called to retrieve the operation status. + Attributes ^^^^^^^^^^ @@ -4469,6 +4492,8 @@ message is shown when a rule cannot be destroyed:: Caught error type [...] ([...]): [...] +``flow pull`` must be called to retrieve the operation status. + Querying flow rules ~~~~~~~~~~~~~~~~~~~ From patchwork Sun Feb 6 03:25:26 2022 X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 106903 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev Subject: [PATCH v3 10/10] app/testpmd: implement rte flow queue indirect actions Date: Sun, 6 Feb 2022 05:25:26 +0200 Message-ID: <20220206032526.816079-11-akozyrev@nvidia.com> X-Mailer: git-send-email 2.18.2 In-Reply-To: <20220206032526.816079-1-akozyrev@nvidia.com> References: <20220118153027.3947448-1-akozyrev@nvidia.com> <20220206032526.816079-1-akozyrev@nvidia.com> MIME-Version: 1.0
Add testpmd support for the rte_flow_q_action_handle API. Provide the command line interface for enqueueing indirect action operations. Usage example: flow queue 0 indirect_action 0 create action_id 9 ingress postpone yes action rss / end flow queue 0 indirect_action 0 update action_id 9 action queue index 0 / end flow queue 0 indirect_action 0 destroy action_id 9 Signed-off-by: Alexander Kozyrev Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 276 ++++++++++++++++++++ app/test-pmd/config.c | 131 ++++++++++ app/test-pmd/testpmd.h | 10 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 65 +++++ 4 files changed, 482 insertions(+) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 35eb2a0997..1eea36d8d0 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -121,6 +121,7 @@ enum index { /* Queue arguments. */ QUEUE_CREATE, QUEUE_DESTROY, + QUEUE_INDIRECT_ACTION, /* Queue create arguments. 
*/ QUEUE_CREATE_ID, @@ -134,6 +135,26 @@ enum index { QUEUE_DESTROY_ID, QUEUE_DESTROY_POSTPONE, + /* Queue indirect action arguments */ + QUEUE_INDIRECT_ACTION_CREATE, + QUEUE_INDIRECT_ACTION_UPDATE, + QUEUE_INDIRECT_ACTION_DESTROY, + + /* Queue indirect action create arguments */ + QUEUE_INDIRECT_ACTION_CREATE_ID, + QUEUE_INDIRECT_ACTION_INGRESS, + QUEUE_INDIRECT_ACTION_EGRESS, + QUEUE_INDIRECT_ACTION_TRANSFER, + QUEUE_INDIRECT_ACTION_CREATE_POSTPONE, + QUEUE_INDIRECT_ACTION_SPEC, + + /* Queue indirect action update arguments */ + QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE, + + /* Queue indirect action destroy arguments */ + QUEUE_INDIRECT_ACTION_DESTROY_ID, + QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE, + /* Push arguments. */ PUSH_QUEUE, @@ -1101,6 +1122,7 @@ static const enum index next_table_destroy_attr[] = { static const enum index next_queue_subcmd[] = { QUEUE_CREATE, QUEUE_DESTROY, + QUEUE_INDIRECT_ACTION, ZERO, }; @@ -1110,6 +1132,36 @@ static const enum index next_queue_destroy_attr[] = { ZERO, }; +static const enum index next_qia_subcmd[] = { + QUEUE_INDIRECT_ACTION_CREATE, + QUEUE_INDIRECT_ACTION_UPDATE, + QUEUE_INDIRECT_ACTION_DESTROY, + ZERO, +}; + +static const enum index next_qia_create_attr[] = { + QUEUE_INDIRECT_ACTION_CREATE_ID, + QUEUE_INDIRECT_ACTION_INGRESS, + QUEUE_INDIRECT_ACTION_EGRESS, + QUEUE_INDIRECT_ACTION_TRANSFER, + QUEUE_INDIRECT_ACTION_CREATE_POSTPONE, + QUEUE_INDIRECT_ACTION_SPEC, + ZERO, +}; + +static const enum index next_qia_update_attr[] = { + QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE, + QUEUE_INDIRECT_ACTION_SPEC, + ZERO, +}; + +static const enum index next_qia_destroy_attr[] = { + QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE, + QUEUE_INDIRECT_ACTION_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2165,6 +2217,12 @@ static int parse_qo(struct context *, const struct token *, static int parse_qo_destroy(struct context *, const struct token *, const char *, 
unsigned int, void *, unsigned int); +static int parse_qia(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); +static int parse_qia_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_push(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2741,6 +2799,13 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct buffer, queue)), .call = parse_qo_destroy, }, + [QUEUE_INDIRECT_ACTION] = { + .name = "indirect_action", + .help = "queue indirect actions", + .next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qia, + }, /* Queue arguments. */ [QUEUE_TABLE] = { .name = "table", @@ -2794,6 +2859,90 @@ static const struct token token_list[] = { args.destroy.rule)), .call = parse_qo_destroy, }, + /* Queue indirect action arguments */ + [QUEUE_INDIRECT_ACTION_CREATE] = { + .name = "create", + .help = "create indirect action", + .next = NEXT(next_qia_create_attr), + .call = parse_qia, + }, + [QUEUE_INDIRECT_ACTION_UPDATE] = { + .name = "update", + .help = "update indirect action", + .next = NEXT(next_qia_update_attr, + NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)), + .call = parse_qia, + }, + [QUEUE_INDIRECT_ACTION_DESTROY] = { + .name = "destroy", + .help = "destroy indirect action", + .next = NEXT(next_qia_destroy_attr), + .call = parse_qia_destroy, + }, + /* Indirect action destroy arguments. 
*/
+	[QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone update operation",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
	/* Top-level command.
*/ [PUSH] = { .name = "push", @@ -6193,6 +6342,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for indirect action commands. */ +static int +parse_qia(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != QUEUE) + return -1; + if (sizeof(*out) > size) + return -1; + out->args.vc.data = (uint8_t *)out + size; + return len; + } + switch (ctx->curr) { + case QUEUE_INDIRECT_ACTION: + return len; + case QUEUE_INDIRECT_ACTION_CREATE: + case QUEUE_INDIRECT_ACTION_UPDATE: + out->args.vc.actions = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + out->args.vc.attr.group = UINT32_MAX; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case QUEUE_INDIRECT_ACTION_EGRESS: + out->args.vc.attr.egress = 1; + return len; + case QUEUE_INDIRECT_ACTION_INGRESS: + out->args.vc.attr.ingress = 1; + return len; + case QUEUE_INDIRECT_ACTION_TRANSFER: + out->args.vc.attr.transfer = 1; + return len; + case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE: + return len; + default: + return -1; + } +} + +/** Parse tokens for indirect action destroy command. */ +static int +parse_qia_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *action_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command || out->command == QUEUE) { + if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.ia_destroy.action_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + switch (ctx->curr) { + case QUEUE_INDIRECT_ACTION: + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case QUEUE_INDIRECT_ACTION_DESTROY_ID: + action_id = out->args.ia_destroy.action_id + + out->args.ia_destroy.action_id_n++; + if ((uint8_t *)action_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = action_id; + ctx->objmask = NULL; + return len; + case QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE: + return len; + default: + return -1; + } +} + /** Parse tokens for meter policy action commands. */ static int parse_mp(struct context *ctx, const struct token *token, @@ -9876,6 +10129,29 @@ cmd_flow_parsed(const struct buffer *in) case PULL: port_queue_flow_pull(in->port, in->queue); break; + case QUEUE_INDIRECT_ACTION_CREATE: + port_queue_action_handle_create( + in->port, in->queue, in->postpone, + in->args.vc.attr.group, + &((const struct rte_flow_indir_action_conf) { + .ingress = in->args.vc.attr.ingress, + .egress = in->args.vc.attr.egress, + .transfer = in->args.vc.attr.transfer, + }), + in->args.vc.actions); + break; + case QUEUE_INDIRECT_ACTION_DESTROY: + port_queue_action_handle_destroy(in->port, + in->queue, in->postpone, + in->args.ia_destroy.action_id_n, + in->args.ia_destroy.action_id); + break; + case QUEUE_INDIRECT_ACTION_UPDATE: + port_queue_action_handle_update(in->port, + in->queue, in->postpone, + in->args.vc.attr.group, + in->args.vc.actions); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c 
b/app/test-pmd/config.c index 6cc2c8527e..fbcd42355e 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2575,6 +2575,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, return ret; } +/** Enqueue indirect action create operation. */ +int +port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, + bool postpone, uint32_t id, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action) +{ + const struct rte_flow_q_ops_attr attr = { .postpone = postpone}; + struct rte_port *port; + struct port_indirect_action *pia; + int ret; + struct rte_flow_error error; + + ret = action_alloc(port_id, id, &pia); + if (ret) + return ret; + + port = &ports[port_id]; + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + if (action->type == RTE_FLOW_ACTION_TYPE_AGE) { + struct rte_flow_action_age *age = + (struct rte_flow_action_age *)(uintptr_t)(action->conf); + + pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION; + age->context = &pia->age_type; + } + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x88, sizeof(error)); + pia->handle = rte_flow_q_action_handle_create(port_id, queue_id, &attr, + conf, action, &error); + if (!pia->handle) { + uint32_t destroy_id = pia->id; + port_queue_action_handle_destroy(port_id, queue_id, + postpone, 1, &destroy_id); + return port_flow_complain(&error); + } + pia->type = action->type; + printf("Indirect action #%u creation queued\n", pia->id); + return 0; +} + +/** Enqueue indirect action destroy operation. 
*/ +int +port_queue_action_handle_destroy(portid_t port_id, + uint32_t queue_id, bool postpone, + uint32_t n, const uint32_t *actions) +{ + const struct rte_flow_q_ops_attr attr = { .postpone = postpone}; + struct rte_port *port; + struct port_indirect_action **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + tmp = &port->actions_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_indirect_action *pia = *tmp; + + if (actions[i] != pia->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. + */ + memset(&error, 0x99, sizeof(error)); + + if (pia->handle && + rte_flow_q_action_handle_destroy(port_id, queue_id, + &attr, pia->handle, &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pia->next; + printf("Indirect action #%u destruction queued\n", + pia->id); + free(pia); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + +/** Enqueue indirect action update operation. 
*/ +int +port_queue_action_handle_update(portid_t port_id, + uint32_t queue_id, bool postpone, uint32_t id, + const struct rte_flow_action *action) +{ + const struct rte_flow_q_ops_attr attr = { .postpone = postpone}; + struct rte_port *port; + struct rte_flow_error error; + struct rte_flow_action_handle *action_handle; + + action_handle = port_action_handle_get_by_id(port_id, id); + if (!action_handle) + return -EINVAL; + + port = &ports[port_id]; + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + if (rte_flow_q_action_handle_update(port_id, queue_id, &attr, + action_handle, action, &error)) { + return port_flow_complain(&error); + } + printf("Indirect action #%u update queued\n", id); + return 0; +} + /** Push all the queue operations in the queue to the NIC. */ int port_queue_flow_push(portid_t port_id, queueid_t queue_id) diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 8526db6766..3da5201014 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -939,6 +939,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id, const struct rte_flow_action *actions); int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, bool postpone, uint32_t n, const uint32_t *rule); +int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, + bool postpone, uint32_t id, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action); +int port_queue_action_handle_destroy(portid_t port_id, + uint32_t queue_id, bool postpone, + uint32_t n, const uint32_t *action); +int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id, + bool postpone, uint32_t id, + const struct rte_flow_action *action); int port_queue_flow_push(portid_t port_id, queueid_t queue_id); int port_queue_flow_pull(portid_t port_id, queueid_t queue_id); int port_flow_validate(portid_t port_id, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst 
b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 419e5805e8..0d04435eb7 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4753,6 +4753,31 @@ port 0::
    testpmd> flow indirect_action 0 create action_id \
       ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` adds a creation operation for an
+indirect action to a queue. It is bound to
+``rte_flow_q_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+      [postpone {boolean}] [action_id {indirect_action_id}]
+      [ingress] [egress] [transfer] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -4782,6 +4807,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` adds an update operation for an
+indirect action to a queue. It is bound to
+``rte_flow_q_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
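+
+For example, to enqueue an update of the indirect RSS action with id 100 on
+port 0, flow queue 0, redirecting RSS to queues 0 and 3 (the id and queue
+numbers here are illustrative, mirroring the ``flow indirect_action``
+example above)::
+
+   testpmd> flow queue 0 indirect_action 0 update 100 action rss queues 0 3 end / end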
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -4805,6 +4849,27 @@ Destroy indirect actions having id 100 & 101::
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` adds a destruction operation for one
+or more indirect actions, specified by their indirect action IDs (as
+returned by ``flow queue {port_id} indirect_action {queue_id} create``),
+to a queue. It is bound to ``rte_flow_q_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [postpone {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~