From patchwork Tue Jan 18 15:30:18 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106033
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev <akozyrev@nvidia.com>
Subject: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
Date: Tue, 18 Jan 2022 17:30:18 +0200
Message-ID: <20220118153027.3947448-2-akozyrev@nvidia.com>
In-Reply-To: <20220118153027.3947448-1-akozyrev@nvidia.com>
References: <20211006044835.3936226-1-akozyrev@nvidia.com> <20220118153027.3947448-1-akozyrev@nvidia.com>
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

The creation and destruction of flow rules at a large scale incurs a
performance penalty and may negatively impact packet processing when
used as part of the datapath logic. This is mainly because software and
hardware resources are allocated and prepared during flow rule creation.

To optimize the insertion rate, a PMD may use hints provided by the
application at the initialization phase. The rte_flow_configure()
function allows the application to pre-allocate all the needed resources
beforehand. These resources can then be used at a later stage without
costly allocations. A PMD may use only a subset of the hints, ignore
unused ones, or fail if the requested configuration is not supported.

Signed-off-by: Alexander Kozyrev
---
 doc/guides/prog_guide/rte_flow.rst     | 37 +++++++++++++++
 doc/guides/rel_notes/release_22_03.rst |  2 +
 lib/ethdev/rte_flow.c                  | 20 ++++++++
 lib/ethdev/rte_flow.h                  | 63 ++++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  5 ++
 lib/ethdev/version.map                 |  3 ++
 6 files changed, 130 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b4aa9c47c2..86f8c8bda2 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3589,6 +3589,43 @@ Return values:

 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.

+Rules management configuration
+------------------------------
+
+Configure flow rules management.
+
+An application may provide some hints at the initialization phase about
+the expected rules management configuration and/or flow rules characteristics.
+These hints may be used by the PMD to pre-allocate resources and configure
+the NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow rules management configuration and
+pre-allocates the needed resources beforehand to avoid costly allocations
+later. Hints about the expected number of counters or meters in an
+application, for example, allow the PMD to prepare and optimize the NIC
+memory layout in advance. ``rte_flow_configure()`` must be called before
+any flow rule is created, but after the Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                      const struct rte_flow_port_attr *port_attr,
+                      struct rte_flow_error *error);
+
+Arguments:
+
+- ``port_id``: port identifier of Ethernet device.
+- ``port_attr``: port attributes for the flow management library.
+- ``error``: perform verbose error reporting if not NULL. PMDs initialize
+  this structure in case of error only.
+
+Return values:
+
+- 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
+
 .. _flow_isolated_mode:

 Flow isolated mode

diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 16c66c0641..71b3f0a651 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,8 @@ New Features

   Also, make sure to start the actual text at the margin.
   =======================================================

+* ethdev: Added ``rte_flow_configure`` API to configure the Flow Management
+  library, allowing applications to pre-allocate some resources for better
+  performance.
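As a usage sketch of the pre-allocation hints described above, the following self-contained snippet mirrors the ``struct rte_flow_port_attr`` layout from this patch locally (so it compiles without the patched headers); the helper name and the chosen sizes are hypothetical application expectations, not recommendations from the patch.

```c
#include <stdint.h>

/* Local mirror of struct rte_flow_port_attr as defined in this patch,
 * reproduced here only so the sketch is self-contained. */
struct rte_flow_port_attr {
	uint32_t version;     /* struct layout version, should be 0 */
	uint32_t nb_counters; /* 0: PMD allocates counters dynamically */
	uint32_t nb_aging;    /* 0: PMD allocates aging contexts dynamically */
	uint32_t nb_meters;   /* 0: PMD allocates meters dynamically */
};

/* Hints an application expecting roughly 1M counted flows, 64K aged
 * flows and 1K metered flows might build before calling
 * rte_flow_configure() (hypothetical helper for illustration). */
struct rte_flow_port_attr
example_flow_hints(void)
{
	struct rte_flow_port_attr attr = {
		.version = 0,
		.nb_counters = 1 << 20,
		.nb_aging = 1 << 16,
		.nb_meters = 1 << 10,
	};
	return attr;
}
```

With the real API, the application would pass the filled structure to ``rte_flow_configure(port_id, &attr, &error)`` after ``rte_eth_dev_configure()`` and before creating any flow rule.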
 Removed Items
 -------------

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index a93f68abbc..5b78780ef9 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1391,3 +1391,23 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->configure)) {
+		return flow_err(port_id,
+				ops->configure(dev, port_attr, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 1031fb246b..e145e68525 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4853,6 +4853,69 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);

+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine port configuration attributes.
+ */
+__extension__
+struct rte_flow_port_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Number of counter actions pre-configured.
+	 * If set to 0, PMD will allocate counters dynamically.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging actions pre-configured.
+	 * If set to 0, PMD will allocate aging dynamically.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging;
+	/**
+	 * Number of traffic metering actions pre-configured.
+	 * If set to 0, PMD will allocate meters dynamically.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure the flow rules module.
+ * To pre-allocate resources as per the flow port attributes,
+ * this configuration function must be called before any flow rule is created.
+ * It must be called only after the Ethernet device is configured, but may be
+ * called before or after the device is started, as long as there are no flow
+ * rules. No other rte_flow function should be called while this function is
+ * invoked. This function can be called again to change the configuration.
+ * Some PMDs may not support re-configuration at all, or may only allow
+ * increasing the number of resources allocated.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif

diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..5f722f1a39 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,11 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };

diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index c2fb0669a4..7645796739 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -256,6 +256,9 @@ EXPERIMENTAL {
 	rte_flow_flex_item_create;
 	rte_flow_flex_item_release;
 	rte_flow_pick_transfer_proxy;
+
+	# added in 22.03
+	rte_flow_configure;
 };

 INTERNAL {

From patchwork Tue Jan 18 15:30:19 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106030
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev <akozyrev@nvidia.com>
Subject: [PATCH v2 02/10] ethdev: add flow item/action templates
Date: Tue, 18 Jan 2022 17:30:19 +0200
Message-ID: <20220118153027.3947448-3-akozyrev@nvidia.com>
In-Reply-To: <20220118153027.3947448-1-akozyrev@nvidia.com>
References: <20211006044835.3936226-1-akozyrev@nvidia.com> <20220118153027.3947448-1-akozyrev@nvidia.com>
MIME-Version: 1.0
Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list), so they can be grouped and classified together. This
knowledge may be used as a source of optimization by a PMD/HW.

The item template defines common matching fields (the item mask) without
values. The action template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during rule creation.
A table combines item and action templates along with shared flow rule
attributes (group ID, priority, and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rule creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at table creation time.

The flow rule creation is done by selecting a table, an item template and
an action template (which are bound to the table), and setting unique
values for the items and actions.

Signed-off-by: Alexander Kozyrev
---
 doc/guides/prog_guide/rte_flow.rst     | 124 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 141 +++++++++++++
 lib/ethdev/rte_flow.h                  | 269 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 585 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 86f8c8bda2..aa9d4e9573 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3626,6 +3626,130 @@ Return values:

 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.

+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list), so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, an item template
+and an action template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Item templates
+^^^^^^^^^^^^^^
+
+The item template defines a common pattern (the item mask) without values.
+The mask value is used to select the fields to match on; spec/last are
+ignored. The item template may be used by multiple tables and must not be
+destroyed until all these tables are destroyed first.
+
+.. code-block:: c
+
+    struct rte_flow_item_template *
+    rte_flow_item_template_create(uint16_t port_id,
+                const struct rte_flow_item_template_attr *it_attr,
+                const struct rte_flow_item items[],
+                struct rte_flow_error *error);
+
+For example, to create an item template to match on the destination MAC:
+
+.. code-block:: c
+
+    struct rte_flow_item items[2] = {{0}};
+    struct rte_flow_item_eth eth_m = {0};
+
+    memset(eth_m.dst.addr_bytes, 0xff, sizeof(eth_m.dst.addr_bytes));
+    items[0].type = RTE_FLOW_ITEM_TYPE_ETH;
+    items[0].mask = &eth_m;
+    items[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+    struct rte_flow_item_template *it =
+        rte_flow_item_template_create(port, &itr, items, &error);
+
+The concrete value to match on will be provided at rule creation.
+
+Action templates
+^^^^^^^^^^^^^^^^
+
+The action template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The action template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+    struct rte_flow_action_template *
+    rte_flow_action_template_create(uint16_t port_id,
+                const struct rte_flow_action_template_attr *at_attr,
+                const struct rte_flow_action actions[],
+                const struct rte_flow_action masks[],
+                struct rte_flow_error *error);
+
+For example, to create an action template with the same Mark ID
+but a different Queue Index for every rule:
+
+.. code-block:: c
+
+    struct rte_flow_action actions[] = {
+        /* Mark ID is constant (4) for every rule, Queue Index is unique */
+        [0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+               .conf = &(struct rte_flow_action_mark){.id = 4}},
+        [1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+        [2] = {.type = RTE_FLOW_ACTION_TYPE_END},
+    };
+    struct rte_flow_action masks[] = {
+        /* Assign to MARK mask any non-zero value to make it constant */
+        [0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+               .conf = &(struct rte_flow_action_mark){.id = 1}},
+        [1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+        [2] = {.type = RTE_FLOW_ACTION_TYPE_END},
+    };
+
+    struct rte_flow_action_template *at =
+        rte_flow_action_template_create(port, &atr, actions, masks, &error);
+
+The concrete value for Queue Index will be provided at rule creation.
+
+Flow table
+^^^^^^^^^^
+
+A table combines a number of item and action templates along with shared flow
+rule attributes (group ID, priority, and traffic direction). This way a PMD/HW
+can prepare all the resources needed for efficient flow rule creation in
+the datapath. To avoid any hiccups due to memory reallocation, the maximum
+number of flow rules is defined at table creation time. Any flow rule
+creation beyond the maximum table size is rejected. The application may
+create another table to accommodate more rules in this case.
+
+.. code-block:: c
+
+    struct rte_flow_table *
+    rte_flow_table_create(uint16_t port_id,
+                const struct rte_flow_table_attr *table_attr,
+                struct rte_flow_item_template *item_templates[],
+                uint8_t nb_item_templates,
+                struct rte_flow_action_template *action_templates[],
+                uint8_t nb_action_templates,
+                struct rte_flow_error *error);
+
+A table can be created only after the flow rules management is configured
+and the item and action templates are created.
+
+.. code-block:: c
+
+    rte_flow_configure(port, &port_attr, &error);
+
+    struct rte_flow_item_template *it[1];
+    it[0] = rte_flow_item_template_create(port, &itr, items, &error);
+
+    struct rte_flow_action_template *at[1];
+    at[0] = rte_flow_action_template_create(port, &atr, actions, masks,
+                                            &error);
+
+    struct rte_flow_table *table =
+        rte_flow_table_create(port, &table_attr,
+                it, nb_item_templates,
+                at, nb_action_templates,
+                &error);
+
 .. _flow_isolated_mode:

 Flow isolated mode

diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 71b3f0a651..af56f54bc4 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -58,6 +58,14 @@ New Features

 * ethdev: Added ``rte_flow_configure`` API to configure the Flow Management
   library, allowing applications to pre-allocate some resources for better
   performance.

+* ethdev: Added ``rte_flow_table_create`` API to group flow rules with
+  the same flow attributes and common matching patterns and actions
+  defined by ``rte_flow_item_template_create`` and
+  ``rte_flow_action_template_create`` respectively.
+  The corresponding functions to destroy these entities are
+  ``rte_flow_table_destroy``, ``rte_flow_item_template_destroy``
+  and ``rte_flow_action_template_destroy``.
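The action-template mask convention used above (a non-zero value in a mask's ``conf`` marks the field as a constant shared by all rules, while zero leaves it to be supplied per rule) can be sketched with a toy predicate. This is illustrative logic only, not DPDK code: the ``toy_action_mark`` struct and the function name are invented stand-ins mirroring ``struct rte_flow_action_mark``.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for struct rte_flow_action_mark. */
struct toy_action_mark {
	uint32_t id;
};

/* Per the doc above: any non-zero id in the mask makes the MARK id a
 * template-time constant shared by every rule in the table; a zero or
 * absent mask means the id must be provided at rule creation time. */
static bool
mark_id_is_constant(const struct toy_action_mark *mask)
{
	return mask != NULL && mask->id != 0;
}
```

This mirrors the earlier example, where the MARK mask uses ``.id = 1`` purely to flag "constant", while QUEUE carries no mask and so its index stays per-rule.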
+
 Removed Items
 -------------

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 5b78780ef9..20613f6bed 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1411,3 +1411,144 @@ rte_flow_configure(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_item_template *
+rte_flow_item_template_create(uint16_t port_id,
+			const struct rte_flow_item_template_attr *it_attr,
+			const struct rte_flow_item items[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_item_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->item_template_create)) {
+		template = ops->item_template_create(dev, it_attr,
+						     items, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_item_template_destroy(uint16_t port_id,
+			       struct rte_flow_item_template *it,
+			       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->item_template_destroy)) {
+		return flow_err(port_id,
+				ops->item_template_destroy(dev, it, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_action_template *
+rte_flow_action_template_create(uint16_t port_id,
+			const struct rte_flow_action_template_attr *at_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_action_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->action_template_create)) {
+		template = ops->action_template_create(dev, at_attr,
+						       actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_action_template_destroy(uint16_t port_id,
+				 struct rte_flow_action_template *at,
+				 struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->action_template_destroy)) {
+		return flow_err(port_id,
+				ops->action_template_destroy(dev, at, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_table *
+rte_flow_table_create(uint16_t port_id,
+		      const struct rte_flow_table_attr *table_attr,
+		      struct rte_flow_item_template *item_templates[],
+		      uint8_t nb_item_templates,
+		      struct rte_flow_action_template *action_templates[],
+		      uint8_t nb_action_templates,
+		      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_table *table;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->table_create)) {
+		table = ops->table_create(dev, table_attr,
+					  item_templates, nb_item_templates,
+					  action_templates,
+					  nb_action_templates,
+					  error);
+		if (table == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return table;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_table_destroy(uint16_t port_id,
+		       struct rte_flow_table *table,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->table_destroy)) {
+		return flow_err(port_id,
+				ops->table_destroy(dev, table, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index e145e68525..2e54e9d0e3 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4916,6 +4916,275 @@ rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
 		   struct rte_flow_error *error);

+/**
+ * Opaque type returned after successful creation of an item template.
+ * This handle can be used to manage the created item template.
+ */
+struct rte_flow_item_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow item template attributes.
+ */
+__extension__
+struct rte_flow_item_template_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Relaxed matching policy: if set, the PMD may match only on items
+	 * with the mask member set, and skip matching on protocol layers
+	 * specified without any masks.
+	 * If not set, the PMD will match on protocol layers specified
+	 * without any masks as well.
+	 * Packet data must be stacked in the same order as the protocol
+	 * layers to match inside packets, starting from the lowest.
+	 */
+	uint32_t relaxed_matching:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create an item template.
+ * The item template defines common matching fields (the item mask) without
+ * values. For example, when matching on a 5-tuple TCP flow, the template
+ * will be eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
+ * while the values for each rule will be set during flow rule creation.
+ * The number and order of items in the template must be the same
+ * at flow rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] it_attr
+ *   Item template attributes.
+ * @param[in] items
+ *   Pattern specification (list terminated by the END pattern item).
+ *   The spec member of an item is not used unless the end member is used.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_item_template *
+rte_flow_item_template_create(uint16_t port_id,
+		const struct rte_flow_item_template_attr *it_attr,
+		const struct rte_flow_item items[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy item template.
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] it
+ *   Handle of the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_item_template_destroy(uint16_t port_id,
+		struct rte_flow_item_template *it,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of action template.
+ * This handle can be used to manage the created action template.
+ */
+struct rte_flow_action_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow action template attributes.
+ */
+struct rte_flow_action_template_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/* No attributes so far.
	 */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create action template.
+ * The action template holds a list of action types without values.
+ * For example, the template to change TCP ports is TCP(s_port + d_port),
+ * while the values for each rule are set at flow rule creation.
+ * The number and order of actions in the template must be the same
+ * at flow rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] at_attr
+ *   Template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *   The spec member is only used if @p masks spec is non-zero.
+ * @param[in] masks
+ *   List of actions that marks which of the action members are constant.
+ *   A mask has the same format as the corresponding action.
+ *   If the action field in @p masks is not 0,
+ *   the corresponding value in an action from @p actions will be part
+ *   of the template and used in all flow rules.
+ *   The order of actions in @p masks is the same as in @p actions.
+ *   In case of indirect actions present in @p actions,
+ *   the actual action type should be present in @p masks.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_action_template *
+rte_flow_action_template_create(uint16_t port_id,
+		const struct rte_flow_action_template_attr *at_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy action template.
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] at + * Handle to the template to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_action_template_destroy(uint16_t port_id, + struct rte_flow_action_template *at, + struct rte_flow_error *error); + + +/** + * Opaque type returned after successful creation of table. + * This handle can be used to manage the created table. + */ +struct rte_flow_table; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Table attributes. + */ +struct rte_flow_table_attr { + /** + * Version of the struct layout, should be 0. + */ + uint32_t version; + /** + * Flow attributes that will be used in the table. + */ + struct rte_flow_attr flow_attr; + /** + * Maximum number of flow rules that this table holds. + */ + uint32_t nb_flows; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create table. + * Table is a group of flow rules with the same flow attributes + * (group ID, priority and traffic direction) defined for it. + * The table holds multiple item and action templates to build a flow rule. + * Each rule is free to use any combination of item and action templates + * and specify particular values for items and actions it would like to change. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] table_attr + * Table attributes. + * @param[in] item_templates + * Array of item templates to be used in this table. + * @param[in] nb_item_templates + * The number of item templates in the item_templates array. + * @param[in] action_templates + * Array of action templates to be used in this table. + * @param[in] nb_action_templates + * The number of action templates in the action_templates array. 
+ * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. + */ +__rte_experimental +struct rte_flow_table * +rte_flow_table_create(uint16_t port_id, + const struct rte_flow_table_attr *table_attr, + struct rte_flow_item_template *item_templates[], + uint8_t nb_item_templates, + struct rte_flow_action_template *action_templates[], + uint8_t nb_action_templates, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy table. + * This function may be called only when + * there are no more flow rules referencing this table. + * + * @param port_id + * Port identifier of Ethernet device. + * @param[in] table + * Handle to the table to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +__rte_experimental +int +rte_flow_table_destroy(uint16_t port_id, + struct rte_flow_table *table, + struct rte_flow_error *error); + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index 5f722f1a39..cda021c302 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -157,6 +157,43 @@ struct rte_flow_ops { (struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr, struct rte_flow_error *err); + /** See rte_flow_item_template_create() */ + struct rte_flow_item_template *(*item_template_create) + (struct rte_eth_dev *dev, + const struct rte_flow_item_template_attr *it_attr, + const struct rte_flow_item items[], + struct rte_flow_error *err); + /** See rte_flow_item_template_destroy() */ + int (*item_template_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_item_template *it, + struct rte_flow_error *err); + /** See rte_flow_action_template_create() */ + struct rte_flow_action_template *(*action_template_create) + (struct rte_eth_dev *dev, + const struct rte_flow_action_template_attr *at_attr, + const struct rte_flow_action actions[], + const struct rte_flow_action masks[], + struct rte_flow_error *err); + /** See rte_flow_action_template_destroy() */ + int (*action_template_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_action_template *at, + struct rte_flow_error *err); + /** See rte_flow_table_create() */ + struct rte_flow_table *(*table_create) + (struct rte_eth_dev *dev, + const struct rte_flow_table_attr *table_attr, + struct rte_flow_item_template *item_templates[], + uint8_t nb_item_templates, + struct rte_flow_action_template *action_templates[], + uint8_t nb_action_templates, + struct rte_flow_error *err); + /** See rte_flow_table_destroy() */ + int (*table_destroy) + (struct rte_eth_dev *dev, + struct rte_flow_table *table, + struct rte_flow_error *err); }; /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 7645796739..cfd5e2a3e4 
100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -259,6 +259,12 @@ EXPERIMENTAL {

	# added in 22.03
	rte_flow_configure;
+	rte_flow_item_template_create;
+	rte_flow_item_template_destroy;
+	rte_flow_action_template_create;
+	rte_flow_action_template_destroy;
+	rte_flow_table_create;
+	rte_flow_table_destroy;
 };

 INTERNAL {

From patchwork Tue Jan 18 15:30:20 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106031
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev
Subject: [PATCH v2 03/10] ethdev: bring in async queue-based flow rules operations
Date: Tue, 18 Jan 2022 17:30:20 +0200
Message-ID: <20220118153027.3947448-4-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20220118153027.3947448-1-akozyrev@nvidia.com>
References: <20211006044835.3936226-1-akozyrev@nvidia.com>
 <20220118153027.3947448-1-akozyrev@nvidia.com>
A new, faster, queue-based flow rules management mechanism is needed
for applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe and the queue
should be accessed from the same thread for all queue operations.
It is the responsibility of the app to sync the queue functions in case
of multi-threaded access to the same queue.

The rte_flow_q_flow_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_q_dequeue() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_q_flow_destroy() function
enqueues a flow destruction to the requested queue.
Signed-off-by: Alexander Kozyrev
---
 doc/guides/prog_guide/img/rte_flow_q_init.svg |  71 ++++
 .../prog_guide/img/rte_flow_q_usage.svg       |  60 +++
 doc/guides/prog_guide/rte_flow.rst            | 158 ++++++++
 doc/guides/rel_notes/release_22_03.rst        |   9 +
 lib/ethdev/rte_flow.c                         | 173 ++++++++-
 lib/ethdev/rte_flow.h                         | 348 ++++++++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  61 +++
 lib/ethdev/version.map                        |   7 +
 8 files changed, 886 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg
new file mode 100644
index 0000000000..994e85521c
[SVG markup elided; the figure shows the initialization sequence:
rte_eal_init() -> rte_eth_dev_configure() -> rte_flow_configure() ->
rte_flow_item_template_create() -> rte_flow_action_template_create() ->
rte_flow_table_create() -> rte_eth_dev_start()]

diff --git a/doc/guides/prog_guide/img/rte_flow_q_usage.svg b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
new file mode 100644
index 0000000000..14447ef8eb
[SVG markup elided; the figure shows the datapath loop: rte_eth_rx_burst(),
packet analysis, rte_flow_q_create_flow()/rte_flow_q_destroy_flow() on
add/destroy decisions, rte_flow_q_drain() once there are no more packets,
and rte_flow_q_dequeue() to collect operation results]

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index aa9d4e9573..b004811a20 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3607,18 +3607,22 @@ Hints about the expected number of counters or meters in an application,
 for example, allow PMD to prepare and optimize NIC memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rule operations via
+the queue-based API, see the `Asynchronous operations`_ section.

 .. code-block:: c

    int
    rte_flow_configure(uint16_t port_id,
                       const struct rte_flow_port_attr *port_attr,
+                      const struct rte_flow_queue_attr *queue_attr[],
                       struct rte_flow_error *error);

 Arguments:

 - ``port_id``: port identifier of Ethernet device.
 - ``port_attr``: port attributes for flow management library.
+- ``queue_attr``: queue attributes for asynchronous operations.
 - ``error``: perform verbose error reporting if not NULL. PMDs initialize
   this structure in case of error only.

@@ -3750,6 +3754,160 @@ and item and action templates are created.
        *at, nb_action_templates,
        *error);

+Asynchronous operations
+-----------------------
+
+Flow rule creation and destruction can be done through lockless flow queues.
+An application configures the number of queues during the initialization stage.
+Create/destroy operations are then enqueued asynchronously without any locks.
+With this asynchronous queue-based approach, packet processing can continue
+handling the next packets while the insertion/destruction of a flow rule
+is processed inside the hardware. The application is expected to poll for
+results later to see if the flow rule was successfully inserted/destroyed.
+User data is returned as part of the result to identify the enqueued operation.
+Polling must be done periodically, before the queue overflows.
+Operations can be reordered inside a queue, so the result of a rule creation
+must be polled before the destroy operation for that rule is enqueued.
+A flow handle is valid once the create operation is enqueued and must be
+destroyed even if the operation is not successful and the rule is not inserted.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_q_init:
+
+.. figure:: img/rte_flow_q_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_q_usage:
+
+.. figure:: img/rte_flow_q_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+   struct rte_flow *
+   rte_flow_q_flow_create(uint16_t port_id,
+                          uint32_t queue_id,
+                          const struct rte_flow_q_ops_attr *q_ops_attr,
+                          struct rte_flow_table *table,
+                          const struct rte_flow_item items[],
+                          uint8_t item_template_index,
+                          const struct rte_flow_action actions[],
+                          uint8_t action_template_index,
+                          struct rte_flow_error *error);
+
+A valid handle in case of success is returned. It must be destroyed later
+by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by HW.
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_flow_destroy(uint16_t port_id,
+                           uint32_t queue_id,
+                           const struct rte_flow_q_ops_attr *q_ops_attr,
+                           struct rte_flow *flow,
+                           struct rte_flow_error *error);
+
+Drain a queue
+~~~~~~~~~~~~~
+
+Function to drain the queue and push all internally stored rules to the NIC.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_drain(uint16_t port_id,
+                    uint32_t queue_id,
+                    struct rte_flow_error *error);
+
+A drain attribute is part of the queue operation attributes.
+When it is set, the requested operation must be sent to the HW without delay.
+If it is not set, multiple operations can be bulked together and not sent to
+HW right away, to save SW/HW interactions and prioritize throughput over
+latency. In the latter case, the application must invoke this function to
+actually push all outstanding operations to HW.
+
+Dequeue operations
+~~~~~~~~~~~~~~~~~~
+
+Dequeue flow operation results.
+
+The application must invoke this function in order to complete the
+asynchronous flow rule operations and to receive the flow rule operation
+status.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_dequeue(uint16_t port_id,
+                      uint32_t queue_id,
+                      struct rte_flow_q_op_res res[],
+                      uint16_t n_res,
+                      struct rte_flow_error *error);
+
+Multiple outstanding operations can be dequeued simultaneously.
+User data may be provided during a flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
+
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action creation API.
+
+.. code-block:: c
+
+   struct rte_flow_action_handle *
+   rte_flow_q_action_handle_create(uint16_t port_id,
+                   uint32_t queue_id,
+                   const struct rte_flow_q_ops_attr *q_ops_attr,
+                   const struct rte_flow_indir_action_conf *indir_action_conf,
+                   const struct rte_flow_action *action,
+                   struct rte_flow_error *error);
+
+A valid handle in case of success is returned. It must be destroyed later by
+calling ``rte_flow_q_action_handle_destroy()`` even if the rule is rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action destruction API.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_action_handle_destroy(uint16_t port_id,
+                   uint32_t queue_id,
+                   const struct rte_flow_q_ops_attr *q_ops_attr,
+                   struct rte_flow_action_handle *action_handle,
+                   struct rte_flow_error *error);
+
+Enqueue indirect action update operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action update API.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_action_handle_update(uint16_t port_id,
+                   uint32_t queue_id,
+                   const struct rte_flow_q_ops_attr *q_ops_attr,
+                   struct rte_flow_action_handle *action_handle,
+                   const void *update,
+                   struct rte_flow_error *error);
+
 .. _flow_isolated_mode:

 Flow isolated mode

diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index af56f54bc4..7ccac912a3 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -66,6 +66,15 @@ New Features
   ``rte_flow_table_destroy``, ``rte_flow_item_template_destroy``
   and ``rte_flow_action_template_destroy`` respectively.

+* ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy`` API
+  to enqueue flow creation/destruction operations asynchronously as well as
+  ``rte_flow_q_dequeue`` to poll results of these operations and
+  ``rte_flow_q_drain`` to drain the flow queue and pass all operations to the
+  NIC. Introduced asynchronous API for indirect actions management as well:
+  ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy`` and
+  ``rte_flow_q_action_handle_update``.
+ + Removed Items ------------- diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 20613f6bed..6da899c5df 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -1395,6 +1395,7 @@ rte_flow_flex_item_release(uint16_t port_id, int rte_flow_configure(uint16_t port_id, const struct rte_flow_port_attr *port_attr, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; @@ -1404,7 +1405,8 @@ rte_flow_configure(uint16_t port_id, return -rte_errno; if (likely(!!ops->configure)) { return flow_err(port_id, - ops->configure(dev, port_attr, error), + ops->configure(dev, port_attr, + queue_attr, error), error); } return rte_flow_error_set(error, ENOTSUP, @@ -1552,3 +1554,172 @@ rte_flow_table_destroy(uint16_t port_id, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, rte_strerror(ENOTSUP)); } + +struct rte_flow * +rte_flow_q_flow_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_table *table, + const struct rte_flow_item items[], + uint8_t item_template_index, + const struct rte_flow_action actions[], + uint8_t action_template_index, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow *flow; + + if (unlikely(!ops)) + return NULL; + if (likely(!!ops->q_flow_create)) { + flow = ops->q_flow_create(dev, queue_id, q_ops_attr, table, + items, item_template_index, + actions, action_template_index, + error); + if (flow == NULL) + flow_err(port_id, -rte_errno, error); + return flow; + } + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); + return NULL; +} + +int +rte_flow_q_flow_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow *flow, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = 
&rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->q_flow_destroy)) { + return flow_err(port_id, + ops->q_flow_destroy(dev, queue_id, + q_ops_attr, flow, error), + error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} + +struct rte_flow_action_handle * +rte_flow_q_action_handle_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + struct rte_flow_action_handle *handle; + + if (unlikely(!ops)) + return NULL; + if (unlikely(!ops->q_action_handle_create)) { + rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); + return NULL; + } + handle = ops->q_action_handle_create(dev, queue_id, q_ops_attr, + indir_action_conf, action, error); + if (handle == NULL) + flow_err(port_id, -rte_errno, error); + return handle; +} + +int +rte_flow_q_action_handle_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (unlikely(!ops->q_action_handle_destroy)) + return rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOSYS)); + ret = ops->q_action_handle_destroy(dev, queue_id, q_ops_attr, + action_handle, error); + return flow_err(port_id, ret, error); +} + +int +rte_flow_q_action_handle_update(uint16_t port_id, + uint32_t 
queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (unlikely(!ops->q_action_handle_update)) + return rte_flow_error_set(error, ENOSYS, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOSYS)); + ret = ops->q_action_handle_update(dev, queue_id, q_ops_attr, + action_handle, update, error); + return flow_err(port_id, ret, error); +} + +int +rte_flow_q_drain(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->q_drain)) { + return flow_err(port_id, + ops->q_drain(dev, queue_id, error), + error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} + +int +rte_flow_q_dequeue(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error) +{ + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); + int ret; + + if (unlikely(!ops)) + return -rte_errno; + if (likely(!!ops->q_dequeue)) { + ret = ops->q_dequeue(dev, queue_id, res, n_res, error); + return ret ? ret : flow_err(port_id, ret, error); + } + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, rte_strerror(ENOTSUP)); +} diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index 2e54e9d0e3..07193090f2 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -4865,6 +4865,13 @@ struct rte_flow_port_attr { * Version of the struct layout, should be 0. 
*/ uint32_t version; + /** + * Number of flow queues to be configured. + * Flow queues are used for asynchronous flow rule operations. + * The order of operations is not guaranteed inside a queue. + * Flow queues are not thread-safe. + */ + uint16_t nb_queues; /** * Number of counter actions pre-configured. * If set to 0, PMD will allocate counters dynamically. @@ -4885,6 +4892,21 @@ struct rte_flow_port_attr { uint32_t nb_meters; }; +/** + * Flow engine queue configuration. + */ +__extension__ +struct rte_flow_queue_attr { + /** + * Version of the struct layout, should be 0. + */ + uint32_t version; + /** + * Number of flow rule operations a queue can hold. + */ + uint32_t size; +}; + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice. @@ -4903,6 +4925,9 @@ struct rte_flow_port_attr { * Port identifier of Ethernet device. * @param[in] port_attr * Port configuration attributes. + * @param[in] queue_attr + * Array that holds attributes for each flow queue. + * Number of elements is set in @p port_attr.nb_queues. * @param[out] error * Perform verbose error reporting if not NULL. * PMDs initialize this structure in case of error only. @@ -4914,6 +4939,7 @@ __rte_experimental int rte_flow_configure(uint16_t port_id, const struct rte_flow_port_attr *port_attr, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *error); /** @@ -5185,6 +5211,328 @@ rte_flow_table_destroy(uint16_t port_id, struct rte_flow_table *table, struct rte_flow_error *error); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Queue operation attributes. + */ +struct rte_flow_q_ops_attr { + /** + * Version of the struct layout, should be 0. + */ + uint32_t version; + /** + * The user data that will be returned on the completion events. + */ + void *user_data; + /** + * When set, the requested action must be sent to the HW without + * any delay. Any prior requests must be also sent to the HW. 
+ * If this bit is cleared, the application must call the + rte_flow_q_drain API to actually send the request to the HW. + */ + uint32_t drain:1; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue rule creation operation. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue used to insert the rule. + * @param[in] q_ops_attr + * Rule creation operation attributes. + * @param[in] table + * Table to select templates from. + * @param[in] items + * List of pattern items to be used. + * The list order should match the order in the item template. + * The spec is the only relevant member of the item that is being used. + * @param[in] item_template_index + * Item template index in the table. + * @param[in] actions + * List of actions to be used. + * The list order should match the order in the action template. + * @param[in] action_template_index + * Action template index in the table. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Handle on success, NULL otherwise and rte_errno is set. + * A returned rule handle doesn't mean the rule was already offloaded. + * Only the completion result indicates that the rule was offloaded. + */ +__rte_experimental +struct rte_flow * +rte_flow_q_flow_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_table *table, + const struct rte_flow_item items[], + uint8_t item_template_index, + const struct rte_flow_action actions[], + uint8_t action_template_index, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue rule destruction operation. + * + * This function enqueues a destruction operation on the queue. + * The application should assume that after calling this function + * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue which is used to destroy the rule. + * This must match the queue on which the rule was created. + * @param[in] q_ops_attr + * Rule destroy operation attributes. + * @param[in] flow + * Flow handle to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +__rte_experimental +int +rte_flow_q_flow_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow *flow, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action creation operation. + * @see rte_flow_action_handle_create + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to create the rule. + * @param[in] q_ops_attr + * Queue operation attributes. + * @param[in] indir_action_conf + * Action configuration for the indirect action object creation. + * @param[in] action + * Specific configuration of the indirect action object. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if action pointed by *action* handle was not found. + * - (-EBUSY) if action pointed by *action* handle still used by some rules + * rte_errno is also set. 
+ */ +__rte_experimental +struct rte_flow_action_handle * +rte_flow_q_action_handle_create(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action destruction operation. + * The destroy queue must be the same + * as the queue on which the action was created. + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to destroy the rule. + * @param[in] q_ops_attr + * Queue operation attributes. + * @param[in] action_handle + * Handle for the indirect action object to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if action pointed by *action* handle was not found. + * - (-EBUSY) if action pointed by *action* handle still used by some rules + * rte_errno is also set. + */ +__rte_experimental +int +rte_flow_q_action_handle_destroy(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue indirect action update operation. + * @see rte_flow_action_handle_create + * + * @param[in] port_id + * Port identifier of Ethernet device. + * @param[in] queue_id + * Flow queue which is used to update the rule. + * @param[in] q_ops_attr + * Queue operation attributes. 
+ * @param[in] action_handle + * Handle for the indirect action object to be updated. + * @param[in] update + * Update profile specification used to modify the action pointed by handle. + * *update* can be of the same type as the immediate action used when the + * *handle* was created, or a wrapper structure that includes the action + * configuration to be updated and bit fields indicating which members + * of the action to update. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * - (0) if success. + * - (-ENODEV) if *port_id* invalid. + * - (-ENOSYS) if underlying device does not support this functionality. + * - (-EIO) if underlying device is removed. + * - (-ENOENT) if action pointed by *action* handle was not found. + * - (-EBUSY) if action pointed by *action* handle still used by some rules + * rte_errno is also set. + */ +__rte_experimental +int +rte_flow_q_action_handle_update(uint16_t port_id, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Drain the queue and push all internally stored rules to the HW. + * Non-drained rules are rules that were inserted without the drain flag set. + * Can be used to notify the HW about a batch of rules prepared by the SW to + * reduce the number of communications between the HW and SW. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue to be drained. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */ +__rte_experimental +int +rte_flow_q_drain(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_error *error); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Dequeue operation status. + */ +enum rte_flow_q_op_status { + /** + * The operation was completed successfully. + */ + RTE_FLOW_Q_OP_SUCCESS, + /** + * The operation was not completed successfully. + */ + RTE_FLOW_Q_OP_ERROR, +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Dequeue operation result. + */ +__extension__ +struct rte_flow_q_op_res { + /** + * Version of the struct layout, should be 0. + */ + uint32_t version; + /** + * Returns the status of the operation that this completion signals. + */ + enum rte_flow_q_op_status status; + /** + * The user data that will be returned on the completion events. + */ + void *user_data; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Dequeue rte flow operation results. + * The application must invoke this function in order to complete + * the flow rule offloading and to receive the flow rule operation status. + * + * @param port_id + * Port identifier of Ethernet device. + * @param queue_id + * Flow queue which is used to dequeue the operation. + * @param[out] res + * Array of results that will be set. + * @param[in] n_res + * Maximum number of results that can be returned. + * This value is equal to the size of the res array. + * @param[out] error + * Perform verbose error reporting if not NULL. + * PMDs initialize this structure in case of error only. + * + * @return + * Number of results that were dequeued, + * a negative errno value otherwise and rte_errno is set.
+ */ +__rte_experimental +int +rte_flow_q_dequeue(uint16_t port_id, + uint32_t queue_id, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error); + #ifdef __cplusplus } #endif diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index cda021c302..d1cfdd2d75 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -156,6 +156,7 @@ struct rte_flow_ops { int (*configure) (struct rte_eth_dev *dev, const struct rte_flow_port_attr *port_attr, + const struct rte_flow_queue_attr *queue_attr[], struct rte_flow_error *err); /** See rte_flow_item_template_create() */ struct rte_flow_item_template *(*item_template_create) @@ -194,6 +195,66 @@ struct rte_flow_ops { (struct rte_eth_dev *dev, struct rte_flow_table *table, struct rte_flow_error *err); + /** See rte_flow_q_flow_create() */ + struct rte_flow *(*q_flow_create) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_table *table, + const struct rte_flow_item items[], + uint8_t item_template_index, + const struct rte_flow_action actions[], + uint8_t action_template_index, + struct rte_flow_error *err); + /** See rte_flow_q_flow_destroy() */ + int (*q_flow_destroy) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow *flow, + struct rte_flow_error *err); + /** See rte_flow_q_flow_update() */ + int (*q_flow_update) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow *flow, + struct rte_flow_error *err); + /** See rte_flow_q_action_handle_create() */ + struct rte_flow_action_handle *(*q_action_handle_create) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + const struct rte_flow_indir_action_conf *indir_action_conf, + const struct rte_flow_action *action, + struct rte_flow_error *err); + /** See 
rte_flow_q_action_handle_destroy() */ + int (*q_action_handle_destroy) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + struct rte_flow_error *error); + /** See rte_flow_q_action_handle_update() */ + int (*q_action_handle_update) + (struct rte_eth_dev *dev, + uint32_t queue_id, + const struct rte_flow_q_ops_attr *q_ops_attr, + struct rte_flow_action_handle *action_handle, + const void *update, + struct rte_flow_error *error); + /** See rte_flow_q_drain() */ + int (*q_drain) + (struct rte_eth_dev *dev, + uint32_t queue_id, + struct rte_flow_error *err); + /** See rte_flow_q_dequeue() */ + int (*q_dequeue) + (struct rte_eth_dev *dev, + uint32_t queue_id, + struct rte_flow_q_op_res res[], + uint16_t n_res, + struct rte_flow_error *error); }; /** diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index cfd5e2a3e4..d705e36c90 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -265,6 +265,13 @@ EXPERIMENTAL { rte_flow_action_template_destroy; rte_flow_table_create; rte_flow_table_destroy; + rte_flow_q_flow_create; + rte_flow_q_flow_destroy; + rte_flow_q_action_handle_create; + rte_flow_q_action_handle_destroy; + rte_flow_q_action_handle_update; + rte_flow_q_drain; + rte_flow_q_dequeue; }; INTERNAL { From patchwork Tue Jan 18 15:30:21 2022
From: Alexander Kozyrev Subject: [PATCH v2 04/10] app/testpmd: implement rte flow configure Date: Tue, 18 Jan 2022 17:30:21 +0200 Message-ID: <20220118153027.3947448-5-akozyrev@nvidia.com> In-Reply-To: <20220118153027.3947448-1-akozyrev@nvidia.com> References: <20211006044835.3936226-1-akozyrev@nvidia.com> <20220118153027.3947448-1-akozyrev@nvidia.com>
List-Id: DPDK patches and discussions Add testpmd support for the rte_flow_configure API. Provide the command line interface for the Flow management. Usage example: flow configure 0 queues_number 8 queues_size 256 Signed-off-by: Alexander Kozyrev --- app/test-pmd/cmdline_flow.c | 109 +++++++++++++++++++- app/test-pmd/config.c | 29 ++++++ app/test-pmd/testpmd.h | 5 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 34 +++++- 4 files changed, 174 insertions(+), 3 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 5c2bba48ad..ea4af8dd45 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -72,6 +72,7 @@ enum index { /* Top-level command. */ FLOW, /* Sub-level commands. */ + CONFIGURE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -122,6 +123,13 @@ enum index { DUMP_ALL, DUMP_ONE, + /* Configure arguments */ + CONFIG_QUEUES_NUMBER, + CONFIG_QUEUES_SIZE, + CONFIG_COUNTERS_NUMBER, + CONFIG_AGING_COUNTERS_NUMBER, + CONFIG_METERS_NUMBER, + /* Indirect action arguments */ INDIRECT_ACTION_CREATE, INDIRECT_ACTION_UPDATE, @@ -846,6 +854,10 @@ struct buffer { enum index command; /**< Flow command. */ portid_t port; /**< Affected port ID. */ union { + struct { + struct rte_flow_port_attr port_attr; + struct rte_flow_queue_attr queue_attr; + } configure; /**< Configuration arguments.
*/ struct { uint32_t *action_id; uint32_t action_id_n; @@ -927,6 +939,16 @@ static const enum index next_flex_item[] = { ZERO, }; +static const enum index next_config_attr[] = { + CONFIG_QUEUES_NUMBER, + CONFIG_QUEUES_SIZE, + CONFIG_COUNTERS_NUMBER, + CONFIG_AGING_COUNTERS_NUMBER, + CONFIG_METERS_NUMBER, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -1962,6 +1984,9 @@ static int parse_aged(struct context *, const struct token *, static int parse_isolate(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_configure(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2187,7 +2212,8 @@ static const struct token token_list[] = { .type = "{command} {port_id} [{arg} [...]]", .help = "manage ingress/egress flow rules", .next = NEXT(NEXT_ENTRY - (INDIRECT_ACTION, + (CONFIGURE, + INDIRECT_ACTION, VALIDATE, CREATE, DESTROY, @@ -2202,6 +2228,56 @@ static const struct token token_list[] = { .call = parse_init, }, /* Top-level command. */ + [CONFIGURE] = { + .name = "configure", + .help = "configure flow rules", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_configure, + }, + /* Configure arguments. 
*/ + [CONFIG_QUEUES_NUMBER] = { + .name = "queues_number", + .help = "number of queues", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_queues)), + }, + [CONFIG_QUEUES_SIZE] = { + .name = "queues_size", + .help = "number of elements in queues", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.queue_attr.size)), + }, + [CONFIG_COUNTERS_NUMBER] = { + .name = "counters_number", + .help = "number of counters", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_counters)), + }, + [CONFIG_AGING_COUNTERS_NUMBER] = { + .name = "aging_counters_number", + .help = "number of aging counters", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_aging)), + }, + [CONFIG_METERS_NUMBER] = { + .name = "meters_number", + .help = "number of meters", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_meters)), + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -7465,6 +7541,33 @@ parse_isolate(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for configure command. */ +static int +parse_configure(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != CONFIGURE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + } + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -8691,6 +8794,10 @@ static void cmd_flow_parsed(const struct buffer *in) { switch (in->command) { + case CONFIGURE: + port_flow_configure(in->port, &in->args.configure.port_attr, + &in->args.configure.queue_attr); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 1722d6c8f8..85d31de7f7 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1595,6 +1595,35 @@ action_alloc(portid_t port_id, uint32_t id, return 0; } +/** Configure flow management resources. */ +int +port_flow_configure(portid_t port_id, + const struct rte_flow_port_attr *port_attr, + const struct rte_flow_queue_attr *queue_attr) +{ + struct rte_port *port; + struct rte_flow_error error; + const struct rte_flow_queue_attr *attr_list[port_attr->nb_queues]; + int std_queue; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + port->queue_nb = port_attr->nb_queues; + port->queue_sz = queue_attr->size; + for (std_queue = 0; std_queue < port_attr->nb_queues; std_queue++) + attr_list[std_queue] = queue_attr; + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x66, sizeof(error)); + if (rte_flow_configure(port_id, port_attr, attr_list, &error)) + return port_flow_complain(&error); + printf("Configure flows on port %u: " + "number of queues %d with %d elements\n", + port_id, port_attr->nb_queues, queue_attr->size); + return 0; +} + /** Create indirect action */ int port_action_handle_create(portid_t port_id, uint32_t id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 9967825044..ce80a00193 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -243,6 +243,8 @@ struct rte_port { struct rte_eth_txconf tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */ struct rte_ether_addr *mc_addr_pool; /**< pool of multicast addrs */ uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */ + queueid_t queue_nb; /**< nb. of queues for flow rules */ + uint32_t queue_sz; /**< size of a queue for flow rules */ uint8_t slave_flag; /**< bonding slave port */ struct port_flow *flow_list; /**< Associated flows. */ struct port_indirect_action *actions_list; @@ -885,6 +887,9 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id, uint32_t id); int port_action_handle_update(portid_t port_id, uint32_t id, const struct rte_flow_action *action); +int port_flow_configure(portid_t port_id, + const struct rte_flow_port_attr *port_attr, + const struct rte_flow_queue_attr *queue_attr); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 94792d88cc..8af28bd3b3 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3285,8 +3285,8 @@ Flow rules management --------------------- Control of the generic flow API (*rte_flow*) is fully exposed through the -``flow`` command (validation, creation, destruction, queries and operation -modes). 
+``flow`` command (configuration, validation, creation, destruction, queries +and operation modes). Considering *rte_flow* overlaps with all `Filter Functions`_, using both features simultaneously may cause undefined side-effects and is therefore @@ -3309,6 +3309,14 @@ The first parameter stands for the operation mode. Possible operations and their general syntax are described below. They are covered in detail in the following sections. +- Configure flow management:: + + flow configure {port_id} + [queues_number {number}] [queues_size {size}] + [counters_number {number}] + [aging_counters_number {number}] + [meters_number {number}] + - Check whether a flow rule can be created:: flow validate {port_id} @@ -3368,6 +3376,28 @@ following sections. flow tunnel list {port_id} +Configuring flow management library +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow configure`` pre-allocates all the resources needed in the underlying +device, to be used later at flow creation time. Flow queues are allocated as well +for asynchronous flow creation/destruction operations. It is bound to +``rte_flow_configure()``:: + + flow configure {port_id} + [queues_number {number}] [queues_size {size}] + [counters_number {number}] + [aging_counters_number {number}] + [meters_number {number}] + +If successful, it will show:: + + Configure flows on port #[...]: number of queues #[...] with #[...] elements + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] 
+ Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From patchwork Tue Jan 18 15:33:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 106034 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Alexander Kozyrev Subject: [v2,05/10] app/testpmd: implement rte flow item/action template Date: Tue, 18 Jan 2022 17:33:15 +0200 Message-ID: <20220118153315.3947641-1-akozyrev@nvidia.com> X-Mailer: git-send-email 2.18.2 In-Reply-To: <20220118153027.3947448-1-akozyrev@nvidia.com> References: <20220118153027.3947448-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_item_template and rte_flow_action_template APIs. Provide the command line interface for the template creation/destruction. Usage example: testpmd> flow item_template 0 create item_template_id 2 template eth dst is 00:16:3e:31:15:c3 / end testpmd> flow action_template 0 create action_template_id 4 template drop / end mask drop / end testpmd> flow action_template 0 destroy action_template 4 testpmd> flow item_template 0 destroy item_template 2 Signed-off-by: Alexander Kozyrev --- app/test-pmd/cmdline_flow.c | 376 +++++++++++++++++++- app/test-pmd/config.c | 204 +++++++++++ app/test-pmd/testpmd.h | 22 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 97 +++++ 4 files changed, 697 insertions(+), 2 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index ea4af8dd45..fb27a97855 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -56,6 +56,8 @@ enum index { COMMON_POLICY_ID, COMMON_FLEX_HANDLE, COMMON_FLEX_TOKEN, + COMMON_ITEM_TEMPLATE_ID, + COMMON_ACTION_TEMPLATE_ID, /* TOP-level command. */ ADD, @@ -73,6 +75,8 @@ enum index { FLOW, /* Sub-level commands. */ CONFIGURE, + ITEM_TEMPLATE, + ACTION_TEMPLATE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -91,6 +95,22 @@ enum index { FLEX_ITEM_CREATE, FLEX_ITEM_DESTROY, + /* Item template arguments. */ + ITEM_TEMPLATE_CREATE, + ITEM_TEMPLATE_DESTROY, + ITEM_TEMPLATE_CREATE_ID, + ITEM_TEMPLATE_DESTROY_ID, + ITEM_TEMPLATE_RELAXED_MATCHING, + ITEM_TEMPLATE_SPEC, + + /* Action template arguments. 
*/ + ACTION_TEMPLATE_CREATE, + ACTION_TEMPLATE_DESTROY, + ACTION_TEMPLATE_CREATE_ID, + ACTION_TEMPLATE_DESTROY_ID, + ACTION_TEMPLATE_SPEC, + ACTION_TEMPLATE_MASK, + /* Tunnel arguments. */ TUNNEL_CREATE, TUNNEL_CREATE_TYPE, @@ -858,6 +878,10 @@ struct buffer { struct rte_flow_port_attr port_attr; struct rte_flow_queue_attr queue_attr; } configure; /**< Configuration arguments. */ + struct { + uint32_t *template_id; + uint32_t template_id_n; + } templ_destroy; /**< Template destroy arguments. */ struct { uint32_t *action_id; uint32_t action_id_n; @@ -866,10 +890,13 @@ struct buffer { uint32_t action_id; } ia; /* Indirect action query arguments */ struct { + uint32_t it_id; + uint32_t at_id; struct rte_flow_attr attr; struct tunnel_ops tunnel_ops; struct rte_flow_item *pattern; struct rte_flow_action *actions; + struct rte_flow_action *masks; uint32_t pattern_n; uint32_t actions_n; uint8_t *data; @@ -949,6 +976,43 @@ static const enum index next_config_attr[] = { ZERO, }; +static const enum index next_it_subcmd[] = { + ITEM_TEMPLATE_CREATE, + ITEM_TEMPLATE_DESTROY, + ZERO, +}; + +static const enum index next_it_attr[] = { + ITEM_TEMPLATE_CREATE_ID, + ITEM_TEMPLATE_RELAXED_MATCHING, + ITEM_TEMPLATE_SPEC, + ZERO, +}; + +static const enum index next_it_destroy_attr[] = { + ITEM_TEMPLATE_DESTROY_ID, + END, + ZERO, +}; + +static const enum index next_at_subcmd[] = { + ACTION_TEMPLATE_CREATE, + ACTION_TEMPLATE_DESTROY, + ZERO, +}; + +static const enum index next_at_attr[] = { + ACTION_TEMPLATE_CREATE_ID, + ACTION_TEMPLATE_SPEC, + ZERO, +}; + +static const enum index next_at_destroy_attr[] = { + ACTION_TEMPLATE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -1987,6 +2051,12 @@ static int parse_isolate(struct context *, const struct token *, static int parse_configure(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int 
parse_template(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); +static int parse_template_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2056,6 +2126,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_modify_field_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_item_template_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); +static int comp_action_template_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. */ static const struct token token_list[] = { @@ -2206,6 +2280,20 @@ static const struct token token_list[] = { .call = parse_flex_handle, .comp = comp_none, }, + [COMMON_ITEM_TEMPLATE_ID] = { + .name = "{item_template_id}", + .type = "ITEM_TEMPLATE_ID", + .help = "item template id", + .call = parse_int, + .comp = comp_item_template_id, + }, + [COMMON_ACTION_TEMPLATE_ID] = { + .name = "{action_template_id}", + .type = "ACTION_TEMPLATE_ID", + .help = "action template id", + .call = parse_int, + .comp = comp_action_template_id, + }, /* Top-level command. */ [FLOW] = { .name = "flow", @@ -2213,6 +2301,8 @@ static const struct token token_list[] = { .help = "manage ingress/egress flow rules", .next = NEXT(NEXT_ENTRY (CONFIGURE, + ITEM_TEMPLATE, + ACTION_TEMPLATE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -2278,6 +2368,112 @@ static const struct token token_list[] = { args.configure.port_attr.nb_meters)), }, /* Top-level command. 
*/ + [ITEM_TEMPLATE] = { + .name = "item_template", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage item templates", + .next = NEXT(next_it_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template, + }, + /* Sub-level commands. */ + [ITEM_TEMPLATE_CREATE] = { + .name = "create", + .help = "create item template", + .next = NEXT(next_it_attr), + .call = parse_template, + }, + [ITEM_TEMPLATE_DESTROY] = { + .name = "destroy", + .help = "destroy item template", + .next = NEXT(NEXT_ENTRY(ITEM_TEMPLATE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template_destroy, + }, + /* Item arguments. */ + [ITEM_TEMPLATE_CREATE_ID] = { + .name = "item_template_id", + .help = "specify an item template id to create", + .next = NEXT(next_it_attr, + NEXT_ENTRY(COMMON_ITEM_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.it_id)), + }, + [ITEM_TEMPLATE_DESTROY_ID] = { + .name = "item_template", + .help = "specify an item template id to destroy", + .next = NEXT(next_it_destroy_attr, + NEXT_ENTRY(COMMON_ITEM_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.templ_destroy.template_id)), + .call = parse_template_destroy, + }, + [ITEM_TEMPLATE_RELAXED_MATCHING] = { + .name = "relaxed", + .help = "is matching relaxed", + .next = NEXT(next_it_attr, + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY_BF(struct buffer, + args.vc.attr.reserved, 1)), + }, + [ITEM_TEMPLATE_SPEC] = { + .name = "template", + .help = "specify item to create item template", + .next = NEXT(next_item), + }, + /* Top-level command. */ + [ACTION_TEMPLATE] = { + .name = "action_template", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage action templates", + .next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template, + }, + /* Sub-level commands. 
*/ + [ACTION_TEMPLATE_CREATE] = { + .name = "create", + .help = "create action template", + .next = NEXT(next_at_attr), + .call = parse_template, + }, + [ACTION_TEMPLATE_DESTROY] = { + .name = "destroy", + .help = "destroy action template", + .next = NEXT(NEXT_ENTRY(ACTION_TEMPLATE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_template_destroy, + }, + /* Action arguments. */ + [ACTION_TEMPLATE_CREATE_ID] = { + .name = "action_template_id", + .help = "specify an action template id to create", + .next = NEXT(NEXT_ENTRY(ACTION_TEMPLATE_MASK), + NEXT_ENTRY(ACTION_TEMPLATE_SPEC), + NEXT_ENTRY(COMMON_ACTION_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.at_id)), + }, + [ACTION_TEMPLATE_DESTROY_ID] = { + .name = "action_template", + .help = "specify an action template id to destroy", + .next = NEXT(next_at_destroy_attr, + NEXT_ENTRY(COMMON_ACTION_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.templ_destroy.template_id)), + .call = parse_template_destroy, + }, + [ACTION_TEMPLATE_SPEC] = { + .name = "template", + .help = "specify action to create action template", + .next = NEXT(next_action), + .call = parse_template, + }, + [ACTION_TEMPLATE_MASK] = { + .name = "mask", + .help = "specify action mask to create action template", + .next = NEXT(next_action), + .call = parse_template, + }, + /* Top-level command. 
*/ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -2600,7 +2796,7 @@ static const struct token token_list[] = { .name = "end", .help = "end list of pattern items", .priv = PRIV_ITEM(END, 0), - .next = NEXT(NEXT_ENTRY(ACTIONS)), + .next = NEXT(NEXT_ENTRY(ACTIONS, END)), .call = parse_vc, }, [ITEM_VOID] = { @@ -5704,7 +5900,9 @@ parse_vc(struct context *ctx, const struct token *token, if (!out) return len; if (!out->command) { - if (ctx->curr != VALIDATE && ctx->curr != CREATE) + if (ctx->curr != VALIDATE && ctx->curr != CREATE && + ctx->curr != ITEM_TEMPLATE_CREATE && + ctx->curr != ACTION_TEMPLATE_CREATE) return -1; if (sizeof(*out) > size) return -1; @@ -7568,6 +7766,114 @@ parse_configure(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for template create command. */ +static int +parse_template(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != ITEM_TEMPLATE && + ctx->curr != ACTION_TEMPLATE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + return len; + } + switch (ctx->curr) { + case ITEM_TEMPLATE_CREATE: + out->args.vc.pattern = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + out->args.vc.it_id = UINT32_MAX; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case ACTION_TEMPLATE_CREATE: + out->args.vc.at_id = UINT32_MAX; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case ACTION_TEMPLATE_SPEC: + out->args.vc.actions = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + ctx->object = out->args.vc.actions; + ctx->objmask = NULL; + return len; + case ACTION_TEMPLATE_MASK: + out->args.vc.masks = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.vc.actions + + out->args.vc.actions_n), + sizeof(double)); + ctx->object = out->args.vc.masks; + ctx->objmask = NULL; + return len; + default: + return -1; + } +} + +/** Parse tokens for template destroy command. */ +static int +parse_template_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *template_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command || + out->command == ITEM_TEMPLATE || + out->command == ACTION_TEMPLATE) { + if (ctx->curr != ITEM_TEMPLATE_DESTROY && + ctx->curr != ACTION_TEMPLATE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.templ_destroy.template_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + template_id = out->args.templ_destroy.template_id + + out->args.templ_destroy.template_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -8535,6 +8841,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token, return -1; } +/** Complete available item template IDs. */ +static int +comp_item_template_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_template *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->item_templ_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + +/** Complete available action template IDs. 
*/ +static int +comp_action_template_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_template *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->action_templ_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + /** Internal context. */ static struct context cmd_flow_context; @@ -8798,6 +9152,24 @@ cmd_flow_parsed(const struct buffer *in) port_flow_configure(in->port, &in->args.configure.port_attr, &in->args.configure.queue_attr); break; + case ITEM_TEMPLATE_CREATE: + port_flow_item_template_create(in->port, in->args.vc.it_id, + in->args.vc.attr.reserved, in->args.vc.pattern); + break; + case ITEM_TEMPLATE_DESTROY: + port_flow_item_template_destroy(in->port, + in->args.templ_destroy.template_id_n, + in->args.templ_destroy.template_id); + break; + case ACTION_TEMPLATE_CREATE: + port_flow_action_template_create(in->port, in->args.vc.at_id, + in->args.vc.actions, in->args.vc.masks); + break; + case ACTION_TEMPLATE_DESTROY: + port_flow_action_template_destroy(in->port, + in->args.templ_destroy.template_id_n, + in->args.templ_destroy.template_id); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 85d31de7f7..80678d851f 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1595,6 +1595,49 @@ action_alloc(portid_t port_id, uint32_t id, return 0; } +static int +template_alloc(uint32_t id, struct port_template **template, + struct port_template **list) +{ + struct port_template *lst = *list; + struct port_template **ppt; + struct port_template *pt = NULL; + + *template = NULL; + if (id == UINT32_MAX) { + /* taking 
first available ID */ + if (lst) { + if (lst->id == UINT32_MAX - 1) { + printf("Highest template ID is already" + " assigned, delete it first\n"); + return -ENOMEM; + } + id = lst->id + 1; + } else { + id = 0; + } + } + pt = calloc(1, sizeof(*pt)); + if (!pt) { + printf("Allocation of port template failed\n"); + return -ENOMEM; + } + ppt = list; + while (*ppt && (*ppt)->id > id) + ppt = &(*ppt)->next; + if (*ppt && (*ppt)->id == id) { + printf("Template #%u is already assigned," + " delete it first\n", id); + free(pt); + return -EINVAL; + } + pt->next = *ppt; + pt->id = id; + *ppt = pt; + *template = pt; + return 0; +} + /** Configure flow management resources. */ int port_flow_configure(portid_t port_id, @@ -2039,6 +2082,167 @@ age_action_get(const struct rte_flow_action *actions) return NULL; } +/** Create item template */ +int +port_flow_item_template_create(portid_t port_id, uint32_t id, bool relaxed, + const struct rte_flow_item *pattern) +{ + struct rte_port *port; + struct port_template *pit; + int ret; + struct rte_flow_item_template_attr attr = { + .relaxed_matching = relaxed }; + struct rte_flow_error error; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + ret = template_alloc(id, &pit, &port->item_templ_list); + if (ret) + return ret; + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x22, sizeof(error)); + pit->template.itempl = rte_flow_item_template_create(port_id, + &attr, pattern, &error); + if (!pit->template.itempl) { + uint32_t destroy_id = pit->id; + port_flow_item_template_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + printf("Item template #%u created\n", pit->id); + return 0; +} + +/** Destroy item template */ +int +port_flow_item_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template) +{ + struct rte_port *port; + struct port_template **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->item_templ_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_template *pit = *tmp; + + if (template[i] != pit->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. + */ + memset(&error, 0x33, sizeof(error)); + + if (pit->template.itempl && + rte_flow_item_template_destroy(port_id, + pit->template.itempl, + &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pit->next; + printf("Item template #%u destroyed\n", pit->id); + free(pit); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + +/** Create action template */ +int +port_flow_action_template_create(portid_t port_id, uint32_t id, + const struct rte_flow_action *actions, + const struct rte_flow_action *masks) +{ + struct rte_port *port; + struct port_template *pat; + int ret; + struct rte_flow_action_template_attr attr = { 0 }; + struct rte_flow_error error; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + ret = template_alloc(id, &pat, &port->action_templ_list); + if (ret) + return ret; + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x22, sizeof(error)); + pat->template.atempl = rte_flow_action_template_create(port_id, + &attr, actions, masks, &error); + if (!pat->template.atempl) { + uint32_t destroy_id = pat->id; + port_flow_action_template_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + printf("Action template #%u created\n", pat->id); + return 0; +} + +/** Destroy action template */ +int +port_flow_action_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template) +{ + struct rte_port *port; + struct port_template **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->action_templ_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_template *pat = *tmp; + + if (template[i] != pat->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. + */ + memset(&error, 0x33, sizeof(error)); + + if (pat->template.atempl && + rte_flow_action_template_destroy(port_id, + pat->template.atempl, &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pat->next; + printf("Action template #%u destroyed\n", pat->id); + free(pat); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index ce80a00193..4befa6d7a4 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -166,6 +166,17 @@ enum age_action_context_type { ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION, }; +/** Descriptor for a template. */ +struct port_template { + struct port_template *next; /**< Next template in list. */ + struct port_template *tmp; /**< Temporary linking. */ + uint32_t id; /**< Template ID. 
*/ + union { + struct rte_flow_item_template *itempl; + struct rte_flow_action_template *atempl; + } template; /**< PMD opaque template object */ +}; + /** Descriptor for a single flow. */ struct port_flow { struct port_flow *next; /**< Next flow in list. */ @@ -246,6 +257,8 @@ struct rte_port { queueid_t queue_nb; /**< nb. of queues for flow rules */ uint32_t queue_sz; /**< size of a queue for flow rules */ uint8_t slave_flag; /**< bonding slave port */ + struct port_template *item_templ_list; /**< Item templates. */ + struct port_template *action_templ_list; /**< Action templates. */ struct port_flow *flow_list; /**< Associated flows. */ struct port_indirect_action *actions_list; /**< Associated indirect actions. */ @@ -890,6 +903,15 @@ int port_action_handle_update(portid_t port_id, uint32_t id, int port_flow_configure(portid_t port_id, const struct rte_flow_port_attr *port_attr, const struct rte_flow_queue_attr *queue_attr); +int port_flow_item_template_create(portid_t port_id, uint32_t id, bool relaxed, + const struct rte_flow_item *pattern); +int port_flow_item_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template); +int port_flow_action_template_create(portid_t port_id, uint32_t id, + const struct rte_flow_action *actions, + const struct rte_flow_action *masks); +int port_flow_action_template_destroy(portid_t port_id, uint32_t n, + const uint32_t *template); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 8af28bd3b3..d23cfa6572 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3317,6 +3317,24 @@ following sections. 
[aging_counters_number {number}] [meters_number {number}] +- Create an item template:: + + flow item_template {port_id} create [item_template_id {id}] + [relaxed {boolean}] template {item} [/ {item} [...]] / end + +- Destroy an item template:: + + flow item_template {port_id} destroy item_template {id} [...] + +- Create an action template:: + + flow action_template {port_id} create [action_template_id {id}] + template {action} [/ {action} [...]] / end + mask {action} [/ {action} [...]] / end + +- Destroy an action template:: + + flow action_template {port_id} destroy action_template {id} [...] + - Check whether a flow rule can be created:: flow validate {port_id} @@ -3398,6 +3416,85 @@ Otherwise it will show an error message of the form:: Caught error type [...] ([...]): [...] +Creating item templates +~~~~~~~~~~~~~~~~~~~~~~~ + +``flow item_template create`` creates the specified item template. +It is bound to ``rte_flow_item_template_create()``:: + + flow item_template {port_id} create [item_template_id {id}] + [relaxed {boolean}] template {item} [/ {item} [...]] / end + +If successful, it will show:: + + Item template #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same pattern items as ``flow create``, +their format is described in `Creating flow rules`_. + +Destroying item templates +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow item_template destroy`` destroys one or more item templates +from their template ID (as returned by ``flow item_template create``), +this command calls ``rte_flow_item_template_destroy()`` as many +times as necessary:: + + flow item_template {port_id} destroy item_template {id} [...] + +If successful, it will show:: + + Item template #[...] destroyed + +It does not report anything for item template IDs that do not exist. +The usual error message is shown when an item template cannot be destroyed:: + + Caught error type [...] ([...]): [...] 
+ +Creating action templates +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow action_template create`` creates the specified action template. +It is bound to ``rte_flow_action_template_create()``:: + + flow action_template {port_id} create [action_template_id {id}] + template {action} [/ {action} [...]] / end + mask {action} [/ {action} [...]] / end + +If successful, it will show:: + + Action template #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same actions as ``flow create``, +their format is described in `Creating flow rules`_. + +Destroying action templates +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow action_template destroy`` destroys one or more action templates +from their template ID (as returned by ``flow action_template create``), +this command calls ``rte_flow_action_template_destroy()`` as many +times as necessary:: + + flow action_template {port_id} destroy action_template {id} [...] + +If successful, it will show:: + + Action template #[...] destroyed + +It does not report anything for action template IDs that do not exist. +The usual error message is shown when an action template cannot be destroyed:: + + Caught error type [...] ([...]): [...] 
+ +Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From patchwork Tue Jan 18 15:34:34 2022 X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 106035 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Alexander Kozyrev Subject: [v2,06/10] app/testpmd: implement rte flow table Date: Tue, 18 Jan 2022 17:34:34 +0200 Message-ID: <20220118153434.3947731-1-akozyrev@nvidia.com> In-Reply-To: <20220118153027.3947448-1-akozyrev@nvidia.com> References: <20220118153027.3947448-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_table API. Provide the command line interface for the flow table creation/destruction. Usage example: testpmd> flow table 0 create table_id 6 group 9 priority 4 ingress mode 1 rules_number 64 item_template 2 action_template 4 testpmd> flow table 0 destroy table 6 Signed-off-by: Alexander Kozyrev --- app/test-pmd/cmdline_flow.c | 315 ++++++++++++++++++++ app/test-pmd/config.c | 168 +++++++++++ app/test-pmd/testpmd.h | 15 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 53 ++++ 4 files changed, 551 insertions(+) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index fb27a97855..4dc2a2aaeb 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -58,6 +58,7 @@ enum index { COMMON_FLEX_TOKEN, COMMON_ITEM_TEMPLATE_ID, COMMON_ACTION_TEMPLATE_ID, + COMMON_TABLE_ID, /* TOP-level command. */ ADD, @@ -77,6 +78,7 @@ enum index { CONFIGURE, ITEM_TEMPLATE, ACTION_TEMPLATE, + TABLE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -111,6 +113,20 @@ enum index { ACTION_TEMPLATE_SPEC, ACTION_TEMPLATE_MASK, + /* Table arguments. */ + TABLE_CREATE, + TABLE_DESTROY, + TABLE_CREATE_ID, + TABLE_DESTROY_ID, + TABLE_GROUP, + TABLE_PRIORITY, + TABLE_INGRESS, + TABLE_EGRESS, + TABLE_TRANSFER, + TABLE_RULES_NUMBER, + TABLE_ITEM_TEMPLATE, + TABLE_ACTION_TEMPLATE, + /* Tunnel arguments. */ TUNNEL_CREATE, TUNNEL_CREATE_TYPE, @@ -882,6 +898,18 @@ struct buffer { uint32_t *template_id; uint32_t template_id_n; } templ_destroy; /**< Template destroy arguments. 
*/ + struct { + uint32_t id; + struct rte_flow_table_attr attr; + uint32_t *item_id; + uint32_t item_id_n; + uint32_t *action_id; + uint32_t action_id_n; + } table; /**< Table arguments. */ + struct { + uint32_t *table_id; + uint32_t table_id_n; + } table_destroy; /**< Table destroy arguments. */ struct { uint32_t *action_id; uint32_t action_id_n; @@ -1013,6 +1041,32 @@ static const enum index next_at_destroy_attr[] = { ZERO, }; +static const enum index next_table_subcmd[] = { + TABLE_CREATE, + TABLE_DESTROY, + ZERO, +}; + +static const enum index next_table_attr[] = { + TABLE_CREATE_ID, + TABLE_GROUP, + TABLE_PRIORITY, + TABLE_INGRESS, + TABLE_EGRESS, + TABLE_TRANSFER, + TABLE_RULES_NUMBER, + TABLE_ITEM_TEMPLATE, + TABLE_ACTION_TEMPLATE, + END, + ZERO, +}; + +static const enum index next_table_destroy_attr[] = { + TABLE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2057,6 +2111,11 @@ static int parse_template(struct context *, const struct token *, static int parse_template_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_table(struct context *, const struct token *, + const char *, unsigned int, void *, unsigned int); +static int parse_table_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2130,6 +2189,8 @@ static int comp_item_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_action_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_table_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. 
*/ static const struct token token_list[] = { @@ -2294,6 +2355,13 @@ static const struct token token_list[] = { .call = parse_int, .comp = comp_action_template_id, }, + [COMMON_TABLE_ID] = { + .name = "{table_id}", + .type = "TABLE_ID", + .help = "table id", + .call = parse_int, + .comp = comp_table_id, + }, /* Top-level command. */ [FLOW] = { .name = "flow", @@ -2303,6 +2371,7 @@ static const struct token token_list[] = { (CONFIGURE, ITEM_TEMPLATE, ACTION_TEMPLATE, + TABLE, INDIRECT_ACTION, VALIDATE, CREATE, @@ -2474,6 +2543,104 @@ static const struct token token_list[] = { .call = parse_template, }, /* Top-level command. */ + [TABLE] = { + .name = "table", + .type = "{command} {port_id} [{arg} [...]]", + .help = "manage tables", + .next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_table, + }, + /* Sub-level commands. */ + [TABLE_CREATE] = { + .name = "create", + .help = "create table", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_DESTROY] = { + .name = "destroy", + .help = "destroy table", + .next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_table_destroy, + }, + /* Table arguments. 
*/ + [TABLE_CREATE_ID] = { + .name = "table_id", + .help = "specify table id to create", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)), + }, + [TABLE_DESTROY_ID] = { + .name = "table", + .help = "specify table id to destroy", + .next = NEXT(next_table_destroy_attr, + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table_destroy.table_id)), + .call = parse_table_destroy, + }, + [TABLE_GROUP] = { + .name = "group", + .help = "specify a group", + .next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.flow_attr.group)), + }, + [TABLE_PRIORITY] = { + .name = "priority", + .help = "specify a priority level", + .next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.flow_attr.priority)), + }, + [TABLE_EGRESS] = { + .name = "egress", + .help = "affect rule to egress", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_INGRESS] = { + .name = "ingress", + .help = "affect rule to ingress", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_TRANSFER] = { + .name = "transfer", + .help = "affect rule to transfer", + .next = NEXT(next_table_attr), + .call = parse_table, + }, + [TABLE_RULES_NUMBER] = { + .name = "rules_number", + .help = "number of rules in table", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.table.attr.nb_flows)), + }, + [TABLE_ITEM_TEMPLATE] = { + .name = "item_template", + .help = "specify item template id", + .next = NEXT(next_table_attr, + NEXT_ENTRY(COMMON_ITEM_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table.item_id)), + .call = parse_table, + }, + [TABLE_ACTION_TEMPLATE] = { + .name = "action_template", + .help = "specify action template id", + .next = NEXT(next_table_attr, + 
NEXT_ENTRY(COMMON_ACTION_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.table.action_id)), + .call = parse_table, + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -7874,6 +8041,119 @@ parse_template_destroy(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for table commands. */ +static int +parse_table(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *template_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != TABLE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + } + switch (ctx->curr) { + case TABLE_CREATE: + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.table.id = UINT32_MAX; + return len; + case TABLE_ITEM_TEMPLATE: + out->args.table.item_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + template_id = out->args.table.item_id + + out->args.table.item_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + return len; + case TABLE_ACTION_TEMPLATE: + out->args.table.action_id = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.table.item_id + + out->args.table.item_id_n), + sizeof(double)); + template_id = out->args.table.action_id + + out->args.table.action_id_n++; + if ((uint8_t *)template_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = template_id; + ctx->objmask = NULL; + return len; + case TABLE_INGRESS: + 
out->args.table.attr.flow_attr.ingress = 1; + return len; + case TABLE_EGRESS: + out->args.table.attr.flow_attr.egress = 1; + return len; + case TABLE_TRANSFER: + out->args.table.attr.flow_attr.transfer = 1; + return len; + default: + return -1; + } +} + +/** Parse tokens for table destroy command. */ +static int +parse_table_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *table_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command || out->command == TABLE) { + if (ctx->curr != TABLE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.table_destroy.table_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + table_id = out->args.table_destroy.table_id + + out->args.table_destroy.table_id_n++; + if ((uint8_t *)table_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = table_id; + ctx->objmask = NULL; + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -8889,6 +9169,30 @@ comp_action_template_id(struct context *ctx, const struct token *token, return i; } +/** Complete available table IDs. 
*/ +static int +comp_table_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + struct port_table *pt; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (pt = port->table_list; pt != NULL; pt = pt->next) { + if (buf && i == ent) + return snprintf(buf, size, "%u", pt->id); + ++i; + } + if (buf) + return -1; + return i; +} + /** Internal context. */ static struct context cmd_flow_context; @@ -9170,6 +9474,17 @@ cmd_flow_parsed(const struct buffer *in) in->args.templ_destroy.template_id_n, in->args.templ_destroy.template_id); break; + case TABLE_CREATE: + port_flow_table_create(in->port, in->args.table.id, + &in->args.table.attr, in->args.table.item_id_n, + in->args.table.item_id, in->args.table.action_id_n, + in->args.table.action_id); + break; + case TABLE_DESTROY: + port_flow_table_destroy(in->port, + in->args.table_destroy.table_id_n, + in->args.table_destroy.table_id); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 80678d851f..07582fa552 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1638,6 +1638,49 @@ template_alloc(uint32_t id, struct port_template **template, return 0; } +static int +table_alloc(uint32_t id, struct port_table **table, + struct port_table **list) +{ + struct port_table *lst = *list; + struct port_table **ppt; + struct port_table *pt = NULL; + + *table = NULL; + if (id == UINT32_MAX) { + /* taking first available ID */ + if (lst) { + if (lst->id == UINT32_MAX - 1) { + printf("Highest table ID is already" + " assigned, delete it first\n"); + return -ENOMEM; + } + id = lst->id + 1; + } else { + id = 0; + } + } + pt = calloc(1, sizeof(*pt)); + if (!pt) { + printf("Allocation of table failed\n"); + return 
-ENOMEM; + } + ppt = list; + while (*ppt && (*ppt)->id > id) + ppt = &(*ppt)->next; + if (*ppt && (*ppt)->id == id) { + printf("Table #%u is already assigned," + " delete it first\n", id); + free(pt); + return -EINVAL; + } + pt->next = *ppt; + pt->id = id; + *ppt = pt; + *table = pt; + return 0; +} + /** Configure flow management resources. */ int port_flow_configure(portid_t port_id, @@ -2243,6 +2286,131 @@ port_flow_action_template_destroy(portid_t port_id, uint32_t n, return ret; } +/** Create table */ +int +port_flow_table_create(portid_t port_id, uint32_t id, + const struct rte_flow_table_attr *table_attr, + uint32_t nb_item_templates, uint32_t *item_templates, + uint32_t nb_action_templates, uint32_t *action_templates) +{ + struct rte_port *port; + struct port_table *pt; + struct port_template *temp = NULL; + int ret; + uint32_t i; + struct rte_flow_error error; + struct rte_flow_item_template + *flow_item_templates[nb_item_templates]; + struct rte_flow_action_template + *flow_action_templates[nb_action_templates]; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + for (i = 0; i < nb_item_templates; ++i) { + bool found = false; + temp = port->item_templ_list; + while (temp) { + if (item_templates[i] == temp->id) { + flow_item_templates[i] = temp->template.itempl; + found = true; + break; + } + temp = temp->next; + } + if (!found) { + printf("Item template #%u is invalid\n", + item_templates[i]); + return -EINVAL; + } + } + for (i = 0; i < nb_action_templates; ++i) { + bool found = false; + temp = port->action_templ_list; + while (temp) { + if (action_templates[i] == temp->id) { + flow_action_templates[i] = + temp->template.atempl; + found = true; + break; + } + temp = temp->next; + } + if (!found) { + printf("Action template #%u is invalid\n", + action_templates[i]); + return -EINVAL; + } + } + ret = table_alloc(id, &pt, &port->table_list); + if (ret) + return ret; + /* 
Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x22, sizeof(error)); + pt->table = rte_flow_table_create(port_id, table_attr, + flow_item_templates, nb_item_templates, + flow_action_templates, nb_action_templates, + &error); + + if (!pt->table) { + uint32_t destroy_id = pt->id; + port_flow_table_destroy(port_id, 1, &destroy_id); + return port_flow_complain(&error); + } + printf("Table #%u created\n", pt->id); + return 0; +} + +/** Destroy table */ +int +port_flow_table_destroy(portid_t port_id, + uint32_t n, const uint32_t *table) +{ + struct rte_port *port; + struct port_table **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + tmp = &port->table_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_table *pt = *tmp; + + if (table[i] != pt->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. + */ + memset(&error, 0x33, sizeof(error)); + + if (pt->table && + rte_flow_table_destroy(port_id, + pt->table, + &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pt->next; + printf("Table #%u destroyed\n", pt->id); + free(pt); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 4befa6d7a4..b8655b9987 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -177,6 +177,14 @@ struct port_template { } template; /**< PMD opaque template object */ }; +/** Descriptor for a flow table. */ +struct port_table { + struct port_table *next; /**< Next table in list. */ + struct port_table *tmp; /**< Temporary linking. */ + uint32_t id; /**< Table ID. */ + struct rte_flow_table *table; /**< PMD opaque template object */ +}; + /** Descriptor for a single flow. 
*/ struct port_flow { struct port_flow *next; /**< Next flow in list. */ @@ -259,6 +267,7 @@ struct rte_port { uint8_t slave_flag; /**< bonding slave port */ struct port_template *item_templ_list; /**< Item templates. */ struct port_template *action_templ_list; /**< Action templates. */ + struct port_table *table_list; /**< Flow tables. */ struct port_flow *flow_list; /**< Associated flows. */ struct port_indirect_action *actions_list; /**< Associated indirect actions. */ @@ -912,6 +921,12 @@ int port_flow_action_template_create(portid_t port_id, uint32_t id, const struct rte_flow_action *masks); int port_flow_action_template_destroy(portid_t port_id, uint32_t n, const uint32_t *template); +int port_flow_table_create(portid_t port_id, uint32_t id, + const struct rte_flow_table_attr *table_attr, + uint32_t nb_item_templates, uint32_t *item_templates, + uint32_t nb_action_templates, uint32_t *action_templates); +int port_flow_table_destroy(portid_t port_id, + uint32_t n, const uint32_t *table); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index d23cfa6572..f8a87564be 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3335,6 +3335,19 @@ following sections. flow action_template {port_id} destroy action_template {id} [...] +- Create a table:: + + flow table {port_id} create + [table_id {id}] + [group {group_id}] [priority {level}] [ingress] [egress] [transfer] + rules_number {number} + item_template {item_template_id} + action_template {action_template_id} + +- Destroy a table:: + + flow table {port_id} destroy table {id} [...] + - Check whether a flow rule can be created:: flow validate {port_id} @@ -3495,6 +3508,46 @@ The usual error message is shown when an item template cannot be destroyed:: Caught error type [...] ([...]): [...] 
+Creating flow table +~~~~~~~~~~~~~~~~~~~ + +``flow table create`` creates the specified flow table. +It is bound to ``rte_flow_table_create()``:: + + flow table {port_id} create + [table_id {id}] [group {group_id}] + [priority {level}] [ingress] [egress] [transfer] + rules_number {number} + item_template {item_template_id} + action_template {action_template_id} + +If successful, it will show:: + + Table #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +Destroying flow table +~~~~~~~~~~~~~~~~~~~~~ + +``flow table destroy`` destroys one or more flow tables +from their table ID (as returned by ``flow table create``), +this command calls ``rte_flow_table_destroy()`` as many +times as necessary:: + + flow table {port_id} destroy table {id} [...] + +If successful, it will show:: + + Table #[...] destroyed + +It does not report anything for table IDs that do not exist. +The usual error message is shown when a table cannot be destroyed:: + + Caught error type [...] ([...]): [...] 
+ Creating a tunnel stub for offload ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From patchwork Tue Jan 18 15:35:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 106037 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 51D1AA034C; Tue, 18 Jan 2022 16:35:39 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 423B442734; Tue, 18 Jan 2022 16:35:39 +0100 (CET) Received: from NAM11-BN8-obe.outbound.protection.outlook.com (mail-bn8nam11on2080.outbound.protection.outlook.com [40.107.236.80]) by mails.dpdk.org (Postfix) with ESMTP id A37D84068E for ; Tue, 18 Jan 2022 16:35:37 +0100 (CET) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=i5WD8cJe6DiLzBh1G+qCud3bZ+OIwCsy3uPs/+cfjxPl+Ec5SXn0YT6Q7uk0VnvT0IpDFFXeAgW44b+HqmJP5/tBgSDZLfggcr1oWZWB6ZlZEmLOFtG67m3y420EeSK+blgk/9mgx0Cnxt6shAyYf80vpFo2d4PLzJbF9rm/rtHVUojuoX8gVtbWva/Ri2Kaq3husWwVhMhH/xPwOsnqBCVLUBKCtr/4aL4N5IJXWo/GnZcPWZHtj+U/bTYjWz5P48pwZr/Op2VTI6W/6xlZzSWBucvvXTeeaptRcoGhS35k1h9aQ9wmlMBPbTfq/WrPoFz1g/FRg4mGr5/YKxKt8w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=KJJga7vr03s1YuWbk7nM2BhADNArS9yMNgfqKaBJP1c=; 
From: Alexander Kozyrev
Subject: [v2,07/10] app/testpmd: implement rte flow queue create flow
Date: Tue, 18 Jan 2022 17:35:17 +0200
Message-ID: <20220118153517.3947794-1-akozyrev@nvidia.com>
In-Reply-To: <20220118153027.3947448-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API. Provide the command line interface for enqueueing flow creation/destruction operations. Usage example: testpmd> flow queue 0 create 0 drain yes table 6 item_template 0 action_template 0 pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end testpmd> flow queue 0 destroy 0 drain yes rule 0 Signed-off-by: Alexander Kozyrev --- app/test-pmd/cmdline_flow.c | 266 +++++++++++++++++++- app/test-pmd/config.c | 153 +++++++++++ app/test-pmd/testpmd.h | 7 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 55 ++++ 4 files changed, 480 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 4dc2a2aaeb..6a8e6fc683 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -59,6 +59,7 @@ enum index { COMMON_ITEM_TEMPLATE_ID, COMMON_ACTION_TEMPLATE_ID, COMMON_TABLE_ID, + COMMON_QUEUE_ID, /* TOP-level command. */ ADD, @@ -91,6 +92,7 @@ enum index { ISOLATE, TUNNEL, FLEX, + QUEUE, /* Flex arguments */ FLEX_ITEM_INIT, @@ -113,6 +115,22 @@ enum index { ACTION_TEMPLATE_SPEC, ACTION_TEMPLATE_MASK, + /* Queue arguments. */ + QUEUE_CREATE, + QUEUE_DESTROY, + + /* Queue create arguments. */ + QUEUE_CREATE_ID, + QUEUE_CREATE_DRAIN, + QUEUE_TABLE, + QUEUE_ITEM_TEMPLATE, + QUEUE_ACTION_TEMPLATE, + QUEUE_SPEC, + + /* Queue destroy arguments. */ + QUEUE_DESTROY_ID, + QUEUE_DESTROY_DRAIN, + /* Table arguments. */ TABLE_CREATE, TABLE_DESTROY, @@ -889,6 +907,8 @@ struct token { struct buffer { enum index command; /**< Flow command. */ portid_t port; /**< Affected port ID.
*/ + queueid_t queue; /**< Async queue ID. */ + bool drain; /**< Drain the queue on async operation. */ union { struct { struct rte_flow_port_attr port_attr; @@ -918,6 +938,7 @@ struct buffer { uint32_t action_id; } ia; /* Indirect action query arguments */ struct { + uint32_t table_id; uint32_t it_id; uint32_t at_id; struct rte_flow_attr attr; @@ -1067,6 +1088,18 @@ static const enum index next_table_destroy_attr[] = { ZERO, }; +static const enum index next_queue_subcmd[] = { + QUEUE_CREATE, + QUEUE_DESTROY, + ZERO, +}; + +static const enum index next_queue_destroy_attr[] = { + QUEUE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2116,6 +2149,12 @@ static int parse_table(struct context *, const struct token *, static int parse_table_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_qo(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); +static int parse_qo_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2191,6 +2230,8 @@ static int comp_action_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_table_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_queue_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. */ static const struct token token_list[] = { @@ -2362,6 +2403,13 @@ static const struct token token_list[] = { .call = parse_int, .comp = comp_table_id, }, + [COMMON_QUEUE_ID] = { + .name = "{queue_id}", + .type = "QUEUE_ID", + .help = "queue id", + .call = parse_int, + .comp = comp_queue_id, + }, /* Top-level command.
*/ [FLOW] = { .name = "flow", @@ -2383,7 +2431,8 @@ static const struct token token_list[] = { QUERY, ISOLATE, TUNNEL, - FLEX)), + FLEX, + QUEUE)), .call = parse_init, }, /* Top-level command. */ @@ -2641,6 +2690,83 @@ static const struct token token_list[] = { .call = parse_table, }, /* Top-level command. */ + [QUEUE] = { + .name = "queue", + .help = "queue a flow rule operation", + .next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_qo, + }, + /* Sub-level commands. */ + [QUEUE_CREATE] = { + .name = "create", + .help = "create a flow rule", + .next = NEXT(NEXT_ENTRY(QUEUE_TABLE), NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qo, + }, + [QUEUE_DESTROY] = { + .name = "destroy", + .help = "destroy a flow rule", + .next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID), + NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qo_destroy, + }, + /* Queue arguments. 
*/ + [QUEUE_TABLE] = { + .name = "table", + .help = "specify table id", + .next = NEXT(NEXT_ENTRY(QUEUE_ITEM_TEMPLATE), + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.table_id)), + .call = parse_qo, + }, + [QUEUE_ITEM_TEMPLATE] = { + .name = "item_template", + .help = "specify item template id", + .next = NEXT(NEXT_ENTRY(QUEUE_ACTION_TEMPLATE), + NEXT_ENTRY(COMMON_ITEM_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.it_id)), + .call = parse_qo, + }, + [QUEUE_ACTION_TEMPLATE] = { + .name = "action_template", + .help = "specify action template id", + .next = NEXT(NEXT_ENTRY(QUEUE_CREATE_DRAIN), + NEXT_ENTRY(COMMON_ACTION_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.at_id)), + .call = parse_qo, + }, + [QUEUE_CREATE_DRAIN] = { + .name = "drain", + .help = "drain queue immediately", + .next = NEXT(NEXT_ENTRY(ITEM_PATTERN), + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, drain)), + .call = parse_qo, + }, + [QUEUE_DESTROY_DRAIN] = { + .name = "drain", + .help = "drain queue immediately", + .next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID), + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, drain)), + .call = parse_qo_destroy, + }, + [QUEUE_DESTROY_ID] = { + .name = "rule", + .help = "specify rule id to destroy", + .next = NEXT(next_queue_destroy_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.destroy.rule)), + .call = parse_qo_destroy, + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8154,6 +8280,111 @@ parse_table_destroy(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for queue create commands. */ +static int +parse_qo(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. 
*/ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != QUEUE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + return len; + } + switch (ctx->curr) { + case QUEUE_CREATE: + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case QUEUE_TABLE: + case QUEUE_ITEM_TEMPLATE: + case QUEUE_ACTION_TEMPLATE: + case QUEUE_CREATE_DRAIN: + return len; + case ITEM_PATTERN: + out->args.vc.pattern = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + ctx->object = out->args.vc.pattern; + ctx->objmask = NULL; + return len; + case ACTIONS: + out->args.vc.actions = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.vc.pattern + + out->args.vc.pattern_n), + sizeof(double)); + ctx->object = out->args.vc.actions; + ctx->objmask = NULL; + return len; + default: + return -1; + } +} + +/** Parse tokens for queue destroy command. */ +static int +parse_qo_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *flow_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command || out->command == QUEUE) { + if (ctx->curr != QUEUE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.destroy.rule = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + switch (ctx->curr) { + case QUEUE_DESTROY_ID: + flow_id = out->args.destroy.rule + + out->args.destroy.rule_n++; + if ((uint8_t *)flow_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = flow_id; + ctx->objmask = NULL; + return len; + case QUEUE_DESTROY_DRAIN: + return len; + default: + return -1; + } +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -9193,6 +9424,28 @@ comp_table_id(struct context *ctx, const struct token *token, return i; } +/** Complete available queue IDs. */ +static int +comp_queue_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (i = 0; i < port->queue_nb; i++) { + if (buf && i == ent) + return snprintf(buf, size, "%u", i); + } + if (buf) + return -1; + return i; +} + /** Internal context. 
*/ static struct context cmd_flow_context; @@ -9485,6 +9738,17 @@ cmd_flow_parsed(const struct buffer *in) in->args.table_destroy.table_id_n, in->args.table_destroy.table_id); break; + case QUEUE_CREATE: + port_queue_flow_create(in->port, in->queue, in->drain, + in->args.vc.table_id, in->args.vc.it_id, + in->args.vc.at_id, in->args.vc.pattern, + in->args.vc.actions); + break; + case QUEUE_DESTROY: + port_queue_flow_destroy(in->port, in->queue, in->drain, + in->args.destroy.rule_n, + in->args.destroy.rule); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 07582fa552..31164d6bf6 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2411,6 +2411,159 @@ port_flow_table_destroy(portid_t port_id, return ret; } +/** Enqueue create flow rule operation. */ +int +port_queue_flow_create(portid_t port_id, queueid_t queue_id, + bool drain, uint32_t table_id, + uint32_t item_id, uint32_t action_id, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions) +{ + struct rte_flow_q_ops_attr ops_attr = { .drain = drain }; + struct rte_flow_q_op_res comp = { 0 }; + struct rte_flow *flow; + struct rte_port *port; + struct port_flow *pf; + struct port_table *pt; + uint32_t id = 0; + bool found; + int ret = 0; + struct rte_flow_error error; + struct rte_flow_action_age *age = age_action_get(actions); + + port = &ports[port_id]; + if (port->flow_list) { + if (port->flow_list->id == UINT32_MAX) { + printf("Highest rule ID is already assigned," + " delete it first"); + return -ENOMEM; + } + id = port->flow_list->id + 1; + } + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + found = false; + pt = port->table_list; + while (pt) { + if (table_id == pt->id) { + found = true; + break; + } + pt = pt->next; + } + if (!found) { + printf("Table #%u is invalid\n", table_id); + return -EINVAL; + 
} + + pf = port_flow_new(NULL, pattern, actions, &error); + if (!pf) + return port_flow_complain(&error); + if (age) { + pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW; + age->context = &pf->age_type; + } + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x11, sizeof(error)); + flow = rte_flow_q_flow_create(port_id, queue_id, &ops_attr, + pt->table, pattern, item_id, actions, action_id, &error); + if (!flow) { + uint32_t flow_id = pf->id; + port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id); + return port_flow_complain(&error); + } + + while (ret == 0) { + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x22, sizeof(error)); + ret = rte_flow_q_dequeue(port_id, queue_id, &comp, 1, &error); + if (ret < 0) { + printf("Failed to poll queue\n"); + return -EINVAL; + } + } + + pf->next = port->flow_list; + pf->id = id; + pf->flow = flow; + port->flow_list = pf; + printf("Flow rule #%u creation enqueued\n", pf->id); + return 0; +} + +/** Enqueue number of destroy flow rules operations. */ +int +port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, + bool drain, uint32_t n, const uint32_t *rule) +{ + struct rte_flow_q_ops_attr op_attr = { .drain = drain }; + struct rte_flow_q_op_res comp = { 0 }; + struct rte_port *port; + struct port_flow **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + tmp = &port->flow_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_flow *pf = *tmp; + + if (rule[i] != pf->id) + continue; + /* + * Poisoning to make sure PMD + * update it in case of error. 
+ */ + memset(&error, 0x33, sizeof(error)); + if (rte_flow_q_flow_destroy(port_id, queue_id, &op_attr, + pf->flow, &error)) { + ret = port_flow_complain(&error); + continue; + } + + while (ret == 0) { + /* + * Poisoning to make sure PMD + * update it in case of error. + */ + memset(&error, 0x44, sizeof(error)); + ret = rte_flow_q_dequeue(port_id, queue_id, + &comp, 1, &error); + if (ret < 0) { + printf("Failed to poll queue\n"); + return -EINVAL; + } + } + + printf("Flow rule #%u destruction enqueued\n", pf->id); + *tmp = pf->next; + free(pf); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index b8655b9987..99845b9e2f 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -927,6 +927,13 @@ int port_flow_table_create(portid_t port_id, uint32_t id, uint32_t nb_action_templates, uint32_t *action_templates); int port_flow_table_destroy(portid_t port_id, uint32_t n, const uint32_t *table); +int port_queue_flow_create(portid_t port_id, queueid_t queue_id, + bool drain, uint32_t table_id, + uint32_t item_id, uint32_t action_id, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions); +int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, + bool drain, uint32_t n, const uint32_t *rule); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index f8a87564be..eb9dff7221 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3355,6 +3355,19 @@ following sections. 
pattern {item} [/ {item} [...]] / end actions {action} [/ {action} [...]] / end +- Enqueue creation of a flow rule:: + + flow queue {port_id} create {queue_id} [drain {boolean}] + table {table_id} item_template {item_template_id} + action_template {action_template_id} + pattern {item} [/ {item} [...]] / end + actions {action} [/ {action} [...]] / end + +- Enqueue destruction of specific flow rules:: + + flow queue {port_id} destroy {queue_id} + [drain {boolean}] rule {rule_id} [...] + - Create a flow rule:: flow create {port_id} @@ -3654,6 +3667,29 @@ one. **All unspecified object values are automatically initialized to 0.** +Enqueueing creation of flow rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue create`` adds a flow rule creation operation to a queue. +It is bound to ``rte_flow_q_flow_create()``:: + + flow queue {port_id} create {queue_id} [drain {boolean}] + table {table_id} item_template {item_template_id} + action_template {action_template_id} + pattern {item} [/ {item} [...]] / end + actions {action} [/ {action} [...]] / end + +If successful, it will return a flow rule ID usable with other commands:: + + Flow rule #[...] creation enqueued + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same pattern items and actions as ``flow create``; +their format is described in `Creating flow rules`_. + Attributes ^^^^^^^^^^ @@ -4368,6 +4404,25 @@ Non-existent rule IDs are ignored:: Flow rule #0 destroyed testpmd> +Enqueueing destruction of flow rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue destroy`` enqueues destruction operations for one or more rules, +identified by rule ID (as returned by ``flow queue create``). +This command calls ``rte_flow_q_flow_destroy()`` as many times as necessary:: + + flow queue {port_id} destroy {queue_id} + [drain {boolean}] rule {rule_id} [...] + +If successful, it will show:: + + Flow rule #[...]
destruction enqueued + +It does not report anything for rule IDs that do not exist. The usual error message is shown when a rule cannot be destroyed:: + + Caught error type [...] ([...]): [...] + Querying flow rules ~~~~~~~~~~~~~~~~~~~

From patchwork Tue Jan 18 15:35:57 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106038
From: Alexander Kozyrev
Subject: [v2,08/10] app/testpmd: implement rte flow queue drain
Date: Tue, 18 Jan 2022 17:35:57 +0200
Message-ID: <20220118153557.3947859-1-akozyrev@nvidia.com>
In-Reply-To: <20220118153027.3947448-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_q_drain API. Provide the command line interface for draining a queue. Usage example: flow drain 0 queue 0 Signed-off-by: Alexander Kozyrev --- app/test-pmd/cmdline_flow.c | 56 ++++++++++++++++++++- app/test-pmd/config.c | 28 +++++++++++ app/test-pmd/testpmd.h | 1 + doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++ 4 files changed, 105 insertions(+), 1 deletion(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 6a8e6fc683..e94c01cf75 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -93,6 +93,7 @@ enum index { TUNNEL, FLEX, QUEUE, + DRAIN, /* Flex arguments */ FLEX_ITEM_INIT, @@ -131,6 +132,9 @@ enum index { QUEUE_DESTROY_ID, QUEUE_DESTROY_DRAIN, + /* Drain arguments. */ + DRAIN_QUEUE, + /* Table arguments. */ TABLE_CREATE, TABLE_DESTROY, @@ -2155,6 +2159,9 @@ static int parse_qo(struct context *, const struct token *, static int parse_qo_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_drain(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2432,7 +2439,8 @@ static const struct token token_list[] = { ISOLATE, TUNNEL, FLEX, - QUEUE)), + QUEUE, + DRAIN)), .call = parse_init, }, /* Top-level command. */ @@ -2767,6 +2775,21 @@ static const struct token token_list[] = { .call = parse_qo_destroy, }, /* Top-level command.
*/ + [DRAIN] = { + .name = "drain", + .help = "drain a flow queue", + .next = NEXT(NEXT_ENTRY(DRAIN_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_drain, + }, + /* Sub-level commands. */ + [DRAIN_QUEUE] = { + .name = "queue", + .help = "specify queue id", + .next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8385,6 +8408,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token, } } +/** Parse tokens for drain queue command. */ +static int +parse_drain(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != DRAIN) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + } + return len; +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -9749,6 +9800,9 @@ cmd_flow_parsed(const struct buffer *in) in->args.destroy.rule_n, in->args.destroy.rule); break; + case DRAIN: + port_queue_flow_drain(in->port, in->queue); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 31164d6bf6..c6469dd06f 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2564,6 +2564,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, return ret; } +/** Drain all the queue operations down the queue. 
*/ +int +port_queue_flow_drain(portid_t port_id, queueid_t queue_id) +{ + struct rte_port *port; + struct rte_flow_error error; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + memset(&error, 0x55, sizeof(error)); + ret = rte_flow_q_drain(port_id, queue_id, &error); + if (ret < 0) { + printf("Failed to drain queue\n"); + return -EINVAL; + } + printf("Queue #%u drained\n", queue_id); + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 99845b9e2f..bf4597e7ba 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -934,6 +934,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id, const struct rte_flow_action *actions); int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, bool drain, uint32_t n, const uint32_t *rule); +int port_queue_flow_drain(portid_t port_id, queueid_t queue_id); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index eb9dff7221..2ff4e4aef1 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3368,6 +3368,10 @@ following sections. flow queue {port_id} destroy {queue_id} [drain {boolean}] rule {rule_id} [...] +- Drain a queue:: + + flow drain {port_id} queue {queue_id} + - Create a flow rule:: flow create {port_id} @@ -3561,6 +3565,23 @@ The usual error message is shown when a table cannot be destroyed:: Caught error type [...] ([...]): [...] 
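The drain model this patch exposes — operations enqueued via `flow queue` commands stay pending until a drain pushes them to the device in one batch — can be sketched in plain C. All names below are illustrative stand-ins for the `rte_flow_q_*` semantics, not real DPDK API:

```c
#include <assert.h>

/*
 * Toy model of the queue/drain contract exercised by "flow drain":
 * enqueued operations accumulate in software and are pushed to the
 * device only when the queue is drained. Hypothetical names; the
 * real call is rte_flow_q_drain() on a port/queue pair.
 */
struct op_queue {
	unsigned int pending;   /* enqueued, not yet pushed to HW */
	unsigned int submitted; /* pushed to HW by a drain */
};

static void queue_enqueue(struct op_queue *q)
{
	q->pending++;
}

/* Returns 0 on success, mirroring the rte_flow_q_drain() convention. */
static int queue_drain(struct op_queue *q)
{
	q->submitted += q->pending;
	q->pending = 0;
	return 0;
}
```

Under this model, `flow queue 0 drain 0` corresponds to a single `queue_drain()` call after a batch of enqueues.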
+Draining a flow queue
+~~~~~~~~~~~~~~~~~~~~~
+
+``flow drain`` drains the specified queue, pushing all outstanding
+queued operations to the underlying device immediately.
+It is bound to ``rte_flow_q_drain()``::
+
+   flow drain {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] drained
+
+The usual error message is shown when a queue cannot be drained::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From patchwork Tue Jan 18 15:36:37 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106039
From: Alexander Kozyrev
Subject: [v2,09/10] app/testpmd: implement rte flow queue dequeue
Date: Tue, 18 Jan 2022 17:36:37 +0200
Message-ID: <20220118153637.3947925-1-akozyrev@nvidia.com>
In-Reply-To: <20220118153027.3947448-1-akozyrev@nvidia.com>
References: <20220118153027.3947448-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_q_dequeue API.
Provide the command line interface for dequeueing operations.
Usage example: flow dequeue 0 queue 0

Signed-off-by: Alexander Kozyrev
---
 app/test-pmd/cmdline_flow.c                 | 54 +++++++++++++++
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 126 insertions(+), 28 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index e94c01cf75..507eb87984 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -93,6 +93,7 @@ enum index {
 	TUNNEL,
 	FLEX,
 	QUEUE,
+	DEQUEUE,
 	DRAIN,
 
 	/* Flex arguments */
@@ -132,6 +133,9 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_DRAIN,
 
+	/* Dequeue arguments. */
+	DEQUEUE_QUEUE,
+
 	/* Drain arguments.
*/ DRAIN_QUEUE, @@ -2159,6 +2163,9 @@ static int parse_qo(struct context *, const struct token *, static int parse_qo_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_dequeue(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_drain(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2440,6 +2447,7 @@ static const struct token token_list[] = { TUNNEL, FLEX, QUEUE, + DEQUEUE, DRAIN)), .call = parse_init, }, @@ -2775,6 +2783,21 @@ static const struct token token_list[] = { .call = parse_qo_destroy, }, /* Top-level command. */ + [DEQUEUE] = { + .name = "dequeue", + .help = "dequeue flow operations", + .next = NEXT(NEXT_ENTRY(DEQUEUE_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_dequeue, + }, + /* Sub-level commands. */ + [DEQUEUE_QUEUE] = { + .name = "queue", + .help = "specify queue id", + .next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + }, + /* Top-level command. */ [DRAIN] = { .name = "drain", .help = "drain a flow queue", @@ -8408,6 +8431,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token, } } +/** Parse tokens for dequeue command. */ +static int +parse_dequeue(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != DEQUEUE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + } + return len; +} + /** Parse tokens for drain queue command. */ static int parse_drain(struct context *ctx, const struct token *token, @@ -9800,6 +9851,9 @@ cmd_flow_parsed(const struct buffer *in) in->args.destroy.rule_n, in->args.destroy.rule); break; + case DEQUEUE: + port_queue_flow_dequeue(in->port, in->queue); + break; case DRAIN: port_queue_flow_drain(in->port, in->queue); break; diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index c6469dd06f..5d23edf562 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2420,14 +2420,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, const struct rte_flow_action *actions) { struct rte_flow_q_ops_attr ops_attr = { .drain = drain }; - struct rte_flow_q_op_res comp = { 0 }; struct rte_flow *flow; struct rte_port *port; struct port_flow *pf; struct port_table *pt; uint32_t id = 0; bool found; - int ret = 0; struct rte_flow_error error; struct rte_flow_action_age *age = age_action_get(actions); @@ -2477,16 +2475,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id, return port_flow_complain(&error); } - while (ret == 0) { - /* Poisoning to make sure PMDs update it in case of error. 
*/ - memset(&error, 0x22, sizeof(error)); - ret = rte_flow_q_dequeue(port_id, queue_id, &comp, 1, &error); - if (ret < 0) { - printf("Failed to poll queue\n"); - return -EINVAL; - } - } - pf->next = port->flow_list; pf->id = id; pf->flow = flow; @@ -2501,7 +2489,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, bool drain, uint32_t n, const uint32_t *rule) { struct rte_flow_q_ops_attr op_attr = { .drain = drain }; - struct rte_flow_q_op_res comp = { 0 }; struct rte_port *port; struct port_flow **tmp; uint32_t c = 0; @@ -2537,21 +2524,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, ret = port_flow_complain(&error); continue; } - - while (ret == 0) { - /* - * Poisoning to make sure PMD - * update it in case of error. - */ - memset(&error, 0x44, sizeof(error)); - ret = rte_flow_q_dequeue(port_id, queue_id, - &comp, 1, &error); - if (ret < 0) { - printf("Failed to poll queue\n"); - return -EINVAL; - } - } - printf("Flow rule #%u destruction enqueued\n", pf->id); *tmp = pf->next; free(pf); @@ -2592,6 +2564,52 @@ port_queue_flow_drain(portid_t port_id, queueid_t queue_id) return ret; } +/** Dequeue a queue operation from the queue. 
*/ +int +port_queue_flow_dequeue(portid_t port_id, queueid_t queue_id) +{ + struct rte_port *port; + struct rte_flow_q_op_res *res; + struct rte_flow_error error; + int ret = 0; + int success = 0; + int i; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + res = malloc(sizeof(struct rte_flow_q_op_res) * port->queue_sz); + if (!res) { + printf("Failed to allocate memory for dequeue results\n"); + return -ENOMEM; + } + + memset(&error, 0x66, sizeof(error)); + ret = rte_flow_q_dequeue(port_id, queue_id, res, + port->queue_sz, &error); + if (ret < 0) { + printf("Failed to dequeue a queue\n"); + free(res); + return -EINVAL; + } + + for (i = 0; i < ret; i++) { + if (res[i].status == RTE_FLOW_Q_OP_SUCCESS) + success++; + } + printf("Queue #%u dequeued %u operations (%u failed, %u succeeded)\n", + queue_id, ret, ret - success, success); + free(res); + return ret; +} + /** Create flow rule. 
 */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bf4597e7ba..3cf336dbae 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -935,6 +935,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool drain, uint32_t n, const uint32_t *rule);
 int port_queue_flow_drain(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_dequeue(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2ff4e4aef1..fff4de8f00 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3372,6 +3372,10 @@ following sections.
 
    flow drain {port_id} queue {queue_id}
 
+- Dequeue all operations from a queue::
+
+   flow dequeue {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3582,6 +3586,23 @@ The usual error message is shown when a queue cannot be drained::
 
    Caught error type [...] ([...]): [...]
 
+Dequeueing flow operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow dequeue`` asks the underlying device for the results of flow queue
+operations and returns all the processed (successfully or not) operations.
+It is bound to ``rte_flow_q_dequeue()``::
+
+   flow dequeue {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] dequeued #[...] operations (#[...] failed, #[...] succeeded)
+
+The usual error message is shown when a queue cannot be dequeued::
+
+   Caught error type [...] ([...]): [...]
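The success/failure tally printed by `port_queue_flow_dequeue()` can be modeled without DPDK: the driver fills an array of per-operation results and reports how many it wrote, and the caller counts the successes. The types below are simplified stand-ins for `rte_flow_q_op_res` and `RTE_FLOW_Q_OP_SUCCESS`, not the real definitions:

```c
#include <assert.h>

/*
 * Simplified stand-in for rte_flow_q_op_res: one status per
 * completed operation. count_success() mirrors the loop in
 * port_queue_flow_dequeue() that tallies successful results.
 */
enum op_status { OP_SUCCESS, OP_ERROR };

struct op_res {
	enum op_status status;
};

static unsigned int count_success(const struct op_res *res, int n)
{
	unsigned int ok = 0;
	int i;

	for (i = 0; i < n; i++)
		if (res[i].status == OP_SUCCESS)
			ok++;
	return ok;
}
```

The "(%u failed, %u succeeded)" figures in the testpmd printout come from exactly this kind of pass over the dequeued result array.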
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -3711,6 +3732,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``,
 their format is described in `Creating flow rules`_.
 
+``flow dequeue`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^
@@ -4444,6 +4467,8 @@ message is shown when a rule cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+``flow dequeue`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~

From patchwork Tue Jan 18 15:37:12 2022
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 106040
From: Alexander Kozyrev
Subject: [v2,10/10] app/testpmd: implement rte flow queue indirect action
Date: Tue, 18 Jan 2022 17:37:12 +0200
Message-ID: <20220118153712.3947984-1-akozyrev@nvidia.com>
In-Reply-To: <20220118153027.3947448-1-akozyrev@nvidia.com>
References: <20220118153027.3947448-1-akozyrev@nvidia.com>
Add testpmd support for the rte_flow_q_action_handle API.
Provide the command line interface for queueing indirect action
operations. Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress drain yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9

Signed-off-by: Alexander Kozyrev
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 507eb87984..50b6424933 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -120,6 +120,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments.
*/ QUEUE_CREATE_ID, @@ -133,6 +134,26 @@ enum index { QUEUE_DESTROY_ID, QUEUE_DESTROY_DRAIN, + /* Queue indirect action arguments */ + QUEUE_INDIRECT_ACTION_CREATE, + QUEUE_INDIRECT_ACTION_UPDATE, + QUEUE_INDIRECT_ACTION_DESTROY, + + /* Queue indirect action create arguments */ + QUEUE_INDIRECT_ACTION_CREATE_ID, + QUEUE_INDIRECT_ACTION_INGRESS, + QUEUE_INDIRECT_ACTION_EGRESS, + QUEUE_INDIRECT_ACTION_TRANSFER, + QUEUE_INDIRECT_ACTION_CREATE_DRAIN, + QUEUE_INDIRECT_ACTION_SPEC, + + /* Queue indirect action update arguments */ + QUEUE_INDIRECT_ACTION_UPDATE_DRAIN, + + /* Queue indirect action destroy arguments */ + QUEUE_INDIRECT_ACTION_DESTROY_ID, + QUEUE_INDIRECT_ACTION_DESTROY_DRAIN, + /* Dequeue arguments. */ DEQUEUE_QUEUE, @@ -1099,6 +1120,7 @@ static const enum index next_table_destroy_attr[] = { static const enum index next_queue_subcmd[] = { QUEUE_CREATE, QUEUE_DESTROY, + QUEUE_INDIRECT_ACTION, ZERO, }; @@ -1108,6 +1130,36 @@ static const enum index next_queue_destroy_attr[] = { ZERO, }; +static const enum index next_qia_subcmd[] = { + QUEUE_INDIRECT_ACTION_CREATE, + QUEUE_INDIRECT_ACTION_UPDATE, + QUEUE_INDIRECT_ACTION_DESTROY, + ZERO, +}; + +static const enum index next_qia_create_attr[] = { + QUEUE_INDIRECT_ACTION_CREATE_ID, + QUEUE_INDIRECT_ACTION_INGRESS, + QUEUE_INDIRECT_ACTION_EGRESS, + QUEUE_INDIRECT_ACTION_TRANSFER, + QUEUE_INDIRECT_ACTION_CREATE_DRAIN, + QUEUE_INDIRECT_ACTION_SPEC, + ZERO, +}; + +static const enum index next_qia_update_attr[] = { + QUEUE_INDIRECT_ACTION_UPDATE_DRAIN, + QUEUE_INDIRECT_ACTION_SPEC, + ZERO, +}; + +static const enum index next_qia_destroy_attr[] = { + QUEUE_INDIRECT_ACTION_DESTROY_DRAIN, + QUEUE_INDIRECT_ACTION_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2163,6 +2215,12 @@ static int parse_qo(struct context *, const struct token *, static int parse_qo_destroy(struct context *, const struct token *, const char *, unsigned int, 
void *, unsigned int); +static int parse_qia(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); +static int parse_qia_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_dequeue(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2729,6 +2787,13 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct buffer, queue)), .call = parse_qo_destroy, }, + [QUEUE_INDIRECT_ACTION] = { + .name = "indirect_action", + .help = "queue indirect actions", + .next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qia, + }, /* Queue arguments. */ [QUEUE_TABLE] = { .name = "table", @@ -2782,6 +2847,90 @@ static const struct token token_list[] = { args.destroy.rule)), .call = parse_qo_destroy, }, + /* Queue indirect action arguments */ + [QUEUE_INDIRECT_ACTION_CREATE] = { + .name = "create", + .help = "create indirect action", + .next = NEXT(next_qia_create_attr), + .call = parse_qia, + }, + [QUEUE_INDIRECT_ACTION_UPDATE] = { + .name = "update", + .help = "update indirect action", + .next = NEXT(next_qia_update_attr, + NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)), + .call = parse_qia, + }, + [QUEUE_INDIRECT_ACTION_DESTROY] = { + .name = "destroy", + .help = "destroy indirect action", + .next = NEXT(next_qia_destroy_attr), + .call = parse_qia_destroy, + }, + /* Indirect action destroy arguments. 
*/ + [QUEUE_INDIRECT_ACTION_DESTROY_DRAIN] = { + .name = "drain", + .help = "drain operation immediately", + .next = NEXT(next_qia_destroy_attr, + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, drain)), + }, + [QUEUE_INDIRECT_ACTION_DESTROY_ID] = { + .name = "action_id", + .help = "specify an indirect action id to destroy", + .next = NEXT(next_qia_destroy_attr, + NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.ia_destroy.action_id)), + .call = parse_qia_destroy, + }, + /* Indirect action update arguments. */ + [QUEUE_INDIRECT_ACTION_UPDATE_DRAIN] = { + .name = "drain", + .help = "drain operation immediately", + .next = NEXT(next_qia_update_attr, + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, drain)), + }, + /* Indirect action create arguments. */ + [QUEUE_INDIRECT_ACTION_CREATE_ID] = { + .name = "action_id", + .help = "specify an indirect action id to create", + .next = NEXT(next_qia_create_attr, + NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)), + }, + [QUEUE_INDIRECT_ACTION_INGRESS] = { + .name = "ingress", + .help = "affect rule to ingress", + .next = NEXT(next_qia_create_attr), + .call = parse_qia, + }, + [QUEUE_INDIRECT_ACTION_EGRESS] = { + .name = "egress", + .help = "affect rule to egress", + .next = NEXT(next_qia_create_attr), + .call = parse_qia, + }, + [QUEUE_INDIRECT_ACTION_TRANSFER] = { + .name = "transfer", + .help = "affect rule to transfer", + .next = NEXT(next_qia_create_attr), + .call = parse_qia, + }, + [QUEUE_INDIRECT_ACTION_CREATE_DRAIN] = { + .name = "drain", + .help = "drain operation immediately", + .next = NEXT(next_qia_create_attr, + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, drain)), + }, + [QUEUE_INDIRECT_ACTION_SPEC] = { + .name = "action", + .help = "specify action to create indirect handle", + .next = NEXT(next_action), + }, /* Top-level command.
*/ [DEQUEUE] = { .name = "dequeue", @@ -6181,6 +6330,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for indirect action commands. */ +static int +parse_qia(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != QUEUE) + return -1; + if (sizeof(*out) > size) + return -1; + out->args.vc.data = (uint8_t *)out + size; + return len; + } + switch (ctx->curr) { + case QUEUE_INDIRECT_ACTION: + return len; + case QUEUE_INDIRECT_ACTION_CREATE: + case QUEUE_INDIRECT_ACTION_UPDATE: + out->args.vc.actions = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + out->args.vc.attr.group = UINT32_MAX; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case QUEUE_INDIRECT_ACTION_EGRESS: + out->args.vc.attr.egress = 1; + return len; + case QUEUE_INDIRECT_ACTION_INGRESS: + out->args.vc.attr.ingress = 1; + return len; + case QUEUE_INDIRECT_ACTION_TRANSFER: + out->args.vc.attr.transfer = 1; + return len; + case QUEUE_INDIRECT_ACTION_CREATE_DRAIN: + return len; + default: + return -1; + } +} + +/** Parse tokens for indirect action destroy command. */ +static int +parse_qia_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *action_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command || out->command == QUEUE) { + if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.ia_destroy.action_id = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + switch (ctx->curr) { + case QUEUE_INDIRECT_ACTION: + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case QUEUE_INDIRECT_ACTION_DESTROY_ID: + action_id = out->args.ia_destroy.action_id + + out->args.ia_destroy.action_id_n++; + if ((uint8_t *)action_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = action_id; + ctx->objmask = NULL; + return len; + case QUEUE_INDIRECT_ACTION_DESTROY_DRAIN: + return len; + default: + return -1; + } +} + /** Parse tokens for meter policy action commands. */ static int parse_mp(struct context *ctx, const struct token *token, @@ -9857,6 +10110,29 @@ cmd_flow_parsed(const struct buffer *in) case DRAIN: port_queue_flow_drain(in->port, in->queue); break; + case QUEUE_INDIRECT_ACTION_CREATE: + port_queue_action_handle_create( + in->port, in->queue, in->drain, + in->args.vc.attr.group, + &((const struct rte_flow_indir_action_conf) { + .ingress = in->args.vc.attr.ingress, + .egress = in->args.vc.attr.egress, + .transfer = in->args.vc.attr.transfer, + }), + in->args.vc.actions); + break; + case QUEUE_INDIRECT_ACTION_DESTROY: + port_queue_action_handle_destroy(in->port, + in->queue, in->drain, + in->args.ia_destroy.action_id_n, + in->args.ia_destroy.action_id); + break; + case QUEUE_INDIRECT_ACTION_UPDATE: + port_queue_action_handle_update(in->port, + in->queue, in->drain, + in->args.vc.attr.group, + in->args.vc.actions); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c 
index 5d23edf562..634174eec6 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2536,6 +2536,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, return ret; } +/** Enqueue indirect action create operation*/ +int +port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, + bool drain, uint32_t id, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action) +{ + const struct rte_flow_q_ops_attr attr = { .drain = drain}; + struct rte_port *port; + struct port_indirect_action *pia; + int ret; + struct rte_flow_error error; + + ret = action_alloc(port_id, id, &pia); + if (ret) + return ret; + + port = &ports[port_id]; + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + if (action->type == RTE_FLOW_ACTION_TYPE_AGE) { + struct rte_flow_action_age *age = + (struct rte_flow_action_age *)(uintptr_t)(action->conf); + + pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION; + age->context = &pia->age_type; + } + /* Poisoning to make sure PMDs update it in case of error. 
*/ + memset(&error, 0x88, sizeof(error)); + pia->handle = rte_flow_q_action_handle_create(port_id, queue_id, &attr, + conf, action, &error); + if (!pia->handle) { + uint32_t destroy_id = pia->id; + port_queue_action_handle_destroy(port_id, queue_id, + drain, 1, &destroy_id); + return port_flow_complain(&error); + } + pia->type = action->type; + printf("Indirect action #%u creation queued\n", pia->id); + return 0; +} + +/** Enqueue indirect action destroy operation*/ +int +port_queue_action_handle_destroy(portid_t port_id, + uint32_t queue_id, bool drain, + uint32_t n, const uint32_t *actions) +{ + const struct rte_flow_q_ops_attr attr = { .drain = drain}; + struct rte_port *port; + struct port_indirect_action **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + tmp = &port->actions_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_indirect_action *pia = *tmp; + + if (actions[i] != pia->id) + continue; + /* + * Poisoning to make sure PMDs update it in case + * of error. 
+ */ + memset(&error, 0x99, sizeof(error)); + + if (pia->handle && + rte_flow_q_action_handle_destroy(port_id, queue_id, + &attr, pia->handle, &error)) { + ret = port_flow_complain(&error); + continue; + } + *tmp = pia->next; + printf("Indirect action #%u destruction queued\n", + pia->id); + free(pia); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + +/** Enqueue indirect action update operation*/ +int +port_queue_action_handle_update(portid_t port_id, + uint32_t queue_id, bool drain, uint32_t id, + const struct rte_flow_action *action) +{ + const struct rte_flow_q_ops_attr attr = { .drain = drain}; + struct rte_port *port; + struct rte_flow_error error; + struct rte_flow_action_handle *action_handle; + + action_handle = port_action_handle_get_by_id(port_id, id); + if (!action_handle) + return -EINVAL; + + port = &ports[port_id]; + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + if (rte_flow_q_action_handle_update(port_id, queue_id, &attr, + action_handle, action, &error)) { + return port_flow_complain(&error); + } + printf("Indirect action #%u update queued\n", id); + return 0; +} + /** Drain all the queue operations down the queue. 
*/ int port_queue_flow_drain(portid_t port_id, queueid_t queue_id) diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 3cf336dbae..eeaf1864cd 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -934,6 +934,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id, const struct rte_flow_action *actions); int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, bool drain, uint32_t n, const uint32_t *rule); +int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id, + bool drain, uint32_t id, + const struct rte_flow_indir_action_conf *conf, + const struct rte_flow_action *action); +int port_queue_action_handle_destroy(portid_t port_id, + uint32_t queue_id, bool drain, + uint32_t n, const uint32_t *action); +int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id, + bool drain, uint32_t id, + const struct rte_flow_action *action); int port_queue_flow_drain(portid_t port_id, queueid_t queue_id); int port_queue_flow_dequeue(portid_t port_id, queueid_t queue_id); int port_flow_validate(portid_t port_id, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index fff4de8f00..dfb81d56d8 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -4728,6 +4728,31 @@ port 0:: testpmd> flow indirect_action 0 create action_id \ ingress action rss queues 0 1 end / end +Enqueueing creation of indirect actions +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue indirect_action create`` adds a creation operation for an indirect +action to a queue. It is bound to ``rte_flow_q_action_handle_create()``:: + + flow queue {port_id} indirect_action {queue_id} create + [action_id {indirect_action_id}] + [ingress] [egress] [transfer] + [drain {boolean}] + action {action} / end + +If successful, it will show:: + + Indirect action #[...]
creation queued + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same parameters as ``flow indirect_action create``, +described in `Creating indirect actions`_. + +``flow queue dequeue`` must be called to retrieve the operation status. + Updating indirect actions ~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -4757,6 +4782,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3 testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end +Enqueueing update of indirect actions +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue indirect_action update`` adds an update operation for an indirect +action to a queue. It is bound to ``rte_flow_q_action_handle_update()``:: + + flow queue {port_id} indirect_action {queue_id} update + {indirect_action_id} [drain {boolean}] action {action} / end + +If successful, it will show:: + + Indirect action #[...] update queued + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +``flow queue dequeue`` must be called to retrieve the operation status. + Destroying indirect actions ~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -4780,6 +4824,27 @@ Destroy indirect actions having id 100 & 101:: testpmd> flow indirect_action 0 destroy action_id 100 action_id 101 +Enqueueing destruction of indirect actions +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue indirect_action destroy`` adds a destruction operation to destroy +one or more indirect actions from their indirect action IDs (as returned by +``flow queue {port_id} indirect_action {queue_id} create``) to a queue. +It is bound to ``rte_flow_q_action_handle_destroy()``:: + + flow queue {port_id} indirect_action {queue_id} destroy + [drain {boolean}] action_id {indirect_action_id} [...] + +If successful, it will show:: + + Indirect action #[...]
destruction queued + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +``flow queue dequeue`` must be called to retrieve the operation status. + Query indirect actions ~~~~~~~~~~~~~~~~~~~~~~