From patchwork Wed Dec 21 07:35:46 2022
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 121158
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Gregory Etelson
CC: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH 1/2] ethdev: add query_update sync and async function calls
Date: Wed, 21 Dec 2022 09:35:46 +0200
Message-ID: <20221221073547.988-1-getelson@nvidia.com>
X-Mailer: git-send-email 2.34.1

The current API allows an application to either query or update an indirect
flow action. Even when the port hardware can query and update an action in a
single operation, the application still has to issue two separate hardware
requests.

This patch adds the `rte_flow_action_handle_query_update` function and its
async version `rte_flow_async_action_handle_query_update` to atomically query
and update a flow action.

int
rte_flow_action_handle_query_update(uint16_t port_id,
                                    struct rte_flow_action_handle *handle,
                                    const void *update, void *query,
                                    enum rte_flow_query_update_mode mode,
                                    struct rte_flow_error *error);

int
rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
                                           const struct rte_flow_op_attr *op_attr,
                                           struct rte_flow_action_handle *action_handle,
                                           const void *update, void *query,
                                           enum rte_flow_query_update_mode mode,
                                           void *user_data,
                                           struct rte_flow_error *error);

The application can control the query and update order, if the port hardware
supports it, by setting the mode parameter to RTE_FLOW_QU_QUERY_FIRST or
RTE_FLOW_QU_UPDATE_FIRST. Passing a NULL `update` or `query` argument restricts
the call to an update-only or query-only operation, which keeps backward
compatibility with the existing API.

Signed-off-by: Gregory Etelson
---
 lib/ethdev/rte_flow.c        |  39 +++++++++++++
 lib/ethdev/rte_flow.h        | 105 +++++++++++++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h |  15 +++++
 lib/ethdev/version.map       |   5 ++
 4 files changed, 164 insertions(+)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7d0c24366c..8b8aa940be 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1883,3 +1883,42 @@ rte_flow_async_action_handle_query(uint16_t port_id,
 					action_handle, data, user_data, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_action_handle_query_update(uint16_t port_id,
+				    struct rte_flow_action_handle *handle,
+				    const void *update, void *query,
+				    enum rte_flow_query_update_mode mode,
+				    struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (!ops || !ops->action_handle_query_update)
+		return -ENOTSUP;
+	ret = ops->action_handle_query_update(dev, handle, update, query,
+					      mode, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
+					   const struct rte_flow_op_attr *attr,
+					   struct rte_flow_action_handle *handle,
+					   const void *update, void *query,
+					   enum rte_flow_query_update_mode mode,
+					   void *user_data,
+					   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (!ops || !ops->async_action_handle_query_update)
+		return -ENOTSUP;
+	ret = ops->async_action_handle_query_update(dev, queue_id, attr,
+						    handle, update, query, mode,
+						    user_data, error);
+	return flow_err(port_id, ret, error);
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b..f9e919bb80 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -5622,6 +5622,111 @@ rte_flow_async_action_handle_query(uint16_t port_id,
 				   void *user_data,
 				   struct rte_flow_error *error);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Query_update operational mode.
+ *
+ * RTE_FLOW_QU_DEFAULT
+ *   Default query_update operational mode.
+ *   If both `update` and `query` parameters are not NULL, the call updates and
+ *   queries the action in the default port order.
+ *   If the `update` parameter is NULL, the call only queries the action.
+ *   If the `query` parameter is NULL, the call only updates the action.
+ * RTE_FLOW_QU_QUERY_FIRST
+ *   Force the port to query the action before the update.
+ * RTE_FLOW_QU_UPDATE_FIRST
+ *   Force the port to update the action before the query.
+ *
+ * @see rte_flow_action_handle_query_update()
+ * @see rte_flow_async_action_handle_query_update()
+ */
+enum rte_flow_query_update_mode {
+	RTE_FLOW_QU_DEFAULT,      /* HW default mode */
+	RTE_FLOW_QU_QUERY_FIRST,  /* query before update */
+	RTE_FLOW_QU_UPDATE_FIRST, /* update before query */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Query and/or update indirect flow action.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by
+ *   *handle*. *update* can have the same type as the immediate action used
+ *   when *handle* was created, or be a wrapper structure that includes the
+ *   action configuration to be updated and bit fields indicating which
+ *   fields of the action to update.
+ * @param[out] query
+ *   Pointer to storage for the associated query data type.
+ * @param[in] mode
+ *   Operational mode.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_action_handle_query_update(uint16_t port_id,
+				    struct rte_flow_action_handle *handle,
+				    const void *update, void *query,
+				    enum rte_flow_query_update_mode mode,
+				    struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue an async indirect flow action query and/or update.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the rule.
+ * @param[in] attr
+ *   Indirect action update operation attributes.
+ * @param[in] handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by
+ *   *handle*. *update* can have the same type as the immediate action used
+ *   when *handle* was created, or be a wrapper structure that includes the
+ *   action configuration to be updated and bit fields indicating which
+ *   fields of the action to update.
+ * @param[out] query
+ *   Pointer to storage for the associated query data type.
+ *   The query result is returned on the async completion event.
+ * @param[in] mode
+ *   Operational mode.
+ * @param[in] user_data
+ *   The user data that will be returned on the async completion event.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
+					   const struct rte_flow_op_attr *attr,
+					   struct rte_flow_action_handle *handle,
+					   const void *update, void *query,
+					   enum rte_flow_query_update_mode mode,
+					   void *user_data,
+					   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index c7d0699c91..7358c10a7a 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -114,6 +114,13 @@ struct rte_flow_ops {
 		 const struct rte_flow_action_handle *handle,
 		 void *data, struct rte_flow_error *error);
+	/** See rte_flow_action_handle_query_update() */
+	int (*action_handle_query_update)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_action_handle *handle,
+		 const void *update, void *query,
+		 enum rte_flow_query_update_mode qu_mode,
+		 struct rte_flow_error *error);
 	/** See rte_flow_tunnel_decap_set() */
 	int (*tunnel_decap_set)
 		(struct rte_eth_dev *dev,
@@ -276,6 +283,14 @@ struct rte_flow_ops {
 		 void *data, void *user_data,
 		 struct rte_flow_error *error);
+	/** See rte_flow_async_action_handle_query_update() */
+	int (*async_action_handle_query_update)
+		(struct rte_eth_dev *dev, uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update, void *query,
+		 enum rte_flow_query_update_mode qu_mode,
+		 void *user_data, struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 17201fbe0f..42f0d7b30c 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -298,6 +298,11 @@ EXPERIMENTAL {
 	rte_flow_get_q_aged_flows;
 	rte_mtr_meter_policy_get;
 	rte_mtr_meter_profile_get;
+
+	# future
+	rte_flow_action_handle_query_update;
+	rte_flow_async_action_handle_query_update;
+
 };
 
 INTERNAL {
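
For illustration only (not part of the patch), below is a minimal usage sketch
of the new synchronous call. It assumes the indirect QUOTA action introduced
in the next patch of this series, a handle previously returned by
rte_flow_action_handle_create(), and that <rte_flow.h>, <stdio.h> and
<inttypes.h> are included; the helper name refill_quota is hypothetical.

static int
refill_quota(uint16_t port_id, struct rte_flow_action_handle *handle)
{
	/* Add tokens to the quota and, in the same operation, read back the
	 * number of tokens stored in the port before the update.
	 */
	struct rte_flow_update_quota update = {
		.op = RTE_FLOW_UPDATE_QUOTA_ADD,
		.quota = 1 << 20,
	};
	struct rte_flow_query_quota query = { .quota = 0 };
	struct rte_flow_error error;
	int ret;

	/* RTE_FLOW_QU_QUERY_FIRST: the port queries the action before
	 * applying the update, so query.quota holds the pre-update value.
	 */
	ret = rte_flow_action_handle_query_update(port_id, handle,
						  &update, &query,
						  RTE_FLOW_QU_QUERY_FIRST,
						  &error);
	if (ret)
		return ret;
	printf("tokens before refill: %" PRId64 "\n", query.quota);
	return 0;
}

The async variant takes the same update/query/mode arguments plus a queue and
user_data, and delivers the query result with the completion event pulled from
that queue.
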
From patchwork Wed Dec 21 07:35:47 2022
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 121159
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Gregory Etelson
CC: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Viacheslav Ovsiienko, Thomas Monjalon, Andrew Rybchenko
Subject: [PATCH 2/2] ethdev: add quota flow action and item
Date: Wed, 21 Dec 2022 09:35:47 +0200
Message-ID: <20221221073547.988-2-getelson@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221221073547.988-1-getelson@nvidia.com>
References: <20221221073547.988-1-getelson@nvidia.com>

The quota action limits traffic according to a pre-defined configuration.
Quota reflects overall traffic usage regardless of bandwidth.

A quota flow action is initialized with a signed tokens number and updates
the tokens number according to these rules:

1. If quota was configured to count packet length, the tokens number is
   reduced by S for each packet of size S.
2. If quota was configured to count packets, each packet decrements the
   tokens number.

The quota action sets packet metadata according to the number of remaining
tokens:

PASS  - the remaining tokens number is non-negative.
BLOCK - the remaining tokens number is negative.

The quota flow item matches on that state.

The application updates the tokens number in a quota flow action with SET or
ADD calls:

SET(QUOTA, val) - arm the quota with a new tokens number set to val
ADD(QUOTA, val) - increase the existing quota tokens number by val

Both SET and ADD return to the application the number of tokens stored in the
port before the update.
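
As an illustration of the action/item pair described above (not part of the
patch), the sketch below creates an indirect QUOTA action armed with 10 MB
counted from the L3 header and builds a pattern fragment that a later flow
rule could use to match packets whose quota verdict is BLOCK. It assumes
<rte_flow.h> is included and port_id names a configured port; the names
quota_example, quota_blocked and quota_pattern are hypothetical.

static struct rte_flow_action_handle *
quota_example(uint16_t port_id, struct rte_flow_error *error)
{
	static const struct rte_flow_action_quota quota_conf = {
		.mode = RTE_FLOW_QUOTA_MODE_L3,	/* count bytes starting from L3 */
		.quota = 10 * 1024 * 1024,	/* initial tokens number */
	};
	static const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_QUOTA,
		.conf = &quota_conf,
	};
	static const struct rte_flow_indir_action_conf conf = {
		.ingress = 1,
	};

	/* Quota is used as an indirect action; the returned handle is
	 * referenced from flow rules and later updated with SET or ADD.
	 */
	return rte_flow_action_handle_create(port_id, &conf, &action, error);
}

/* Pattern fragment matching packets whose quota state is BLOCK. */
static const struct rte_flow_item_quota quota_blocked = {
	.state = RTE_FLOW_QUOTA_STATE_BLOCK,
};
static const struct rte_flow_item quota_pattern[] = {
	{
		.type = RTE_FLOW_ITEM_TYPE_QUOTA,
		.spec = &quota_blocked,
		.mask = &rte_flow_item_quota_mask,
	},
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};

A rule in an earlier table would reference the handle through
RTE_FLOW_ACTION_TYPE_INDIRECT, and a rule in a later table would use
quota_pattern to steer or drop out-of-quota traffic, matching the two-rule
scheme described below.
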
Application must create a rule with quota action to mark flow and match on the mark with quota item in following flow rule. Signed-off-by: Gregory Etelson Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 395 ++++++++++++++++++++++++++- app/test-pmd/config.c | 82 +++++- app/test-pmd/testpmd.h | 11 + doc/guides/nics/features/default.ini | 2 + doc/guides/nics/features/mlx5.ini | 2 + doc/guides/nics/mlx5.rst | 12 + doc/guides/prog_guide/rte_flow.rst | 41 +++ lib/ethdev/rte_flow.c | 2 + lib/ethdev/rte_flow.h | 123 +++++++++ 9 files changed, 660 insertions(+), 10 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 88108498e0..5407a72ee2 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -149,6 +149,7 @@ enum index { QUEUE_INDIRECT_ACTION_UPDATE, QUEUE_INDIRECT_ACTION_DESTROY, QUEUE_INDIRECT_ACTION_QUERY, + QUEUE_INDIRECT_ACTION_QUERY_UPDATE, /* Queue indirect action create arguments */ QUEUE_INDIRECT_ACTION_CREATE_ID, @@ -168,6 +169,9 @@ enum index { /* Queue indirect action query arguments */ QUEUE_INDIRECT_ACTION_QUERY_POSTPONE, + /* Queue indirect action query_update arguments */ + QUEUE_INDIRECT_ACTION_QU_MODE, + /* Push arguments. */ PUSH_QUEUE, @@ -227,6 +231,7 @@ enum index { CONFIG_AGING_OBJECTS_NUMBER, CONFIG_METERS_NUMBER, CONFIG_CONN_TRACK_NUMBER, + CONFIG_QUOTAS_NUMBER, CONFIG_FLAGS, /* Indirect action arguments */ @@ -234,6 +239,7 @@ enum index { INDIRECT_ACTION_UPDATE, INDIRECT_ACTION_DESTROY, INDIRECT_ACTION_QUERY, + INDIRECT_ACTION_QUERY_UPDATE, /* Indirect action create arguments */ INDIRECT_ACTION_CREATE_ID, @@ -245,6 +251,10 @@ enum index { /* Indirect action destroy arguments */ INDIRECT_ACTION_DESTROY_ID, + /* Indirect action query-and-update arguments */ + INDIRECT_ACTION_QU_MODE, + INDIRECT_ACTION_QU_MODE_NAME, + /* Validate/create pattern. */ ITEM_PATTERN, ITEM_PARAM_IS, @@ -465,6 +475,9 @@ enum index { ITEM_METER, ITEM_METER_COLOR, ITEM_METER_COLOR_NAME, + ITEM_QUOTA, + ITEM_QUOTA_STATE, + ITEM_QUOTA_STATE_NAME, /* Validate/create actions. */ ACTIONS, @@ -621,6 +634,14 @@ enum index { ACTION_REPRESENTED_PORT, ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID, ACTION_SEND_TO_KERNEL, + ACTION_QUOTA_CREATE, + ACTION_QUOTA_CREATE_LIMIT, + ACTION_QUOTA_CREATE_MODE, + ACTION_QUOTA_CREATE_MODE_NAME, + ACTION_QUOTA_QU, + ACTION_QUOTA_QU_LIMIT, + ACTION_QUOTA_QU_UPDATE_OP, + ACTION_QUOTA_QU_UPDATE_OP_NAME, }; /** Maximum size for pattern in struct rte_flow_item_raw. */ @@ -1011,6 +1032,7 @@ struct buffer { } ia_destroy; /**< Indirect action destroy arguments. 
*/ struct { uint32_t action_id; + enum rte_flow_query_update_mode qu_mode; } ia; /* Indirect action query arguments */ struct { uint32_t table_id; @@ -1097,6 +1119,7 @@ static const enum index next_config_attr[] = { CONFIG_AGING_OBJECTS_NUMBER, CONFIG_METERS_NUMBER, CONFIG_CONN_TRACK_NUMBER, + CONFIG_QUOTAS_NUMBER, CONFIG_FLAGS, END, ZERO, @@ -1190,6 +1213,7 @@ static const enum index next_qia_subcmd[] = { QUEUE_INDIRECT_ACTION_UPDATE, QUEUE_INDIRECT_ACTION_DESTROY, QUEUE_INDIRECT_ACTION_QUERY, + QUEUE_INDIRECT_ACTION_QUERY_UPDATE, ZERO, }; @@ -1231,6 +1255,25 @@ static const enum index next_ia_create_attr[] = { ZERO, }; +static const enum index next_ia[] = { + INDIRECT_ACTION_ID2PTR, + ACTION_NEXT, + ZERO +}; + +static const enum index next_qia_qu_attr[] = { + QUEUE_INDIRECT_ACTION_QU_MODE, + QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE, + INDIRECT_ACTION_SPEC, + ZERO +}; + +static const enum index next_ia_qu_attr[] = { + INDIRECT_ACTION_QU_MODE, + INDIRECT_ACTION_SPEC, + ZERO +}; + static const enum index next_dump_subcmd[] = { DUMP_ALL, DUMP_ONE, @@ -1242,6 +1285,7 @@ static const enum index next_ia_subcmd[] = { INDIRECT_ACTION_UPDATE, INDIRECT_ACTION_DESTROY, INDIRECT_ACTION_QUERY, + INDIRECT_ACTION_QUERY_UPDATE, ZERO, }; @@ -1355,6 +1399,7 @@ static const enum index next_item[] = { ITEM_L2TPV2, ITEM_PPP, ITEM_METER, + ITEM_QUOTA, END_SET, ZERO, }; @@ -1821,6 +1866,12 @@ static const enum index item_meter[] = { ZERO, }; +static const enum index item_quota[] = { + ITEM_QUOTA_STATE, + ITEM_NEXT, + ZERO, +}; + static const enum index next_action[] = { ACTION_END, ACTION_VOID, @@ -1886,9 +1937,25 @@ static const enum index next_action[] = { ACTION_PORT_REPRESENTOR, ACTION_REPRESENTED_PORT, ACTION_SEND_TO_KERNEL, + ACTION_QUOTA_CREATE, + ACTION_QUOTA_QU, ZERO, }; +static const enum index action_quota_create[] = { + ACTION_QUOTA_CREATE_LIMIT, + ACTION_QUOTA_CREATE_MODE, + ACTION_NEXT, + ZERO +}; + +static const enum index action_quota_update[] = { + ACTION_QUOTA_QU_LIMIT, + ACTION_QUOTA_QU_UPDATE_OP, + ACTION_NEXT, + ZERO +}; + static const enum index action_mark[] = { ACTION_MARK_ID, ACTION_NEXT, @@ -2399,6 +2466,22 @@ static int parse_meter_policy_id2ptr(struct context *ctx, static int parse_meter_color(struct context *ctx, const struct token *token, const char *str, unsigned int len, void *buf, unsigned int size); +static int +parse_quota_state_name(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); +static int +parse_quota_mode_name(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); +static int +parse_quota_update_name(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); +static int +parse_qu_mode_name(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); static int comp_none(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_boolean(struct context *, const struct token *, @@ -2431,6 +2514,18 @@ static int comp_queue_id(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_meter_color(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int +comp_quota_state_name(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size); +static int +comp_quota_mode_name(struct context *ctx, const 
struct token *token, + unsigned int ent, char *buf, unsigned int size); +static int +comp_quota_update_name(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size); +static int +comp_qu_mode_name(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size); /** Token definitions. */ static const struct token token_list[] = { @@ -2695,6 +2790,14 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct buffer, args.configure.port_attr.nb_aging_objects)), }, + [CONFIG_QUOTAS_NUMBER] = { + .name = "quotas_number", + .help = "number of quotas", + .next = NEXT(next_config_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.configure.port_attr.nb_quotas)), + }, [CONFIG_METERS_NUMBER] = { .name = "meters_number", .help = "number of meters", @@ -3077,7 +3180,7 @@ static const struct token token_list[] = { .help = "query indirect action", .next = NEXT(next_qia_query_attr, NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)), - .args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.ia.action_id)), .call = parse_qia, }, /* Indirect action destroy arguments. */ @@ -3097,6 +3200,21 @@ static const struct token token_list[] = { args.ia_destroy.action_id)), .call = parse_qia_destroy, }, + [QUEUE_INDIRECT_ACTION_QUERY_UPDATE] = { + .name = "query_update", + .help = "indirect query [and|or] update action", + .next = NEXT(next_qia_qu_attr, NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.ia.action_id)), + .call = parse_qia + }, + [QUEUE_INDIRECT_ACTION_QU_MODE] = { + .name = "mode", + .help = "indirect query [and|or] update action", + .next = NEXT(next_qia_qu_attr, + NEXT_ENTRY(INDIRECT_ACTION_QU_MODE_NAME)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.ia.qu_mode)), + .call = parse_qia + }, /* Indirect action update arguments. 
*/ [QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = { .name = "postpone", @@ -3220,6 +3338,27 @@ static const struct token token_list[] = { .args = ARGS(ARGS_ENTRY(struct buffer, args.ia.action_id)), .call = parse_ia, }, + [INDIRECT_ACTION_QUERY_UPDATE] = { + .name = "query_update", + .help = "query [and|or] update", + .next = NEXT(next_ia_qu_attr, NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.ia.action_id)), + .call = parse_ia + }, + [INDIRECT_ACTION_QU_MODE] = { + .name = "mode", + .help = "query_update mode", + .next = NEXT(next_ia_qu_attr, + NEXT_ENTRY(INDIRECT_ACTION_QU_MODE_NAME)), + .args = ARGS(ARGS_ENTRY(struct buffer, args.ia.qu_mode)), + .call = parse_ia, + }, + [INDIRECT_ACTION_QU_MODE_NAME] = { + .name = "mode_name", + .help = "query-update mode name", + .call = parse_qu_mode_name, + .comp = comp_qu_mode_name, + }, [VALIDATE] = { .name = "validate", .help = "check whether a flow rule can be created", @@ -5127,6 +5266,26 @@ static const struct token token_list[] = { .call = parse_meter_color, .comp = comp_meter_color, }, + [ITEM_QUOTA] = { + .name = "quota", + .help = "match quota", + .priv = PRIV_ITEM(QUOTA, sizeof(struct rte_flow_item_quota)), + .next = NEXT(item_quota), + .call = parse_vc + }, + [ITEM_QUOTA_STATE] = { + .name = "quota_state", + .help = "quota state", + .next = NEXT(item_quota, NEXT_ENTRY(ITEM_QUOTA_STATE_NAME), + NEXT_ENTRY(ITEM_PARAM_SPEC, ITEM_PARAM_MASK)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_quota, state)) + }, + [ITEM_QUOTA_STATE_NAME] = { + .name = "state_name", + .help = "quota state name", + .call = parse_quota_state_name, + .comp = comp_quota_state_name + }, /* Validate/create actions. */ [ACTIONS] = { .name = "actions", @@ -6360,7 +6519,7 @@ static const struct token token_list[] = { .name = "indirect", .help = "apply indirect action by id", .priv = PRIV_ACTION(INDIRECT, 0), - .next = NEXT(NEXT_ENTRY(INDIRECT_ACTION_ID2PTR)), + .next = NEXT(next_ia), .args = ARGS(ARGS_ENTRY_ARB(0, sizeof(uint32_t))), .call = parse_vc, }, @@ -6411,6 +6570,64 @@ static const struct token token_list[] = { .help = "submit a list of associated actions for red", .next = NEXT(next_action), }, + [ACTION_QUOTA_CREATE] = { + .name = "quota_create", + .help = "create quota action", + .priv = PRIV_ACTION(QUOTA, + sizeof(struct rte_flow_action_quota)), + .next = NEXT(action_quota_create), + .call = parse_vc + }, + [ACTION_QUOTA_CREATE_LIMIT] = { + .name = "limit", + .help = "quota limit", + .next = NEXT(action_quota_create, NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_quota, quota)), + .call = parse_vc_conf + }, + [ACTION_QUOTA_CREATE_MODE] = { + .name = "mode", + .help = "quota mode", + .next = NEXT(action_quota_create, + NEXT_ENTRY(ACTION_QUOTA_CREATE_MODE_NAME)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_action_quota, mode)), + .call = parse_vc_conf + }, + [ACTION_QUOTA_CREATE_MODE_NAME] = { + .name = "mode_name", + .help = "quota mode name", + .call = parse_quota_mode_name, + .comp = comp_quota_mode_name + }, + [ACTION_QUOTA_QU] = { + .name = "quota_update", + .help = "update quota action", + .priv = PRIV_ACTION(QUOTA, + sizeof(struct rte_flow_update_quota)), + .next = NEXT(action_quota_update), + .call = parse_vc + }, + [ACTION_QUOTA_QU_LIMIT] = { + .name = "limit", + .help = "quota limit", + .next = NEXT(action_quota_update, NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_update_quota, quota)), + .call = parse_vc_conf + }, + [ACTION_QUOTA_QU_UPDATE_OP] = { + .name = 
"update_op", + .help = "query update op SET|ADD", + .next = NEXT(action_quota_update, + NEXT_ENTRY(ACTION_QUOTA_QU_UPDATE_OP_NAME)), + .args = ARGS(ARGS_ENTRY(struct rte_flow_update_quota, op)), + .call = parse_vc_conf + }, + [ACTION_QUOTA_QU_UPDATE_OP_NAME] = { + .name = "update_op_name", + .help = "quota update op name", + .call = parse_quota_update_name, + .comp = comp_quota_update_name + }, /* Top-level command. */ [ADD] = { @@ -6656,6 +6873,7 @@ parse_ia(struct context *ctx, const struct token *token, switch (ctx->curr) { case INDIRECT_ACTION_CREATE: case INDIRECT_ACTION_UPDATE: + case INDIRECT_ACTION_QUERY_UPDATE: out->args.vc.actions = (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), sizeof(double)); @@ -6676,6 +6894,8 @@ parse_ia(struct context *ctx, const struct token *token, case INDIRECT_ACTION_TRANSFER: out->args.vc.attr.transfer = 1; return len; + case INDIRECT_ACTION_QU_MODE: + return len; default: return -1; } @@ -6748,6 +6968,7 @@ parse_qia(struct context *ctx, const struct token *token, return len; case QUEUE_INDIRECT_ACTION_CREATE: case QUEUE_INDIRECT_ACTION_UPDATE: + case QUEUE_INDIRECT_ACTION_QUERY_UPDATE: out->args.vc.actions = (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), sizeof(double)); @@ -6770,6 +6991,8 @@ parse_qia(struct context *ctx, const struct token *token, return len; case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE: return len; + case QUEUE_INDIRECT_ACTION_QU_MODE: + return len; default: return -1; } @@ -10052,6 +10275,108 @@ parse_meter_color(struct context *ctx, const struct token *token, return len; } +static int +parse_name_to_index(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size, + const char *const names[], size_t names_size, uint32_t *dst) +{ + int ret; + uint32_t i; + + RTE_SET_USED(token); + RTE_SET_USED(buf); + RTE_SET_USED(size); + if (!ctx->object) + return len; + for (i = 0; i < names_size; i++) { + if (!names[i]) + continue; + ret = strcmp_partial(names[i], str, + RTE_MIN(len, strlen(names[i]))); + if (!ret) { + *dst = i; + return len; + } + } + return -1; +} + +static const char *const quota_mode_names[] = { + NULL, + [RTE_FLOW_QUOTA_MODE_PACKET] = "packet", + [RTE_FLOW_QUOTA_MODE_L2] = "l2", + [RTE_FLOW_QUOTA_MODE_L3] = "l3" +}; + +static const char *const quota_state_names[] = { + [RTE_FLOW_QUOTA_STATE_PASS] = "pass", + [RTE_FLOW_QUOTA_STATE_BLOCK] = "block" +}; + +static const char *const quota_update_names[] = { + [RTE_FLOW_UPDATE_QUOTA_SET] = "set", + [RTE_FLOW_UPDATE_QUOTA_ADD] = "add" +}; + +static const char *const query_update_mode_names[] = { + [RTE_FLOW_QU_DEFAULT] = "default", + [RTE_FLOW_QU_QUERY_FIRST] = "query_first", + [RTE_FLOW_QU_UPDATE_FIRST] = "update_first" +}; + +static int +parse_quota_state_name(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct rte_flow_item_quota *quota = ctx->object; + + return parse_name_to_index(ctx, token, str, len, buf, size, + quota_state_names, + RTE_DIM(quota_state_names), + (uint32_t *)"a->state); +} + +static int +parse_quota_mode_name(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct rte_flow_action_quota *quota = ctx->object; + + return parse_name_to_index(ctx, token, str, len, buf, size, + quota_mode_names, + RTE_DIM(quota_mode_names), + (uint32_t *)"a->mode); +} + +static int +parse_quota_update_name(struct context *ctx, const struct token *token, + const char *str, unsigned 
int len, void *buf, + unsigned int size) +{ + struct rte_flow_update_quota *update = ctx->object; + + return parse_name_to_index(ctx, token, str, len, buf, size, + quota_update_names, + RTE_DIM(quota_update_names), + (uint32_t *)&update->op); +} + +static int +parse_qu_mode_name(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct buffer *out = ctx->object; + + return parse_name_to_index(ctx, token, str, len, buf, size, + query_update_mode_names, + RTE_DIM(query_update_mode_names), + (uint32_t *)&out->args.ia.qu_mode); +} + /** No completion. */ static int comp_none(struct context *ctx, const struct token *token, @@ -10343,6 +10668,21 @@ comp_queue_id(struct context *ctx, const struct token *token, return i; } +static int +comp_names_to_index(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size, + const char *const names[], size_t names_size) +{ + RTE_SET_USED(ctx); + RTE_SET_USED(token); + if (!buf) + return names_size; + if (names[ent] && ent < names_size) + return rte_strscpy(buf, names[ent], size); + return -1; + +} + /** Complete available Meter colors. */ static int comp_meter_color(struct context *ctx, const struct token *token, @@ -10357,6 +10697,42 @@ comp_meter_color(struct context *ctx, const struct token *token, return -1; } +static int +comp_quota_state_name(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + return comp_names_to_index(ctx, token, ent, buf, size, + quota_state_names, + RTE_DIM(quota_state_names)); +} + +static int +comp_quota_mode_name(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + return comp_names_to_index(ctx, token, ent, buf, size, + quota_mode_names, + RTE_DIM(quota_mode_names)); +} + +static int +comp_quota_update_name(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + return comp_names_to_index(ctx, token, ent, buf, size, + quota_update_names, + RTE_DIM(quota_update_names)); +} + +static int +comp_qu_mode_name(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + return comp_names_to_index(ctx, token, ent, buf, size, + query_update_mode_names, + RTE_DIM(query_update_mode_names)); +} + /** Internal context. 
*/ static struct context cmd_flow_context; @@ -10715,7 +11091,14 @@ cmd_flow_parsed(const struct buffer *in) case QUEUE_INDIRECT_ACTION_QUERY: port_queue_action_handle_query(in->port, in->queue, in->postpone, - in->args.vc.attr.group); + in->args.ia.action_id); + break; + case QUEUE_INDIRECT_ACTION_QUERY_UPDATE: + port_queue_action_handle_query_update(in->port, in->queue, + in->postpone, + in->args.ia.action_id, + in->args.ia.qu_mode, + in->args.vc.actions); break; case INDIRECT_ACTION_CREATE: port_action_handle_create( @@ -10739,6 +11122,12 @@ cmd_flow_parsed(const struct buffer *in) case INDIRECT_ACTION_QUERY: port_action_handle_query(in->port, in->args.ia.action_id); break; + case INDIRECT_ACTION_QUERY_UPDATE: + port_action_handle_query_update(in->port, + in->args.ia.action_id, + in->args.ia.qu_mode, + in->args.vc.actions); + break; case VALIDATE: port_flow_validate(in->port, &in->args.vc.attr, in->args.vc.pattern, in->args.vc.actions, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index acccb6b035..820ede5501 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1901,9 +1901,13 @@ port_action_handle_update(portid_t port_id, uint32_t id, } static void -port_action_handle_query_dump(uint32_t type, union port_action_query *query) +port_action_handle_query_dump(portid_t port_id, + const struct port_indirect_action *pia, + union port_action_query *query) { - switch (type) { + if (!pia || !query) + return; + switch (pia->type) { case RTE_FLOW_ACTION_TYPE_AGE: printf("Indirect AGE action:\n" " aged: %u\n" @@ -1967,15 +1971,41 @@ port_action_handle_query_dump(uint32_t type, union port_action_query *query) query->ct.reply_dir.max_win, query->ct.reply_dir.max_ack); break; + case RTE_FLOW_ACTION_TYPE_QUOTA: + printf("Indirect QUOTA action %u\n" + " unused quota: %" PRId64 "\n", + pia->id, query->quota.quota); + break; default: - fprintf(stderr, - "Indirect action (type: %d) doesn't support query\n", - type); + printf("port-%u: indirect action %u (type: %d) doesn't support query\n", + pia->type, pia->id, port_id); break; } } +void +port_action_handle_query_update(portid_t port_id, uint32_t id, + enum rte_flow_query_update_mode qu_mode, + const struct rte_flow_action *action) +{ + int ret; + struct rte_flow_error error; + struct port_indirect_action *pia; + union port_action_query query; + + pia = action_get_by_id(port_id, id); + if (!pia || !pia->handle) + return; + ret = rte_flow_action_handle_query_update(port_id, pia->handle, action, + &query, qu_mode, &error); + if (ret) + port_flow_complain(&error); + else + port_action_handle_query_dump(port_id, pia, &query); + +} + int port_action_handle_query(portid_t port_id, uint32_t id) { @@ -1989,6 +2019,7 @@ port_action_handle_query(portid_t port_id, uint32_t id) switch (pia->type) { case RTE_FLOW_ACTION_TYPE_AGE: case RTE_FLOW_ACTION_TYPE_COUNT: + case RTE_FLOW_ACTION_TYPE_QUOTA: break; default: fprintf(stderr, @@ -2001,7 +2032,7 @@ port_action_handle_query(portid_t port_id, uint32_t id) memset(&query, 0, sizeof(query)); if (rte_flow_action_handle_query(port_id, pia->handle, &query, &error)) return port_flow_complain(&error); - port_action_handle_query_dump(pia->type, &query); + port_action_handle_query_dump(port_id, pia, &query); return 0; } @@ -2945,6 +2976,42 @@ port_queue_action_handle_update(portid_t port_id, return 0; } +void +port_queue_action_handle_query_update(portid_t port_id, + uint32_t queue_id, bool postpone, + uint32_t id, + enum rte_flow_query_update_mode qu_mode, + const struct rte_flow_action *action) +{ + 
int ret; + struct rte_flow_error error; + struct port_indirect_action *pia = action_get_by_id(port_id, id); + const struct rte_flow_op_attr attr = { .postpone = postpone}; + struct queue_job *job; + + if (!pia || !pia->handle) + return; + job = calloc(1, sizeof(*job)); + if (!job) + return; + job->type = QUEUE_JOB_TYPE_ACTION_QUERY; + job->pia = pia; + + ret = rte_flow_async_action_handle_query_update(port_id, queue_id, + &attr, pia->handle, + action, + &job->query, + qu_mode, job, + &error); + if (ret) { + port_flow_complain(&error); + free(job); + } else { + printf("port-%u: indirect action #%u update-and-query queued\n", + port_id, id); + } +} + /** Enqueue indirect action query operation. */ int port_queue_action_handle_query(portid_t port_id, @@ -3215,7 +3282,8 @@ port_queue_flow_pull(portid_t port_id, queueid_t queue_id) else if (job->type == QUEUE_JOB_TYPE_ACTION_DESTROY) free(job->pia); else if (job->type == QUEUE_JOB_TYPE_ACTION_QUERY) - port_action_handle_query_dump(job->pia->type, &job->query); + port_action_handle_query_dump(port_id, job->pia, + &job->query); free(job); } printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n", diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 7d24d25970..babc94246f 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -236,6 +236,7 @@ union port_action_query { struct rte_flow_query_count count; struct rte_flow_query_age age; struct rte_flow_action_conntrack ct; + struct rte_flow_query_quota quota; }; /* Descriptor for queue job. */ @@ -904,6 +905,10 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id, uint32_t id); int port_action_handle_update(portid_t port_id, uint32_t id, const struct rte_flow_action *action); +void +port_action_handle_query_update(portid_t port_id, uint32_t id, + enum rte_flow_query_update_mode qu_mode, + const struct rte_flow_action *action); int port_flow_get_info(portid_t port_id); int port_flow_configure(portid_t port_id, const struct rte_flow_port_attr *port_attr, @@ -948,6 +953,12 @@ int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id, const struct rte_flow_action *action); int port_queue_action_handle_query(portid_t port_id, uint32_t queue_id, bool postpone, uint32_t id); +void +port_queue_action_handle_query_update(portid_t port_id, + uint32_t queue_id, bool postpone, + uint32_t id, + enum rte_flow_query_update_mode qu_mode, + const struct rte_flow_action *action); int port_queue_flow_push(portid_t port_id, queueid_t queue_id); int port_queue_flow_pull(portid_t port_id, queueid_t queue_id); void port_queue_flow_aged(portid_t port_id, uint32_t queue_id, uint8_t destroy); diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index 510cc6679d..92d8765fd5 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -132,6 +132,7 @@ ppp = pppoed = pppoes = pppoe_proto_id = +quota = raw = represented_port = sctp = @@ -178,6 +179,7 @@ pf = port_id = port_representor = queue = +quota = raw_decap = raw_encap = represented_port = diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini index 62fd330e2b..ff9ea0cb43 100644 --- a/doc/guides/nics/features/mlx5.ini +++ b/doc/guides/nics/features/mlx5.ini @@ -80,6 +80,7 @@ mpls = Y nvgre = Y port_id = Y port_representor = Y +quota = Y tag = Y tcp = Y udp = Y @@ -112,6 +113,7 @@ of_set_vlan_pcp = Y of_set_vlan_vid = Y port_id = Y queue = Y +quota = Y raw_decap = Y raw_encap = Y represented_port = Y 
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 51f51259e3..1aac8becf2 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -291,6 +291,18 @@ Limitations
 - No Tx metadata go to the E-Switch steering domain for the Flow group 0.
   The flows within group 0 and set metadata action are rejected by hardware.
 
+- Quota:
+
+  - Template API only (HWS).
+  - Quota flow action and item supported in non-root HWS tables – flow group must be > 0.
+  - Quota implemented as indirect flow action only.
+  - Maximal value for quota SET and ADD operations is INT32_MAX (2G).
+  - Application cannot use 2 consecutive ADD updates.
+    Next tokens update after ADD must always be SET.
+  - HW can reduce a non-negative quota to a negative value.
+  - Quota flow action cannot be used with Meter or CT flow actions in the same rule.
+  - Maximal number of HW quota and HW meter objects <= 16e6.
+
 .. note::
 
    MAC addresses not already present in the bridge table of the associated
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 3e6242803d..7a3868638c 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1544,6 +1544,13 @@ Matches Color Marker set by a Meter.
 
 - ``color``: Metering color marker.
 
+Item: ``QUOTA``
+^^^^^^^^^^^^^^^
+
+Matches the flow quota state set by the quota action.
+
+- ``state``: Flow quota state.
+
 Actions
 ~~~~~~~
 
@@ -3227,6 +3234,40 @@ and rte_mtr_policy_get() API respectively.
    | ``policy``       | Meter policy object  |
    +------------------+----------------------+
 
+Action: ``QUOTA``
+^^^^^^^^^^^^^^^^^
+
+Update the ``quota`` value and set the packet quota state.
+If the ``quota`` value after the update is non-negative, the packet quota state
+is set to ``RTE_FLOW_QUOTA_STATE_PASS``; otherwise it is set to ``RTE_FLOW_QUOTA_STATE_BLOCK``.
+The ``quota`` value is reduced according to the ``mode`` setting.
+
+.. _table_rte_flow_action_quota:
+
+.. table:: QUOTA
+
+   +------------------+------------------------+
+   | Field            | Value                  |
+   +==================+========================+
+   | ``mode``         | Quota operational mode |
+   +------------------+------------------------+
+   | ``quota``        | Quota value            |
+   +------------------+------------------------+
+
+.. _rte_flow_quota_mode:
+
+.. table:: Quota update modes
+
+   +---------------------------------+-------------------------------------+
+   | Value                           | Description                         |
+   +=================================+=====================================+
+   | ``RTE_FLOW_QUOTA_MODE_PACKET``  | Count packets                       |
+   +---------------------------------+-------------------------------------+
+   | ``RTE_FLOW_QUOTA_MODE_L2``      | Count packet bytes starting from L2 |
+   +---------------------------------+-------------------------------------+
+   | ``RTE_FLOW_QUOTA_MODE_L3``      | Count packet bytes starting from L3 |
+   +---------------------------------+-------------------------------------+
+
 Negative types
 ~~~~~~~~~~~~~~
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 8b8aa940be..6439de3c1d 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -157,6 +157,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	MK_FLOW_ITEM(L2TPV2, sizeof(struct rte_flow_item_l2tpv2)),
 	MK_FLOW_ITEM(PPP, sizeof(struct rte_flow_item_ppp)),
 	MK_FLOW_ITEM(METER_COLOR, sizeof(struct rte_flow_item_meter_color)),
+	MK_FLOW_ITEM(QUOTA, sizeof(struct rte_flow_item_quota)),
 };
 
 /** Generate flow_action[] entry.
*/ @@ -251,6 +252,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = { MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)), MK_FLOW_ACTION(METER_MARK, sizeof(struct rte_flow_action_meter_mark)), MK_FLOW_ACTION(SEND_TO_KERNEL, 0), + MK_FLOW_ACTION(QUOTA, sizeof(struct rte_flow_action_quota)), }; int diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index f9e919bb80..c67f7c0203 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -624,7 +624,45 @@ enum rte_flow_item_type { * See struct rte_flow_item_meter_color. */ RTE_FLOW_ITEM_TYPE_METER_COLOR, + + /** + * Match Quota state + * + * @see struct rte_flow_item_quota + */ + RTE_FLOW_ITEM_TYPE_QUOTA, +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * QUOTA state. + * + * @see struct rte_flow_item_quota + */ +enum rte_flow_quota_state { + RTE_FLOW_QUOTA_STATE_PASS, /** PASS quota state */ + RTE_FLOW_QUOTA_STATE_BLOCK /** BLOCK quota state */ +}; + +/** + * RTE_FLOW_ITEM_TYPE_QUOTA + * + * Matches QUOTA state + */ +struct rte_flow_item_quota { + enum rte_flow_quota_state state; +}; + +/** + * Default mask for RTE_FLOW_ITEM_TYPE_QUOTA + */ +#ifndef __cplusplus +static const struct rte_flow_item_quota rte_flow_item_quota_mask = { + .state = (enum rte_flow_quota_state)0xff }; +#endif /** * @@ -2736,6 +2774,81 @@ enum rte_flow_action_type { * No associated configuration structure. */ RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL, + + /** + * Apply quota verdict - PASS or BLOCK to a flow. + * + * @see struct rte_flow_action_quota + * @see struct rte_flow_query_quota + * @see struct rte_flow_update_quota + */ + RTE_FLOW_ACTION_TYPE_QUOTA, +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * QUOTA operational mode. + * + * @see struct rte_flow_action_quota + */ +enum rte_flow_quota_mode { + RTE_FLOW_QUOTA_MODE_PACKET = 1, /** Count packets */ + RTE_FLOW_QUOTA_MODE_L2 = 2, /** Count packet bytes starting from L2 */ + RTE_FLOW_QUOTA_MODE_L3 = 3, /** Count packet bytes starting from L3 */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create QUOTA action. + * + * @see RTE_FLOW_ACTION_TYPE_QUOTA + */ +struct rte_flow_action_quota { + enum rte_flow_quota_mode mode; /** quota operational mode */ + int64_t quota; /** quota value */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Query indirect QUOTA action. + * + * @see RTE_FLOW_ACTION_TYPE_QUOTA + * + */ +struct rte_flow_query_quota { + int64_t quota; /** quota value */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Indirect QUOTA update operations. + * + * @see struct rte_flow_update_quota + */ +enum rte_flow_update_quota_op { + RTE_FLOW_UPDATE_QUOTA_SET, /** set new quota value */ + RTE_FLOW_UPDATE_QUOTA_ADD, /** increase existing quota with new value */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * @see RTE_FLOW_ACTION_TYPE_QUOTA + * + * Update indirect QUOTA action. + */ +struct rte_flow_update_quota { + enum rte_flow_update_quota_op op; /** update operation */ + int64_t quota; /** quota value */ }; /** @@ -4854,6 +4967,11 @@ struct rte_flow_port_info { * @see RTE_FLOW_ACTION_TYPE_CONNTRACK */ uint32_t max_nb_conn_tracks; + /** + * Maximum number of quota actions. + * @see RTE_FLOW_ACTION_TYPE_QUOTA + */ + uint32_t max_nb_quotas; /** * Port supported flags (RTE_FLOW_PORT_FLAG_*). 
*/ @@ -4932,6 +5050,11 @@ struct rte_flow_port_attr { * @see RTE_FLOW_ACTION_TYPE_CONNTRACK */ uint32_t nb_conn_tracks; + /** + * Maximum number of quota actions. + * @see RTE_FLOW_ACTION_TYPE_QUOTA + */ + uint32_t nb_quotas; /** * Port flags (RTE_FLOW_PORT_FLAG_*). */