From patchwork Wed Dec 21 07:35:46 2022
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 121158
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Gregory Etelson
To:
Cc: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH 1/2] ethdev: add query_update sync and async function calls
Date: Wed, 21 Dec 2022 09:35:46 +0200
Message-ID: <20221221073547.988-1-getelson@nvidia.com>
X-Mailer: git-send-email 2.34.1
The current API allows an indirect flow action to be either queried or
updated. Even when the port hardware can update and query an action in a
single operation, the application still has to issue two separate hardware
requests.

The patch adds the `rte_flow_action_handle_query_update` function and its
async version `rte_flow_async_action_handle_query_update` to atomically
query and update a flow action.

int
rte_flow_action_handle_query_update(uint16_t port_id,
                                    struct rte_flow_action_handle *handle,
                                    const void *update, void *query,
                                    enum rte_flow_query_update_mode mode,
                                    struct rte_flow_error *error);

int
rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
                                           const struct rte_flow_op_attr *op_attr,
                                           struct rte_flow_action_handle *action_handle,
                                           const void *update, void *query,
                                           enum rte_flow_query_update_mode mode,
                                           void *user_data,
                                           struct rte_flow_error *error);

The application can control the query and update order, if the port hardware
supports it, by setting the mode parameter to RTE_FLOW_QU_QUERY_FIRST or
RTE_FLOW_QU_UPDATE_FIRST. The RTE_FLOW_QU_QUERY and RTE_FLOW_QU_UPDATE values
provide query-only and update-only functionality for backward compatibility
with the existing API.

Signed-off-by: Gregory Etelson
---
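As an illustration of the intended usage, the sketch below pairs the new
synchronous call with an indirect conntrack action, where the existing
per-action conventions already define an update payload
(struct rte_flow_modify_conntrack) and a query payload
(struct rte_flow_action_conntrack). The helper name and the -ENOTSUP
fallback are illustrative only, not part of this patch:

#include <errno.h>
#include <rte_flow.h>

/* Illustrative helper: read the current conntrack state and apply a new
 * one in a single driver request when the port supports it.
 */
static int
refresh_and_read_ct(uint16_t port_id, struct rte_flow_action_handle *handle,
		    const struct rte_flow_modify_conntrack *new_state,
		    struct rte_flow_action_conntrack *cur_state)
{
	struct rte_flow_error error;
	int ret;

	/* Query the old state first, then apply the update atomically. */
	ret = rte_flow_action_handle_query_update(port_id, handle,
						  new_state, cur_state,
						  RTE_FLOW_QU_QUERY_FIRST,
						  &error);
	if (ret != -ENOTSUP)
		return ret;
	/* Port cannot combine the operations: fall back to two requests. */
	ret = rte_flow_action_handle_query(port_id, handle, cur_state, &error);
	if (ret == 0)
		ret = rte_flow_action_handle_update(port_id, handle,
						    new_state, &error);
	return ret;
}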
 lib/ethdev/rte_flow.c        |  39 +++++++++++++
 lib/ethdev/rte_flow.h        | 105 +++++++++++++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h |  15 +++++
 lib/ethdev/version.map       |   5 ++
 4 files changed, 164 insertions(+)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7d0c24366c..8b8aa940be 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1883,3 +1883,42 @@ rte_flow_async_action_handle_query(uint16_t port_id,
 					action_handle, data, user_data, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_action_handle_query_update(uint16_t port_id,
+				    struct rte_flow_action_handle *handle,
+				    const void *update, void *query,
+				    enum rte_flow_query_update_mode mode,
+				    struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (!ops || !ops->action_handle_query_update)
+		return -ENOTSUP;
+	ret = ops->action_handle_query_update(dev, handle, update, query,
+					      mode, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
+					   const struct rte_flow_op_attr *attr,
+					   struct rte_flow_action_handle *handle,
+					   const void *update, void *query,
+					   enum rte_flow_query_update_mode mode,
+					   void *user_data,
+					   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (!ops || !ops->async_action_handle_query_update)
+		return -ENOTSUP;
+	ret = ops->async_action_handle_query_update(dev, queue_id, attr,
+						    handle, update, query, mode,
+						    user_data, error);
+	return flow_err(port_id, ret, error);
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b..f9e919bb80 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -5622,6 +5622,111 @@ rte_flow_async_action_handle_query(uint16_t port_id,
 			void *user_data,
 			struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Query_update operational mode.
+ *
+ * RTE_FLOW_QU_DEFAULT
+ *   Default query_update operational mode.
+ *   If both `update` and `query` parameters are not NULL, the call updates
+ *   and queries the action in the default port order.
+ *   If the `update` parameter is NULL, the call only queries the action.
+ *   If the `query` parameter is NULL, the call only updates the action.
+ * RTE_FLOW_QU_QUERY_FIRST
+ *   Force the port to query the action before updating it.
+ * RTE_FLOW_QU_UPDATE_FIRST
+ *   Force the port to update the action before querying it.
+ *
+ * @see rte_flow_action_handle_query_update()
+ * @see rte_flow_async_action_handle_query_update()
+ */
+enum rte_flow_query_update_mode {
+	RTE_FLOW_QU_DEFAULT,      /* HW default mode */
+	RTE_FLOW_QU_QUERY_FIRST,  /* query before update */
+	RTE_FLOW_QU_UPDATE_FIRST, /* query after update */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Query and/or update indirect flow action.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by
+ *   *handle*. *update* can be of the same type as the immediate action that
+ *   was used to create the *handle*, or a wrapper structure that includes
+ *   the action configuration to be updated and bit fields indicating which
+ *   members of the action to update.
+ * @param[out] query
+ *   Pointer to storage for the associated query data type.
+ * @param[in] mode
+ *   Operational mode.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_action_handle_query_update(uint16_t port_id,
+				    struct rte_flow_action_handle *handle,
+				    const void *update, void *query,
+				    enum rte_flow_query_update_mode mode,
+				    struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue an async indirect flow action query and/or update.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the rule.
+ * @param[in] attr
+ *   Indirect action update operation attributes.
+ * @param[in] handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by
+ *   *handle*. *update* can be of the same type as the immediate action that
+ *   was used to create the *handle*, or a wrapper structure that includes
+ *   the action configuration to be updated and bit fields indicating which
+ *   members of the action to update.
+ * @param[in] query
+ *   Pointer to storage for the associated query data type.
+ *   The query result is returned on the async completion event.
+ * @param[in] mode
+ *   Operational mode.
+ * @param[in] user_data
+ *   The user data that will be returned on the async completion event.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
+					   const struct rte_flow_op_attr *attr,
+					   struct rte_flow_action_handle *handle,
+					   const void *update, void *query,
+					   enum rte_flow_query_update_mode mode,
+					   void *user_data,
+					   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
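For the asynchronous flavour, a minimal sketch of enqueue plus completion
polling on a pre-configured flow queue follows. It assumes the port was set
up with rte_flow_configure(), reuses the conntrack payloads from the earlier
example, and the helper itself is illustrative rather than part of this
patch:

#include <errno.h>
#include <rte_flow.h>

/* Illustrative helper: enqueue a combined query/update and busy-wait for
 * its completion on the same flow queue.
 */
static int
async_refresh_and_read_ct(uint16_t port_id, uint32_t queue_id,
			  struct rte_flow_action_handle *handle,
			  const struct rte_flow_modify_conntrack *new_state,
			  struct rte_flow_action_conntrack *cur_state)
{
	const struct rte_flow_op_attr attr = { .postpone = 0 };
	struct rte_flow_op_result comp;
	struct rte_flow_error error;
	int ret;

	ret = rte_flow_async_action_handle_query_update(port_id, queue_id,
							&attr, handle,
							new_state, cur_state,
							RTE_FLOW_QU_QUERY_FIRST,
							NULL, &error);
	if (ret)
		return ret;
	ret = rte_flow_push(port_id, queue_id, &error);
	if (ret)
		return ret;
	/* cur_state is only valid once the completion has been pulled. */
	do {
		ret = rte_flow_pull(port_id, queue_id, &comp, 1, &error);
	} while (ret == 0);
	if (ret < 0)
		return ret;
	return comp.status == RTE_FLOW_OP_SUCCESS ? 0 : -EIO;
}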
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index c7d0699c91..7358c10a7a 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -114,6 +114,13 @@ struct rte_flow_ops {
 		 const struct rte_flow_action_handle *handle,
 		 void *data,
 		 struct rte_flow_error *error);
+	/** See rte_flow_action_handle_query_update() */
+	int (*action_handle_query_update)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_action_handle *handle,
+		 const void *update, void *query,
+		 enum rte_flow_query_update_mode qu_mode,
+		 struct rte_flow_error *error);
 	/** See rte_flow_tunnel_decap_set() */
 	int (*tunnel_decap_set)
 		(struct rte_eth_dev *dev,
@@ -276,6 +283,14 @@ struct rte_flow_ops {
 		 void *data, void *user_data,
 		 struct rte_flow_error *error);
+	/** See rte_flow_async_action_handle_query_update */
+	int (*async_action_handle_query_update)
+		(struct rte_eth_dev *dev, uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update, void *query,
+		 enum rte_flow_query_update_mode qu_mode,
+		 void *user_data, struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 17201fbe0f..42f0d7b30c 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -298,6 +298,11 @@ EXPERIMENTAL {
 	rte_flow_get_q_aged_flows;
 	rte_mtr_meter_policy_get;
 	rte_mtr_meter_profile_get;
+
+	# future
+	rte_flow_action_handle_query_update;
+	rte_flow_async_action_handle_query_update;
+
 };
 
 INTERNAL {
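On the driver side, a PMD opts in by filling the two new callbacks in its
struct rte_flow_ops. A hypothetical stub (every dummy_* name below is
illustrative, not taken from any existing driver) could look like:

#include <errno.h>
#include <rte_common.h>
#include <rte_flow_driver.h>

/* Hypothetical PMD stub for the synchronous callback; a real driver would
 * translate the request into a single hardware/firmware command that
 * honours qu_mode.
 */
static int
dummy_action_handle_query_update(struct rte_eth_dev *dev,
				 struct rte_flow_action_handle *handle,
				 const void *update, void *query,
				 enum rte_flow_query_update_mode qu_mode,
				 struct rte_flow_error *error)
{
	RTE_SET_USED(dev);
	RTE_SET_USED(handle);
	RTE_SET_USED(update);
	RTE_SET_USED(query);
	RTE_SET_USED(qu_mode);
	return rte_flow_error_set(error, ENOTSUP,
				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
				  "combined query/update not implemented");
}

static const struct rte_flow_ops dummy_flow_ops = {
	.action_handle_query_update = dummy_action_handle_query_update,
	/* .async_action_handle_query_update is wired the same way. */
};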