From patchwork Wed Jan 11 09:22:49 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 121820
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Gregory Etelson
CC: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v2 1/2] ethdev: add query_update sync and async function calls
Date: Wed, 11 Jan 2023 11:22:49 +0200
Message-ID: <20230111092250.30880-1-getelson@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221221073547.988-1-getelson@nvidia.com>
References: <20221221073547.988-1-getelson@nvidia.com>

The current API allows an application either to query or to update an indirect
flow action. Even when the port hardware can query and update an action in a
single operation, the application still has to issue two separate hardware
requests.

The patch adds the `rte_flow_action_handle_query_update` function call and its
async version `rte_flow_async_action_handle_query_update` to atomically query
and update a flow action.

int
rte_flow_action_handle_query_update(uint16_t port_id,
                                    struct rte_flow_action_handle *handle,
                                    const void *update, void *query,
                                    enum rte_flow_query_update_mode mode,
                                    struct rte_flow_error *error);

int
rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
                                           const struct rte_flow_op_attr *op_attr,
                                           struct rte_flow_action_handle *action_handle,
                                           const void *update, void *query,
                                           enum rte_flow_query_update_mode mode,
                                           void *user_data,
                                           struct rte_flow_error *error);

The application can control the query and update order, when the port hardware
supports it, by setting the mode parameter to RTE_FLOW_QU_QUERY_FIRST or
RTE_FLOW_QU_UPDATE_FIRST. The RTE_FLOW_QU_QUERY and RTE_FLOW_QU_UPDATE values
provide query-only and update-only functionality for backward compatibility
with the existing API.
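For illustration only (not part of the diff below), here is a minimal usage
sketch of the synchronous call. It assumes an already created indirect
CONNTRACK action and reuses the existing struct rte_flow_modify_conntrack /
struct rte_flow_action_conntrack as the update and query data types; whether a
particular PMD supports query-update for that action type is a driver matter
and is only assumed here.

#include <rte_flow.h>

/* Hypothetical helper: atomically read the current conntrack state and
 * apply a new profile, querying before the update. */
static int
ct_query_then_update(uint16_t port_id, struct rte_flow_action_handle *handle,
                     const struct rte_flow_modify_conntrack *new_profile)
{
        struct rte_flow_action_conntrack state = { 0 };
        struct rte_flow_error error;

        return rte_flow_action_handle_query_update(port_id, handle,
                                                   new_profile, &state,
                                                   RTE_FLOW_QU_QUERY_FIRST,
                                                   &error);
}
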
Signed-off-by: Gregory Etelson
---
v2: remove RTE_FLOW_QU_DEFAULT query-update mode
---
 lib/ethdev/rte_flow.c        |  39 +++++++++++++
 lib/ethdev/rte_flow.h        | 103 +++++++++++++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h |  15 +++++
 lib/ethdev/version.map       |   5 ++
 4 files changed, 162 insertions(+)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7d0c24366c..8b8aa940be 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1883,3 +1883,42 @@ rte_flow_async_action_handle_query(uint16_t port_id,
 					  action_handle, data, user_data, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_action_handle_query_update(uint16_t port_id,
+				    struct rte_flow_action_handle *handle,
+				    const void *update, void *query,
+				    enum rte_flow_query_update_mode mode,
+				    struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (!ops || !ops->action_handle_query_update)
+		return -ENOTSUP;
+	ret = ops->action_handle_query_update(dev, handle, update, query,
+					      mode, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
+					   const struct rte_flow_op_attr *attr,
+					   struct rte_flow_action_handle *handle,
+					   const void *update, void *query,
+					   enum rte_flow_query_update_mode mode,
+					   void *user_data,
+					   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (!ops || !ops->async_action_handle_query_update)
+		return -ENOTSUP;
+	ret = ops->async_action_handle_query_update(dev, queue_id, attr,
+						    handle, update, query, mode,
+						    user_data, error);
+	return flow_err(port_id, ret, error);
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b60987db4b..f1ba163ac5 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -5622,6 +5622,109 @@ rte_flow_async_action_handle_query(uint16_t port_id,
 				   void *user_data,
 				   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Query_update operational mode.
+ *
+ * RTE_FLOW_QU_QUERY_FIRST
+ *   Force port to query action before update.
+ * RTE_FLOW_QU_UPDATE_FIRST
+ *   Force port to update action before query.
+ *
+ * @see rte_flow_action_handle_query_update()
+ * @see rte_flow_async_action_handle_query_update()
+ */
+enum rte_flow_query_update_mode {
+	RTE_FLOW_QU_QUERY_FIRST,  /* query before update */
+	RTE_FLOW_QU_UPDATE_FIRST, /* query after update */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Query and/or update indirect flow action.
+ * If the update parameter is NULL, the function queries the indirect action.
+ * If the query parameter is NULL, the function updates the indirect action.
+ * If both query and update are not NULL, the function atomically
+ * queries and updates the indirect action. Query and update are carried out
+ * in the order specified by the mode parameter.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by handle.
+ *   *update* can have the same type as the immediate action used when the
+ *   *handle* was created, or be a wrapper structure that includes the
+ *   action configuration to be updated and bit fields indicating which
+ *   members of the action to update.
+ * @param[out] query
+ *   Pointer to storage for the associated query data type.
+ * @param[in] mode
+ *   Operational mode.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_action_handle_query_update(uint16_t port_id,
+				    struct rte_flow_action_handle *handle,
+				    const void *update, void *query,
+				    enum rte_flow_query_update_mode mode,
+				    struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue an async indirect flow action query and/or update.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the rule.
+ * @param[in] attr
+ *   Indirect action update operation attributes.
+ * @param[in] handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by handle.
+ *   *update* can have the same type as the immediate action used when the
+ *   *handle* was created, or be a wrapper structure that includes the
+ *   action configuration to be updated and bit fields indicating which
+ *   members of the action to update.
+ * @param[in] query
+ *   Pointer to storage for the associated query data type.
+ *   Query result returned on async completion event.
+ * @param[in] mode
+ *   Operational mode.
+ * @param[in] user_data
+ *   The user data that will be returned on async completion event.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
+					   const struct rte_flow_op_attr *attr,
+					   struct rte_flow_action_handle *handle,
+					   const void *update, void *query,
+					   enum rte_flow_query_update_mode mode,
+					   void *user_data,
+					   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index c7d0699c91..7358c10a7a 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -114,6 +114,13 @@ struct rte_flow_ops {
 		 const struct rte_flow_action_handle *handle,
 		 void *data, struct rte_flow_error *error);
+	/** See rte_flow_action_handle_query_update() */
+	int (*action_handle_query_update)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_action_handle *handle,
+		 const void *update, void *query,
+		 enum rte_flow_query_update_mode qu_mode,
+		 struct rte_flow_error *error);
 	/** See rte_flow_tunnel_decap_set() */
 	int (*tunnel_decap_set)
 		(struct rte_eth_dev *dev,
@@ -276,6 +283,14 @@ struct rte_flow_ops {
 		 void *data, void *user_data,
 		 struct rte_flow_error *error);
+	/** See rte_flow_async_action_handle_query_update */
+	int (*async_action_handle_query_update)
+		(struct rte_eth_dev *dev, uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update, void *query,
+		 enum rte_flow_query_update_mode qu_mode,
+		 void *user_data, struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 17201fbe0f..42f0d7b30c 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -298,6 +298,11 @@ EXPERIMENTAL {
 	rte_flow_get_q_aged_flows;
 	rte_mtr_meter_policy_get;
 	rte_mtr_meter_profile_get;
+
+	# future
+	rte_flow_action_handle_query_update;
+	rte_flow_async_action_handle_query_update;
+
 };
 
 INTERNAL {