From patchwork Thu Jan 26 23:40:52 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 122596
X-Patchwork-Delegate: rasland@nvidia.com
From: Alexander Kozyrev
Subject: [PATCH 2/4] net/mlx5: add flow rule insertion by index
Date: Fri, 27 Jan 2023 01:40:52 +0200
Message-ID: <20230126234054.3960463-3-akozyrev@nvidia.com>
In-Reply-To: <20230126234054.3960463-1-akozyrev@nvidia.com>
References: <20230126234054.3960463-1-akozyrev@nvidia.com>
List-Id: DPDK patches and discussions

The new Flow API allows inserting flow rules at a specified index for
tables with the index-based insertion type.
Implement the rte_flow_async_create_by_index API in the mlx5 PMD.

Signed-off-by: Alexander Kozyrev
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.c    |  61 +++++++++++++++++
 drivers/net/mlx5/mlx5_flow.h    |  12 ++++
 drivers/net/mlx5/mlx5_flow_hw.c | 114 ++++++++++++++++++++++++++++++++
 3 files changed, 187 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index f5e2831480..ba1eb5309b 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1027,6 +1027,16 @@ mlx5_flow_async_flow_create(struct rte_eth_dev *dev,
 			 uint8_t action_template_index,
 			 void *user_data,
 			 struct rte_flow_error *error);
+static struct rte_flow *
+mlx5_flow_async_flow_create_by_index(struct rte_eth_dev *dev,
+			 uint32_t queue,
+			 const struct rte_flow_op_attr *attr,
+			 struct rte_flow_template_table *table,
+			 uint32_t rule_index,
+			 const struct rte_flow_action actions[],
+			 uint8_t action_template_index,
+			 void *user_data,
+			 struct rte_flow_error *error);
 static int
 mlx5_flow_async_flow_destroy(struct rte_eth_dev *dev,
 			 uint32_t queue,
@@ -1107,6 +1117,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.template_table_create = mlx5_flow_table_create,
 	.template_table_destroy = mlx5_flow_table_destroy,
 	.async_create = mlx5_flow_async_flow_create,
+	.async_create_by_index = mlx5_flow_async_flow_create_by_index,
 	.async_destroy = mlx5_flow_async_flow_destroy,
 	.pull = mlx5_flow_pull,
 	.push = mlx5_flow_push,
@@ -8853,6 +8864,56 @@ mlx5_flow_async_flow_create(struct rte_eth_dev *dev,
 					user_data, error);
 }
 
+/**
+ * Enqueue flow creation by index.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue_id
+ *   The queue to create the flow.
+ * @param[in] attr
+ *   Pointer to the flow operation attributes.
+ * @param[in] table
+ *   Pointer to the template table to insert the flow into.
+ * @param[in] rule_index
+ *   The rule index in the table.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[in] user_data
+ *   Pointer to the user_data.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Flow pointer on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow *
+mlx5_flow_async_flow_create_by_index(struct rte_eth_dev *dev,
+			 uint32_t queue_id,
+			 const struct rte_flow_op_attr *attr,
+			 struct rte_flow_template_table *table,
+			 uint32_t rule_index,
+			 const struct rte_flow_action actions[],
+			 uint8_t action_template_index,
+			 void *user_data,
+			 struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr fattr = {0};
+
+	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "flow_q create with incorrect steering mode");
+		return NULL;
+	}
+	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+	return fops->async_flow_create_by_index(dev, queue_id, attr, table,
+			rule_index, actions, action_template_index,
+			user_data, error);
+}
+
 /**
  * Enqueue flow destruction.
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e376dcae93..c2f9ffd760 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1149,6 +1149,7 @@ struct rte_flow_hw {
 	uint32_t age_idx;
 	cnt_id_t cnt_id;
 	uint32_t mtr_id;
+	uint32_t rule_idx;
 	uint8_t rule[0]; /* HWS layer data struct. */
 } __rte_packed;
@@ -1810,6 +1811,16 @@ typedef struct rte_flow *(*mlx5_flow_async_flow_create_t)
 			 uint8_t action_template_index,
 			 void *user_data,
 			 struct rte_flow_error *error);
+typedef struct rte_flow *(*mlx5_flow_async_flow_create_by_index_t)
+			(struct rte_eth_dev *dev,
+			 uint32_t queue,
+			 const struct rte_flow_op_attr *attr,
+			 struct rte_flow_template_table *table,
+			 uint32_t rule_index,
+			 const struct rte_flow_action actions[],
+			 uint8_t action_template_index,
+			 void *user_data,
+			 struct rte_flow_error *error);
 typedef int (*mlx5_flow_async_flow_destroy_t)
 			(struct rte_eth_dev *dev,
 			 uint32_t queue,
@@ -1912,6 +1923,7 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_table_create_t template_table_create;
 	mlx5_flow_table_destroy_t template_table_destroy;
 	mlx5_flow_async_flow_create_t async_flow_create;
+	mlx5_flow_async_flow_create_by_index_t async_flow_create_by_index;
 	mlx5_flow_async_flow_destroy_t async_flow_destroy;
 	mlx5_flow_pull_t pull;
 	mlx5_flow_push_t push;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 8002c88e4a..b209b448c6 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2586,6 +2586,118 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	return NULL;
 }
 
+/**
+ * Enqueue HW steering flow creation by index.
+ *
+ * The flow will be applied to the HW only if the postpone bit is not set or
+ * the extra push function is called.
+ * The flow creation status should be checked from dequeue result.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] attr
+ *   Pointer to the flow operation attributes.
+ * @param[in] table
+ *   Pointer to the template table to insert the flow into.
+ * @param[in] rule_index
+ *   The rule index in the table.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[in] user_data
+ *   Pointer to the user_data.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Flow pointer on success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow *
+flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
+			 uint32_t queue,
+			 const struct rte_flow_op_attr *attr,
+			 struct rte_flow_template_table *table,
+			 uint32_t rule_index,
+			 const struct rte_flow_action actions[],
+			 uint8_t action_template_index,
+			 void *user_data,
+			 struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5dr_rule_attr rule_attr = {
+		.queue_id = queue,
+		.user_data = user_data,
+		.burst = attr->postpone,
+	};
+	struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS];
+	struct rte_flow_hw *flow;
+	struct mlx5_hw_q_job *job;
+	uint32_t flow_idx;
+	int ret;
+
+	if (unlikely(rule_index >= table->cfg.attr.nb_flows)) {
+		rte_errno = EINVAL;
+		goto error;
+	}
+	if (unlikely(!priv->hw_q[queue].job_idx)) {
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
+	if (!flow)
+		goto error;
+	/*
+	 * Set the table here in order to know the destination table
+	 * when freeing the flow afterwards.
+	 */
+	flow->table = table;
+	flow->idx = flow_idx;
+	job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+	/*
+	 * Set the job type here in order to know if the flow memory
+	 * should be freed or not when getting the result from dequeue.
+	 */
+	job->type = MLX5_HW_Q_JOB_TYPE_CREATE;
+	job->flow = flow;
+	job->user_data = user_data;
+	rule_attr.user_data = job;
+	/* Set the rule index. */
+	MLX5_ASSERT(flow_idx > 0);
+	rule_attr.rule_idx = rule_index;
+	flow->rule_idx = rule_index;
+	/*
+	 * Construct the flow actions based on the input actions.
+	 * The implicitly appended action is always fixed, like metadata
+	 * copy action from FDB to NIC Rx.
+	 * No need to copy and construct a new "actions" list based on the
+	 * user's input, in order to save the cost.
+	 */
+	if (flow_hw_actions_construct(dev, job,
+				      &table->ats[action_template_index],
+				      action_template_index, actions,
+				      rule_acts, queue, error)) {
+		rte_errno = EINVAL;
+		goto free;
+	}
+	ret = mlx5dr_rule_create(table->matcher,
+				 0, NULL, action_template_index, rule_acts,
+				 &rule_attr, (struct mlx5dr_rule *)flow->rule);
+	if (likely(!ret))
+		return (struct rte_flow *)flow;
+free:
+	/* Flow creation failed, return the descriptor and flow memory. */
+	mlx5_ipool_free(table->flow, flow_idx);
+	priv->hw_q[queue].job_idx++;
+error:
+	rte_flow_error_set(error, rte_errno,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   "fail to create rte flow");
+	return NULL;
+}
+
 /**
  * Enqueue HW steering flow destruction.
  *
@@ -2636,6 +2748,7 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev,
 	job->user_data = user_data;
 	job->flow = fh;
 	rule_attr.user_data = job;
+	rule_attr.rule_idx = fh->rule_idx;
 	ret = mlx5dr_rule_destroy((struct mlx5dr_rule *)fh->rule, &rule_attr);
 	if (likely(!ret))
 		return 0;
@@ -8345,6 +8458,7 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.template_table_create = flow_hw_template_table_create,
 	.template_table_destroy = flow_hw_table_destroy,
 	.async_flow_create = flow_hw_async_flow_create,
+	.async_flow_create_by_index = flow_hw_async_flow_create_by_index,
 	.async_flow_destroy = flow_hw_async_flow_destroy,
 	.pull = flow_hw_pull,
 	.push = flow_hw_push,