From patchwork Tue Sep 26 15:58:35 2023
X-Patchwork-Submitter: Tomer Shmilovich
X-Patchwork-Id: 131971
X-Patchwork-Delegate: rasland@nvidia.com
From: Tomer Shmilovich
To: Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
CC:
Subject: [PATCH] net/mlx5: support group set miss actions API
Date: Tue, 26 Sep 2023 15:58:35 +0000
Message-ID: <20230926155836.3290061-1-tshmilovich@nvidia.com>
X-Mailer: git-send-email 2.34.1
Add implementation for rte_flow_group_set_miss_actions() API.

Signed-off-by: Tomer Shmilovich
Acked-by: Ori Kam
---
Depends-on: series-29572 ("ethdev: add group set miss actions API")
Depends-on: patch-130772 ("net/mlx5: fix jump ipool entry size")
Depends-on: patch-131567 ("net/mlx5/hws: supporting default miss table in HWS")

 drivers/net/mlx5/mlx5.h         |   2 +
 drivers/net/mlx5/mlx5_flow.c    |  41 +++++
 drivers/net/mlx5/mlx5_flow.h    |   9 +
 drivers/net/mlx5/mlx5_flow_hw.c | 301 ++++++++++++++++++++++++++++++++
 4 files changed, 353 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c587e13c63..1323bb4165 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1848,6 +1848,8 @@ struct mlx5_priv {
 	struct mlx5_hw_q *hw_q;
 	/* HW steering rte flow table list header. */
 	LIST_HEAD(flow_hw_tbl, rte_flow_template_table) flow_hw_tbl;
+	/* HW steering rte flow group list header */
+	LIST_HEAD(flow_hw_grp, mlx5_flow_group) flow_hw_grp;
 	struct mlx5dr_action *hw_push_vlan[MLX5DR_TABLE_TYPE_MAX];
 	struct mlx5dr_action *hw_pop_vlan[MLX5DR_TABLE_TYPE_MAX];
 	struct mlx5dr_action **hw_vport;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index f7f8f54eb4..2204fa05d2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1027,6 +1027,12 @@ static int
 mlx5_flow_table_destroy(struct rte_eth_dev *dev,
 			struct rte_flow_template_table *table,
 			struct rte_flow_error *error);
+static int
+mlx5_flow_group_set_miss_actions(struct rte_eth_dev *dev,
+				 uint32_t group_id,
+				 const struct rte_flow_group_attr *attr,
+				 const struct rte_flow_action actions[],
+				 struct rte_flow_error *error);
 static struct rte_flow *
 mlx5_flow_async_flow_create(struct rte_eth_dev *dev,
 			    uint32_t queue,
@@ -1151,6 +1157,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.actions_template_destroy = mlx5_flow_actions_template_destroy,
 	.template_table_create = mlx5_flow_table_create,
 	.template_table_destroy = mlx5_flow_table_destroy,
+	.group_set_miss_actions = mlx5_flow_group_set_miss_actions,
 	.async_create = mlx5_flow_async_flow_create,
 	.async_create_by_index = mlx5_flow_async_flow_create_by_index,
 	.async_destroy = mlx5_flow_async_flow_destroy,
@@ -9286,6 +9293,40 @@ mlx5_flow_table_destroy(struct rte_eth_dev *dev,
 	return fops->template_table_destroy(dev, table, error);
 }
 
+/**
+ * PMD group set miss actions.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to group attributes
+ * @param[in] actions
+ *   Array of actions
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_group_set_miss_actions(struct rte_eth_dev *dev,
+				 uint32_t group_id,
+				 const struct rte_flow_group_attr *attr,
+				 const struct rte_flow_action actions[],
+				 struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr fattr = {0};
+
+	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW)
+		return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"group set miss actions with incorrect steering mode");
+	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+	return fops->group_set_miss_actions(dev, group_id, attr, actions, error);
+}
+
 /**
  * Enqueue flow creation.
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 3a97975d69..5963474e10 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1369,9 +1369,11 @@ struct mlx5_hw_action_template {
 /* mlx5 flow group struct. */
 struct mlx5_flow_group {
 	struct mlx5_list_entry entry;
+	LIST_ENTRY(mlx5_flow_group) next;
 	struct rte_eth_dev *dev; /* Reference to corresponding device. */
 	struct mlx5dr_table *tbl; /* HWS table object. */
 	struct mlx5_hw_jump_action jump; /* Jump action. */
+	struct mlx5_flow_group *miss_group; /* Group pointed to by miss action. */
 	enum mlx5dr_table_type type; /* Table type. */
 	uint32_t group_id; /* Group id. */
 	uint32_t idx; /* Group memory index. */
@@ -1872,6 +1874,12 @@ typedef int (*mlx5_flow_table_destroy_t)
 			(struct rte_eth_dev *dev,
 			 struct rte_flow_template_table *table,
 			 struct rte_flow_error *error);
+typedef int (*mlx5_flow_group_set_miss_actions_t)
+			(struct rte_eth_dev *dev,
+			 uint32_t group_id,
+			 const struct rte_flow_group_attr *attr,
+			 const struct rte_flow_action actions[],
+			 struct rte_flow_error *error);
 typedef struct rte_flow *(*mlx5_flow_async_flow_create_t)
 			(struct rte_eth_dev *dev,
 			 uint32_t queue,
@@ -2010,6 +2018,7 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_actions_template_destroy_t actions_template_destroy;
 	mlx5_flow_table_create_t template_table_create;
 	mlx5_flow_table_destroy_t template_table_destroy;
+	mlx5_flow_group_set_miss_actions_t group_set_miss_actions;
 	mlx5_flow_async_flow_create_t async_flow_create;
 	mlx5_flow_async_flow_create_by_index_t async_flow_create_by_index;
 	mlx5_flow_async_flow_update_t async_flow_update;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index cbd741605b..91c6c749a2 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3800,6 +3800,301 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Parse group's miss actions.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] cfg
+ *   Pointer to the table_cfg structure.
+ * @param[in] actions
+ *   Array of actions to perform on group miss. Supported types:
+ *   RTE_FLOW_ACTION_TYPE_JUMP, RTE_FLOW_ACTION_TYPE_VOID, RTE_FLOW_ACTION_TYPE_END.
+ * @param[out] dst_group_id
+ *   Pointer to destination group id output. Will be set to 0 if actions is END,
+ *   otherwise will be set to destination group id.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+
+static int
+flow_hw_group_parse_miss_actions(struct rte_eth_dev *dev,
+				 struct mlx5_flow_template_table_cfg *cfg,
+				 const struct rte_flow_action actions[],
+				 uint32_t *dst_group_id,
+				 struct rte_flow_error *error)
+{
+	const struct rte_flow_action_jump *jump_conf;
+	uint32_t temp = 0;
+	uint32_t i;
+
+	for (i = 0; actions[i].type != RTE_FLOW_ACTION_TYPE_END; i++) {
+		switch (actions[i].type) {
+		case RTE_FLOW_ACTION_TYPE_VOID:
+			continue;
+		case RTE_FLOW_ACTION_TYPE_JUMP:
+			if (temp)
+				return rte_flow_error_set(error, ENOTSUP,
+						RTE_FLOW_ERROR_TYPE_UNSPECIFIED, actions,
+						"Miss actions can contain only a single JUMP");
+
+			jump_conf = (const struct rte_flow_action_jump *)actions[i].conf;
+			if (!jump_conf)
+				return rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						jump_conf, "Jump conf must not be NULL");
+
+			if (flow_hw_translate_group(dev, cfg, jump_conf->group, &temp, error))
+				return -rte_errno;
+
+			if (!temp)
+				return rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						"Failed to set group miss actions - Invalid target group");
+			break;
+		default:
+			return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+						  &actions[i], "Unsupported default miss action type");
+		}
+	}
+
+	*dst_group_id = temp;
+	return 0;
+}
+
+/**
+ * Set group's miss group.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] cfg
+ *   Pointer to the table_cfg structure.
+ * @param[in] src_grp
+ *   Pointer to source group structure.
+ *   If NULL, a new group will be created based on group id from cfg->attr.flow_attr.group.
+ * @param[in] dst_grp
+ *   Pointer to destination group structure.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+
+static int
+flow_hw_group_set_miss_group(struct rte_eth_dev *dev,
+			     struct mlx5_flow_template_table_cfg *cfg,
+			     struct mlx5_flow_group *src_grp,
+			     struct mlx5_flow_group *dst_grp,
+			     struct rte_flow_error *error)
+{
+	struct rte_flow_error sub_error = {
+		.type = RTE_FLOW_ERROR_TYPE_NONE,
+		.cause = NULL,
+		.message = NULL,
+	};
+	struct mlx5_flow_cb_ctx ctx = {
+		.dev = dev,
+		.error = &sub_error,
+		.data = &cfg->attr.flow_attr,
+	};
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_list_entry *ge;
+	bool ref = false;
+	int ret;
+
+	if (!dst_grp)
+		return -EINVAL;
+
+	/* If group doesn't exist - needs to be created. */
+	if (!src_grp) {
+		ge = mlx5_hlist_register(priv->sh->groups, cfg->attr.flow_attr.group, &ctx);
+		if (!ge)
+			return -rte_errno;
+
+		src_grp = container_of(ge, struct mlx5_flow_group, entry);
+		LIST_INSERT_HEAD(&priv->flow_hw_grp, src_grp, next);
+		ref = true;
+	} else if (!src_grp->miss_group) {
+		/* If group exists, but has no miss actions - need to increase ref_cnt. */
+		LIST_INSERT_HEAD(&priv->flow_hw_grp, src_grp, next);
+		src_grp->entry.ref_cnt++;
+		ref = true;
+	}
+
+	ret = mlx5dr_table_set_default_miss(src_grp->tbl, dst_grp->tbl);
+	if (ret)
+		goto mlx5dr_error;
+
+	/* If group existed and had old miss actions - ref_cnt is already correct.
+	 * However, need to reduce ref counter for old miss group.
+	 */
+	if (src_grp->miss_group)
+		mlx5_hlist_unregister(priv->sh->groups, &src_grp->miss_group->entry);
+
+	src_grp->miss_group = dst_grp;
+	return 0;
+
+mlx5dr_error:
+	/* Reduce src_grp ref_cnt back & remove from grp list in case of mlx5dr error. */
+	if (ref) {
+		mlx5_hlist_unregister(priv->sh->groups, &src_grp->entry);
+		LIST_REMOVE(src_grp, next);
+	}
+
+	return rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  "Failed to set group miss actions");
+}
+
+/**
+ * Unset group's miss group.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] grp
+ *   Pointer to group structure.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+
+static int
+flow_hw_group_unset_miss_group(struct rte_eth_dev *dev,
+			       struct mlx5_flow_group *grp,
+			       struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	int ret;
+
+	/* If group doesn't exist - no need to change anything. */
+	if (!grp)
+		return 0;
+
+	/* If group exists, but miss actions is already default behavior -
+	 * no need to change anything.
+	 */
+	if (!grp->miss_group)
+		return 0;
+
+	ret = mlx5dr_table_set_default_miss(grp->tbl, NULL);
+	if (ret)
+		return rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Failed to unset group miss actions");
+
+	mlx5_hlist_unregister(priv->sh->groups, &grp->miss_group->entry);
+	grp->miss_group = NULL;
+
+	LIST_REMOVE(grp, next);
+	mlx5_hlist_unregister(priv->sh->groups, &grp->entry);
+
+	return 0;
+}
+
+/**
+ * Set group miss actions.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] group_id
+ *   Group id.
+ * @param[in] attr
+ *   Pointer to group attributes structure.
+ * @param[in] actions
+ *   Array of actions to perform on group miss. Supported types:
+ *   RTE_FLOW_ACTION_TYPE_JUMP, RTE_FLOW_ACTION_TYPE_VOID, RTE_FLOW_ACTION_TYPE_END.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+
+static int
+flow_hw_group_set_miss_actions(struct rte_eth_dev *dev,
+			       uint32_t group_id,
+			       const struct rte_flow_group_attr *attr,
+			       const struct rte_flow_action actions[],
+			       struct rte_flow_error *error)
+{
+	struct rte_flow_error sub_error = {
+		.type = RTE_FLOW_ERROR_TYPE_NONE,
+		.cause = NULL,
+		.message = NULL,
+	};
+	struct mlx5_flow_template_table_cfg cfg = {
+		.external = true,
+		.attr = {
+			.flow_attr = {
+				.group = group_id,
+				.ingress = attr->ingress,
+				.egress = attr->egress,
+				.transfer = attr->transfer,
+			},
+		},
+	};
+	struct mlx5_flow_cb_ctx ctx = {
+		.dev = dev,
+		.error = &sub_error,
+		.data = &cfg.attr.flow_attr,
+	};
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_group *src_grp = NULL;
+	struct mlx5_flow_group *dst_grp = NULL;
+	struct mlx5_list_entry *ge;
+	uint32_t dst_group_id = 0;
+	int ret;
+
+	if (flow_hw_translate_group(dev, &cfg, group_id, &group_id, error))
+		return -rte_errno;
+
+	if (!group_id)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, "Failed to set group miss actions - invalid group id");
+
+	ret = flow_hw_group_parse_miss_actions(dev, &cfg, actions, &dst_group_id, error);
+	if (ret)
+		return -rte_errno;
+
+	if (dst_group_id == group_id) {
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, "Failed to set group miss actions - target group id must differ from group_id");
+	}
+
+	cfg.attr.flow_attr.group = group_id;
+	ge = mlx5_hlist_lookup(priv->sh->groups, group_id, &ctx);
+	if (ge)
+		src_grp = container_of(ge, struct mlx5_flow_group, entry);
+
+	if (dst_group_id) {
+		/* Increase ref_cnt for new miss group. */
+		cfg.attr.flow_attr.group = dst_group_id;
+		ge = mlx5_hlist_register(priv->sh->groups, dst_group_id, &ctx);
+		if (!ge)
+			return -rte_errno;
+
+		dst_grp = container_of(ge, struct mlx5_flow_group, entry);
+
+		cfg.attr.flow_attr.group = group_id;
+		ret = flow_hw_group_set_miss_group(dev, &cfg, src_grp, dst_grp, error);
+		if (ret)
+			goto error;
+	} else {
+		return flow_hw_group_unset_miss_group(dev, src_grp, error);
+	}
+
+	return 0;
+
+error:
+	if (dst_grp)
+		mlx5_hlist_unregister(priv->sh->groups, &dst_grp->entry);
+	return -rte_errno;
+}
+
 static bool
 flow_hw_modify_field_is_used(const struct rte_flow_action_modify_field *action,
 			     enum rte_flow_field_id field)
@@ -8009,6 +8304,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	struct rte_flow_template_table *tbl;
 	struct rte_flow_pattern_template *it;
 	struct rte_flow_actions_template *at;
+	struct mlx5_flow_group *grp;
 	uint32_t i;
 
 	if (!priv->dr_ctx)
@@ -8017,6 +8313,10 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	flow_hw_flush_all_ctrl_flows(dev);
 	flow_hw_cleanup_tx_repr_tagging(dev);
 	flow_hw_cleanup_ctrl_rx_tables(dev);
+	while (!LIST_EMPTY(&priv->flow_hw_grp)) {
+		grp = LIST_FIRST(&priv->flow_hw_grp);
+		flow_hw_group_unset_miss_group(dev, grp, NULL);
+	}
 	while (!LIST_EMPTY(&priv->flow_hw_tbl_ongo)) {
 		tbl = LIST_FIRST(&priv->flow_hw_tbl_ongo);
 		flow_hw_table_destroy(dev, tbl, NULL);
@@ -9344,6 +9644,7 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.actions_template_destroy = flow_hw_actions_template_destroy,
 	.template_table_create = flow_hw_template_table_create,
 	.template_table_destroy = flow_hw_table_destroy,
+	.group_set_miss_actions = flow_hw_group_set_miss_actions,
 	.async_flow_create = flow_hw_async_flow_create,
 	.async_flow_create_by_index = flow_hw_async_flow_create_by_index,
 	.async_flow_update = flow_hw_async_flow_update,
-- 
2.34.1