From patchwork Wed Mar 6 20:21:48 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 138064
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad, Bing Zhao
CC: ,
Subject: [PATCH 2/4] net/mlx5: fix templates clean up of FDB control flow rules
Date: Wed, 6 Mar 2024 21:21:48 +0100
Message-ID: <20240306202150.79577-2-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240306202150.79577-1-dsosnowski@nvidia.com>
References: <20240306202150.79577-1-dsosnowski@nvidia.com>
This patch refactors the creation and clean up of templates used for
FDB control flow rules when HWS is enabled.

All pattern and actions templates, and template tables, are stored in
a separate structure, `mlx5_flow_hw_ctrl_fdb`. It is allocated if and
only if E-Switch is enabled.

During HWS clean up, all of these templates are explicitly destroyed,
instead of relying on the general template clean up.

Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS")
Fixes: 49dffadf4b0c ("net/mlx5: fix LACP redirection in Rx domain")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
---
A short, standalone sketch of this cleanup pattern is included after the diff.

 drivers/net/mlx5/mlx5.h         |   6 +-
 drivers/net/mlx5/mlx5_flow.h    |  19 +++
 drivers/net/mlx5/mlx5_flow_hw.c | 255 ++++++++++++++++++--------------
 3 files changed, 166 insertions(+), 114 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 2fb3bb65cc..db68c8f884 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1894,11 +1894,7 @@ struct mlx5_priv {
 	rte_spinlock_t hw_ctrl_lock;
 	LIST_HEAD(hw_ctrl_flow, mlx5_hw_ctrl_flow) hw_ctrl_flows;
 	LIST_HEAD(hw_ext_ctrl_flow, mlx5_hw_ctrl_flow) hw_ext_ctrl_flows;
-	struct rte_flow_template_table *hw_esw_sq_miss_root_tbl;
-	struct rte_flow_template_table *hw_esw_sq_miss_tbl;
-	struct rte_flow_template_table *hw_esw_zero_tbl;
-	struct rte_flow_template_table *hw_tx_meta_cpy_tbl;
-	struct rte_flow_template_table *hw_lacp_rx_tbl;
+	struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
 	struct rte_flow_pattern_template *hw_tx_repr_tagging_pt;
 	struct rte_flow_actions_template *hw_tx_repr_tagging_at;
 	struct rte_flow_template_table *hw_tx_repr_tagging_tbl;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 714a41e997..d58290e5b4 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2775,6 +2775,25 @@ struct mlx5_flow_hw_ctrl_rx {
 			[MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX];
 };
 
+/* Contains all templates required for control flow rules in FDB with HWS. */
+struct mlx5_flow_hw_ctrl_fdb {
+	struct rte_flow_pattern_template *esw_mgr_items_tmpl;
+	struct rte_flow_actions_template *regc_jump_actions_tmpl;
+	struct rte_flow_template_table *hw_esw_sq_miss_root_tbl;
+	struct rte_flow_pattern_template *regc_sq_items_tmpl;
+	struct rte_flow_actions_template *port_actions_tmpl;
+	struct rte_flow_template_table *hw_esw_sq_miss_tbl;
+	struct rte_flow_pattern_template *port_items_tmpl;
+	struct rte_flow_actions_template *jump_one_actions_tmpl;
+	struct rte_flow_template_table *hw_esw_zero_tbl;
+	struct rte_flow_pattern_template *tx_meta_items_tmpl;
+	struct rte_flow_actions_template *tx_meta_actions_tmpl;
+	struct rte_flow_template_table *hw_tx_meta_cpy_tbl;
+	struct rte_flow_pattern_template *lacp_rx_items_tmpl;
+	struct rte_flow_actions_template *lacp_rx_actions_tmpl;
+	struct rte_flow_template_table *hw_lacp_rx_tbl;
+};
+
 #define MLX5_CTRL_PROMISCUOUS (RTE_BIT32(0))
 #define MLX5_CTRL_ALL_MULTICAST (RTE_BIT32(1))
 #define MLX5_CTRL_BROADCAST (RTE_BIT32(2))
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 4216433c6e..21c37b7539 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -9327,6 +9327,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
 	return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, error);
 }
 
+/**
+ * Cleans up all template tables and pattern, and actions templates used for
+ * FDB control flow rules.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ */
+static void
+flow_hw_cleanup_ctrl_fdb_tables(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
+
+	if (!priv->hw_ctrl_fdb)
+		return;
+	hw_ctrl_fdb = priv->hw_ctrl_fdb;
+	/* Clean up templates used for LACP default miss table. */
+	if (hw_ctrl_fdb->hw_lacp_rx_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_lacp_rx_tbl, NULL));
+	if (hw_ctrl_fdb->lacp_rx_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->lacp_rx_actions_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->lacp_rx_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->lacp_rx_items_tmpl,
+			   NULL));
+	/* Clean up templates used for default Tx metadata copy. */
+	if (hw_ctrl_fdb->hw_tx_meta_cpy_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_tx_meta_cpy_tbl, NULL));
+	if (hw_ctrl_fdb->tx_meta_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->tx_meta_actions_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->tx_meta_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->tx_meta_items_tmpl,
+			   NULL));
+	/* Clean up templates used for default FDB jump rule. */
+	if (hw_ctrl_fdb->hw_esw_zero_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_zero_tbl, NULL));
+	if (hw_ctrl_fdb->jump_one_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->jump_one_actions_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->port_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->port_items_tmpl,
+			   NULL));
+	/* Clean up templates used for default SQ miss flow rules - non-root table. */
+	if (hw_ctrl_fdb->hw_esw_sq_miss_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_tbl, NULL));
+	if (hw_ctrl_fdb->regc_sq_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->regc_sq_items_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->port_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->port_actions_tmpl,
+			   NULL));
+	/* Clean up templates used for default SQ miss flow rules - root table. */
+	if (hw_ctrl_fdb->hw_esw_sq_miss_root_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_root_tbl, NULL));
+	if (hw_ctrl_fdb->regc_jump_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev,
+			   hw_ctrl_fdb->regc_jump_actions_tmpl, NULL));
+	if (hw_ctrl_fdb->esw_mgr_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->esw_mgr_items_tmpl,
+			   NULL));
+	/* Clean up templates structure for FDB control flow rules. */
+	mlx5_free(hw_ctrl_fdb);
+	priv->hw_ctrl_fdb = NULL;
+}
+
 /*
  * Create a table on the root group to for the LACP traffic redirecting.
  *
@@ -9376,110 +9442,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev,
  * @return
  *   0 on success, negative values otherwise
  */
-static __rte_unused int
+static int
 flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_flow_pattern_template *esw_mgr_items_tmpl = NULL;
-	struct rte_flow_pattern_template *regc_sq_items_tmpl = NULL;
-	struct rte_flow_pattern_template *port_items_tmpl = NULL;
-	struct rte_flow_pattern_template *tx_meta_items_tmpl = NULL;
-	struct rte_flow_pattern_template *lacp_rx_items_tmpl = NULL;
-	struct rte_flow_actions_template *regc_jump_actions_tmpl = NULL;
-	struct rte_flow_actions_template *port_actions_tmpl = NULL;
-	struct rte_flow_actions_template *jump_one_actions_tmpl = NULL;
-	struct rte_flow_actions_template *tx_meta_actions_tmpl = NULL;
-	struct rte_flow_actions_template *lacp_rx_actions_tmpl = NULL;
+	struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
 	uint32_t xmeta = priv->sh->config.dv_xmeta_en;
 	uint32_t repr_matching = priv->sh->config.repr_matching;
-	int ret;
 
+	MLX5_ASSERT(priv->hw_ctrl_fdb == NULL);
+	hw_ctrl_fdb = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hw_ctrl_fdb), 0, SOCKET_ID_ANY);
+	if (!hw_ctrl_fdb) {
+		DRV_LOG(ERR, "port %u failed to allocate memory for FDB control flow templates",
+			dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto err;
+	}
+	priv->hw_ctrl_fdb = hw_ctrl_fdb;
 	/* Create templates and table for default SQ miss flow rules - root table. */
-	esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error);
-	if (!esw_mgr_items_tmpl) {
+	hw_ctrl_fdb->esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error);
+	if (!hw_ctrl_fdb->esw_mgr_items_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create E-Switch Manager item"
 			" template for control flows", dev->data->port_id);
 		goto err;
 	}
-	regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template(dev, error);
-	if (!regc_jump_actions_tmpl) {
+	hw_ctrl_fdb->regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template
+			(dev, error);
+	if (!hw_ctrl_fdb->regc_jump_actions_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create REG_C set and jump action template"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
-	MLX5_ASSERT(priv->hw_esw_sq_miss_root_tbl == NULL);
-	priv->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
-		(dev, esw_mgr_items_tmpl, regc_jump_actions_tmpl, error);
-	if (!priv->hw_esw_sq_miss_root_tbl) {
+	hw_ctrl_fdb->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
+			(dev, hw_ctrl_fdb->esw_mgr_items_tmpl, hw_ctrl_fdb->regc_jump_actions_tmpl,
+			 error);
+	if (!hw_ctrl_fdb->hw_esw_sq_miss_root_tbl) {
 		DRV_LOG(ERR, "port %u failed to create table for default sq miss (root table)"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
 	/* Create templates and table for default SQ miss flow rules - non-root table. */
-	regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error);
-	if (!regc_sq_items_tmpl) {
+	hw_ctrl_fdb->regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error);
+	if (!hw_ctrl_fdb->regc_sq_items_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create SQ item template for"
 			" control flows", dev->data->port_id);
 		goto err;
 	}
-	port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error);
-	if (!port_actions_tmpl) {
+	hw_ctrl_fdb->port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error);
+	if (!hw_ctrl_fdb->port_actions_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create port action template"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
-	MLX5_ASSERT(priv->hw_esw_sq_miss_tbl == NULL);
-	priv->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table(dev, regc_sq_items_tmpl,
-								     port_actions_tmpl, error);
-	if (!priv->hw_esw_sq_miss_tbl) {
+	hw_ctrl_fdb->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table
+			(dev, hw_ctrl_fdb->regc_sq_items_tmpl, hw_ctrl_fdb->port_actions_tmpl,
+			 error);
+	if (!hw_ctrl_fdb->hw_esw_sq_miss_tbl) {
 		DRV_LOG(ERR, "port %u failed to create table for default sq miss (non-root table)"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
 	/* Create templates and table for default FDB jump flow rules. */
-	port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error);
-	if (!port_items_tmpl) {
+	hw_ctrl_fdb->port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error);
+	if (!hw_ctrl_fdb->port_items_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create SQ item template for"
 			" control flows", dev->data->port_id);
 		goto err;
 	}
-	jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
+	hw_ctrl_fdb->jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
 			(dev, MLX5_HW_LOWEST_USABLE_GROUP, error);
-	if (!jump_one_actions_tmpl) {
+	if (!hw_ctrl_fdb->jump_one_actions_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create jump action template"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
-	MLX5_ASSERT(priv->hw_esw_zero_tbl == NULL);
-	priv->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table(dev, port_items_tmpl,
-							       jump_one_actions_tmpl,
-							       error);
-	if (!priv->hw_esw_zero_tbl) {
+	hw_ctrl_fdb->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table
+			(dev, hw_ctrl_fdb->port_items_tmpl, hw_ctrl_fdb->jump_one_actions_tmpl,
+			 error);
+	if (!hw_ctrl_fdb->hw_esw_zero_tbl) {
 		DRV_LOG(ERR, "port %u failed to create table for default jump to group 1"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
 	/* Create templates and table for default Tx metadata copy flow rule. */
 	if (!repr_matching && xmeta == MLX5_XMETA_MODE_META32_HWS) {
-		tx_meta_items_tmpl =
+		hw_ctrl_fdb->tx_meta_items_tmpl =
 			flow_hw_create_tx_default_mreg_copy_pattern_template(dev, error);
-		if (!tx_meta_items_tmpl) {
+		if (!hw_ctrl_fdb->tx_meta_items_tmpl) {
 			DRV_LOG(ERR, "port %u failed to Tx metadata copy pattern"
 				" template for control flows", dev->data->port_id);
 			goto err;
 		}
-		tx_meta_actions_tmpl =
+		hw_ctrl_fdb->tx_meta_actions_tmpl =
 			flow_hw_create_tx_default_mreg_copy_actions_template(dev, error);
-		if (!tx_meta_actions_tmpl) {
+		if (!hw_ctrl_fdb->tx_meta_actions_tmpl) {
 			DRV_LOG(ERR, "port %u failed to Tx metadata copy actions"
 				" template for control flows", dev->data->port_id);
 			goto err;
 		}
-		MLX5_ASSERT(priv->hw_tx_meta_cpy_tbl == NULL);
-		priv->hw_tx_meta_cpy_tbl =
-			flow_hw_create_tx_default_mreg_copy_table(dev, tx_meta_items_tmpl,
-								  tx_meta_actions_tmpl, error);
-		if (!priv->hw_tx_meta_cpy_tbl) {
+		hw_ctrl_fdb->hw_tx_meta_cpy_tbl =
+			flow_hw_create_tx_default_mreg_copy_table
+				(dev, hw_ctrl_fdb->tx_meta_items_tmpl,
+				 hw_ctrl_fdb->tx_meta_actions_tmpl, error);
+		if (!hw_ctrl_fdb->hw_tx_meta_cpy_tbl) {
 			DRV_LOG(ERR, "port %u failed to create table for default"
 				" Tx metadata copy flow rule", dev->data->port_id);
 			goto err;
@@ -9487,71 +9552,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
 		}
 	}
 	/* Create LACP default miss table. */
 	if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0 && priv->master) {
-		lacp_rx_items_tmpl = flow_hw_create_lacp_rx_pattern_template(dev, error);
-		if (!lacp_rx_items_tmpl) {
+		hw_ctrl_fdb->lacp_rx_items_tmpl =
+			flow_hw_create_lacp_rx_pattern_template(dev, error);
+		if (!hw_ctrl_fdb->lacp_rx_items_tmpl) {
 			DRV_LOG(ERR, "port %u failed to create pattern template"
 				" for LACP Rx traffic", dev->data->port_id);
 			goto err;
 		}
-		lacp_rx_actions_tmpl = flow_hw_create_lacp_rx_actions_template(dev, error);
-		if (!lacp_rx_actions_tmpl) {
+		hw_ctrl_fdb->lacp_rx_actions_tmpl =
+			flow_hw_create_lacp_rx_actions_template(dev, error);
+		if (!hw_ctrl_fdb->lacp_rx_actions_tmpl) {
 			DRV_LOG(ERR, "port %u failed to create actions template"
 				" for LACP Rx traffic", dev->data->port_id);
 			goto err;
 		}
-		priv->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table(dev, lacp_rx_items_tmpl,
-								    lacp_rx_actions_tmpl, error);
-		if (!priv->hw_lacp_rx_tbl) {
+		hw_ctrl_fdb->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table
+				(dev, hw_ctrl_fdb->lacp_rx_items_tmpl,
+				 hw_ctrl_fdb->lacp_rx_actions_tmpl, error);
+		if (!hw_ctrl_fdb->hw_lacp_rx_tbl) {
 			DRV_LOG(ERR, "port %u failed to create template table for"
 				" for LACP Rx traffic", dev->data->port_id);
 			goto err;
 		}
 	}
 	return 0;
+
 err:
-	/* Do not overwrite the rte_errno. */
-	ret = -rte_errno;
-	if (ret == 0)
-		ret = rte_flow_error_set(error, EINVAL,
-					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-					 "Failed to create control tables.");
-	if (priv->hw_tx_meta_cpy_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_tx_meta_cpy_tbl, NULL);
-		priv->hw_tx_meta_cpy_tbl = NULL;
-	}
-	if (priv->hw_esw_zero_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_esw_zero_tbl, NULL);
-		priv->hw_esw_zero_tbl = NULL;
-	}
-	if (priv->hw_esw_sq_miss_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_tbl, NULL);
-		priv->hw_esw_sq_miss_tbl = NULL;
-	}
-	if (priv->hw_esw_sq_miss_root_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_root_tbl, NULL);
-		priv->hw_esw_sq_miss_root_tbl = NULL;
-	}
-	if (lacp_rx_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, lacp_rx_actions_tmpl, NULL);
-	if (tx_meta_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, tx_meta_actions_tmpl, NULL);
-	if (jump_one_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, jump_one_actions_tmpl, NULL);
-	if (port_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, port_actions_tmpl, NULL);
-	if (regc_jump_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, regc_jump_actions_tmpl, NULL);
-	if (lacp_rx_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, lacp_rx_items_tmpl, NULL);
-	if (tx_meta_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, tx_meta_items_tmpl, NULL);
-	if (port_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, port_items_tmpl, NULL);
-	if (regc_sq_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, regc_sq_items_tmpl, NULL);
-	if (esw_mgr_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, esw_mgr_items_tmpl, NULL);
-	return ret;
+	flow_hw_cleanup_ctrl_fdb_tables(dev);
+	return -EINVAL;
 }
 
 static void
@@ -10583,6 +10611,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	action_template_drop_release(dev);
 	mlx5_flow_quota_destroy(dev);
 	flow_hw_destroy_send_to_kernel_action(priv);
+	flow_hw_cleanup_ctrl_fdb_tables(dev);
 	flow_hw_free_vport_actions(priv);
 	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
 		if (priv->hw_drop[i])
@@ -10645,6 +10674,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	dev->flow_fp_ops = &rte_flow_fp_default_ops;
 	flow_hw_rxq_flag_set(dev, false);
 	flow_hw_flush_all_ctrl_flows(dev);
+	flow_hw_cleanup_ctrl_fdb_tables(dev);
 	flow_hw_cleanup_tx_repr_tagging(dev);
 	flow_hw_cleanup_ctrl_rx_tables(dev);
 	action_template_drop_release(dev);
@@ -13211,8 +13241,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
 			proxy_port_id, port_id);
 		return 0;
 	}
-	if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
-	    !proxy_priv->hw_esw_sq_miss_tbl) {
+	if (!proxy_priv->hw_ctrl_fdb ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl) {
 		DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
 			     "default flow tables were not created.",
 			proxy_port_id, port_id);
@@ -13244,7 +13275,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
 		actions[2] = (struct rte_flow_action) {
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		};
-		ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_root_tbl,
+		ret = flow_hw_create_ctrl_flow(dev, proxy_dev,
+					       proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl,
 					       items, 0, actions, 0, &flow_info, external);
 		if (ret) {
 			DRV_LOG(ERR, "Port %u failed to create root SQ miss flow rule for SQ %u, ret %d",
@@ -13275,7 +13307,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		};
 		flow_info.type = MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS;
-		ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl,
+		ret = flow_hw_create_ctrl_flow(dev, proxy_dev,
+					       proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl,
 					       items, 0, actions, 0, &flow_info, external);
 		if (ret) {
 			DRV_LOG(ERR, "Port %u failed to create HWS SQ miss flow rule for SQ %u, ret %d",
@@ -13321,8 +13354,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
 	proxy_priv = proxy_dev->data->dev_private;
 	if (!proxy_priv->dr_ctx)
 		return 0;
-	if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
-	    !proxy_priv->hw_esw_sq_miss_tbl)
+	if (!proxy_priv->hw_ctrl_fdb ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl)
 		return 0;
 	cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows);
 	while (cf != NULL) {
@@ -13389,7 +13423,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
 			proxy_port_id, port_id);
 		return 0;
 	}
-	if (!proxy_priv->hw_esw_zero_tbl) {
+	if (!proxy_priv->hw_ctrl_fdb || !proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl) {
 		DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
 			     "default flow tables were not created.",
 			proxy_port_id, port_id);
@@ -13397,7 +13431,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	return flow_hw_create_ctrl_flow(dev, proxy_dev,
-					proxy_priv->hw_esw_zero_tbl,
+					proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl,
 					items, 0, actions, 0, &flow_info, false);
 }
 
@@ -13449,10 +13483,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
 	};
 
 	MLX5_ASSERT(priv->master);
-	if (!priv->dr_ctx || !priv->hw_tx_meta_cpy_tbl)
+	if (!priv->dr_ctx ||
+	    !priv->hw_ctrl_fdb ||
+	    !priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl)
 		return 0;
 	return flow_hw_create_ctrl_flow(dev, dev,
-					priv->hw_tx_meta_cpy_tbl,
+					priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl,
 					eth_all, 0, copy_reg_action, 0, &flow_info, false);
 }
 
@@ -13544,10 +13580,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
 		.type = MLX5_HW_CTRL_FLOW_TYPE_LACP_RX,
 	};
 
-	if (!priv->dr_ctx || !priv->hw_lacp_rx_tbl)
+	if (!priv->dr_ctx || !priv->hw_ctrl_fdb || !priv->hw_ctrl_fdb->hw_lacp_rx_tbl)
 		return 0;
-	return flow_hw_create_ctrl_flow(dev, dev, priv->hw_lacp_rx_tbl, eth_lacp, 0,
-					miss_action, 0, &flow_info, false);
+	return flow_hw_create_ctrl_flow(dev, dev,
+					priv->hw_ctrl_fdb->hw_lacp_rx_tbl,
+					eth_lacp, 0, miss_action, 0, &flow_info, false);
 }
 
 static uint32_t
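
The sketch below is not part of the patch. It is a minimal, self-contained C
illustration of the pattern the patch adopts: all related objects live in one
lazily allocated context structure, a single NULL-tolerant cleanup helper
releases whatever subset was created (and is reused by the error path), and
callers check the context pointer before dereferencing it. All names here
(ctrl_ctx, make_table, drop_table) are hypothetical and unrelated to the mlx5
driver.

#include <stdio.h>
#include <stdlib.h>

struct table { int id; };

/* One context holds every optional resource, mirroring mlx5_flow_hw_ctrl_fdb. */
struct ctrl_ctx {
	struct table *sq_miss_tbl;
	struct table *lacp_rx_tbl;
};

static struct table *
make_table(int id)
{
	struct table *t = calloc(1, sizeof(*t));

	if (t != NULL)
		t->id = id;
	return t;
}

static void
drop_table(struct table *t)
{
	free(t);
}

/* Single cleanup entry point: safe to call on a partially built context. */
static void
ctrl_ctx_cleanup(struct ctrl_ctx **ctx)
{
	if (*ctx == NULL)
		return;
	if ((*ctx)->lacp_rx_tbl)
		drop_table((*ctx)->lacp_rx_tbl);
	if ((*ctx)->sq_miss_tbl)
		drop_table((*ctx)->sq_miss_tbl);
	free(*ctx);
	*ctx = NULL;
}

static int
ctrl_ctx_create(struct ctrl_ctx **ctx)
{
	*ctx = calloc(1, sizeof(**ctx));
	if (*ctx == NULL)
		return -1;
	(*ctx)->sq_miss_tbl = make_table(1);
	if ((*ctx)->sq_miss_tbl == NULL)
		goto err;
	(*ctx)->lacp_rx_tbl = make_table(2);
	if ((*ctx)->lacp_rx_tbl == NULL)
		goto err;
	return 0;
err:
	/* The error path reuses the same cleanup as regular teardown. */
	ctrl_ctx_cleanup(ctx);
	return -1;
}

int
main(void)
{
	struct ctrl_ctx *ctx = NULL;

	if (ctrl_ctx_create(&ctx) == 0)
		printf("created tables %d and %d\n",
		       ctx->sq_miss_tbl->id, ctx->lacp_rx_tbl->id);
	/* Callers must tolerate a missing context, as the patched checks do. */
	if (ctx != NULL && ctx->lacp_rx_tbl != NULL)
		printf("LACP table present\n");
	ctrl_ctx_cleanup(&ctx);
	return 0;
}

The sketch builds with any C compiler, e.g. "cc sketch.c && ./a.out"; the
point of interest is that creation failures and normal teardown funnel into
the same NULL-guarded cleanup routine, which is what the patch centralizes in
flow_hw_cleanup_ctrl_fdb_tables().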