From patchwork Fri Sep 30 12:53:14 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 117226
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko, Ray Kinsella
Cc: Dariusz Sosnowski, Xueming Li
Subject: [PATCH v3 16/17] net/mlx5: support device control for E-Switch default rule
Date: Fri, 30 Sep 2022 15:53:14 +0300
Message-ID: <20220930125315.5079-17-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20220930125315.5079-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
 <20220930125315.5079-1-suanmingm@nvidia.com>

From: Dariusz Sosnowski

This patch adds support for the fdb_def_rule_en device argument to HW
Steering, which controls:

- creation of the default FDB jump flow rule,
- the ability of the user to create transfer flow rules in the root table.

A new PMD API is also added to allow a user application to enable traffic
for a given port ID and SQ number, directing packets to the wire.

Signed-off-by: Dariusz Sosnowski
Signed-off-by: Xueming Li
---
 drivers/net/mlx5/linux/mlx5_os.c |  14 ++
 drivers/net/mlx5/mlx5.h          |   4 +-
 drivers/net/mlx5/mlx5_flow.c     |  28 ++--
 drivers/net/mlx5/mlx5_flow.h     |  11 +-
 drivers/net/mlx5/mlx5_flow_dv.c  |  78 +++++----
 drivers/net/mlx5/mlx5_flow_hw.c  | 279 +++++++++++++++----------------
 drivers/net/mlx5/mlx5_trigger.c  |  31 ++--
 drivers/net/mlx5/mlx5_tx.h       |   1 +
 drivers/net/mlx5/mlx5_txq.c      |  47 ++++++
 drivers/net/mlx5/rte_pmd_mlx5.h  |  17 ++
 drivers/net/mlx5/version.map     |   1 +
 11 files changed, 305 insertions(+), 206 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 60a1a391fb..de8c003d02 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1554,6 +1554,20 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	rte_rwlock_init(&priv->ind_tbls_lock);
 	if (priv->sh->config.dv_flow_en == 2) {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
+		if (priv->sh->config.dv_esw_en) {
+			if (priv->sh->dv_regc0_mask == UINT32_MAX) {
+				DRV_LOG(ERR, "E-Switch port metadata is required when using HWS "
+					"but it is disabled (configure it through devlink)");
+				err = ENOTSUP;
+				goto error;
+			}
+			if (priv->sh->dv_regc0_mask == 0) {
+				DRV_LOG(ERR, "E-Switch with HWS is not supported "
+					"(no available bits in reg_c[0])");
+				err = ENOTSUP;
+				goto error;
+			}
+		}
 		if (priv->vport_meta_mask)
 			flow_hw_set_port_info(eth_dev);
 		if (priv->sh->config.dv_esw_en &&
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f6033710aa..419b5a18ca 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2015,7 +2015,7 @@ int mlx5_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops);
 int mlx5_flow_start_default(struct rte_eth_dev *dev);
 void mlx5_flow_stop_default(struct rte_eth_dev *dev);
 int mlx5_flow_verify(struct rte_eth_dev *dev);
-int mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, uint32_t queue);
+int mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, uint32_t sq_num);
 int mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
 			struct rte_flow_item_eth *eth_spec,
 			struct rte_flow_item_eth *eth_mask,
@@ -2027,7 +2027,7 @@ int mlx5_ctrl_flow(struct rte_eth_dev *dev,
 int mlx5_flow_lacp_miss(struct rte_eth_dev *dev);
 struct rte_flow *mlx5_flow_create_esw_table_zero_flow(struct rte_eth_dev *dev);
 uint32_t mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev,
-					    uint32_t txq);
+					    uint32_t sq_num);
 void mlx5_flow_async_pool_query_handle(struct mlx5_dev_ctx_shared *sh,
 				       uint64_t async_id, int status);
 void mlx5_set_query_alarm(struct mlx5_dev_ctx_shared *sh);
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index bc2ccb4d3c..2142cd828a 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -7155,14 +7155,14 @@ mlx5_flow_create_esw_table_zero_flow(struct rte_eth_dev *dev)
  *
  * @param dev
  *   Pointer to Ethernet device.
- * @param txq
- *   Txq index.
+ * @param sq_num
+ *   SQ number.
  *
  * @return
  *   Flow ID on success, 0 otherwise and rte_errno is set.
  */
 uint32_t
-mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq)
+mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sq_num)
 {
 	struct rte_flow_attr attr = {
 		.group = 0,
@@ -7174,8 +7174,8 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq)
 	struct rte_flow_item_port_id port_spec = {
 		.id = MLX5_PORT_ESW_MGR,
 	};
-	struct mlx5_rte_flow_item_tx_queue txq_spec = {
-		.queue = txq,
+	struct mlx5_rte_flow_item_sq sq_spec = {
+		.queue = sq_num,
 	};
 	struct rte_flow_item pattern[] = {
 		{
@@ -7184,8 +7184,8 @@
 		},
 		{
 			.type = (enum rte_flow_item_type)
-				MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE,
-			.spec = &txq_spec,
+				MLX5_RTE_FLOW_ITEM_TYPE_SQ,
+			.spec = &sq_spec,
 		},
 		{
 			.type = RTE_FLOW_ITEM_TYPE_END,
@@ -7556,30 +7556,30 @@ mlx5_flow_verify(struct rte_eth_dev *dev __rte_unused)
  *
  * @param dev
  *   Pointer to Ethernet device.
- * @param queue
- *   The queue index.
+ * @param sq_num
+ *   The SQ hw number.
  *
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
 mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev,
-			    uint32_t queue)
+			    uint32_t sq_num)
 {
 	const struct rte_flow_attr attr = {
 		.egress = 1,
 		.priority = 0,
 	};
-	struct mlx5_rte_flow_item_tx_queue queue_spec = {
-		.queue = queue,
+	struct mlx5_rte_flow_item_sq queue_spec = {
+		.queue = sq_num,
 	};
-	struct mlx5_rte_flow_item_tx_queue queue_mask = {
+	struct mlx5_rte_flow_item_sq queue_mask = {
 		.queue = UINT32_MAX,
 	};
 	struct rte_flow_item items[] = {
 		{
 			.type = (enum rte_flow_item_type)
-				MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE,
+				MLX5_RTE_FLOW_ITEM_TYPE_SQ,
 			.spec = &queue_spec,
 			.last = NULL,
 			.mask = &queue_mask,
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 3f4aa080bb..63f946473d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -29,7 +29,7 @@
 enum mlx5_rte_flow_item_type {
 	MLX5_RTE_FLOW_ITEM_TYPE_END = INT_MIN,
 	MLX5_RTE_FLOW_ITEM_TYPE_TAG,
-	MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE,
+	MLX5_RTE_FLOW_ITEM_TYPE_SQ,
 	MLX5_RTE_FLOW_ITEM_TYPE_VLAN,
 	MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL,
 };
@@ -115,8 +115,8 @@ struct mlx5_flow_action_copy_mreg {
 };
 
 /* Matches on source queue. */
-struct mlx5_rte_flow_item_tx_queue {
-	uint32_t queue;
+struct mlx5_rte_flow_item_sq {
+	uint32_t queue; /* DevX SQ number */
 };
 
 /* Feature name to allocate metadata register. */
@@ -179,7 +179,7 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_LAYER_GENEVE (1u << 26)
 
 /* Queue items. */
-#define MLX5_FLOW_ITEM_TX_QUEUE (1u << 27)
+#define MLX5_FLOW_ITEM_SQ (1u << 27)
 
 /* Pattern tunnel Layer bits (continued). */
 #define MLX5_FLOW_LAYER_GTP (1u << 28)
@@ -2475,9 +2475,8 @@ int mlx5_flow_pick_transfer_proxy(struct rte_eth_dev *dev,
 
 int mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *dev);
 
-int mlx5_flow_hw_esw_create_mgr_sq_miss_flow(struct rte_eth_dev *dev);
 int mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev,
-					 uint32_t txq);
+					 uint32_t sqn);
 int mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev);
 int mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev);
 int mlx5_flow_actions_validate(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index e86a06eae6..0f6fd34a8b 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7453,8 +7453,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				return ret;
 			last_item = MLX5_FLOW_ITEM_TAG;
 			break;
-		case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE:
-			last_item = MLX5_FLOW_ITEM_TX_QUEUE;
+		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
+			last_item = MLX5_FLOW_ITEM_SQ;
 			break;
 		case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
 			break;
@@ -8343,7 +8343,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	 * work due to metadata regC0 mismatch.
 	 */
 	if ((!attr->transfer && attr->egress) && priv->representor &&
-	    !(item_flags & MLX5_FLOW_ITEM_TX_QUEUE))
+	    !(item_flags & MLX5_FLOW_ITEM_SQ))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM,
 					  NULL,
@@ -10123,6 +10123,29 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *key,
 	return 0;
 }
 
+/**
+ * Translate port representor item to eswitch match on port id.
+ *
+ * @param[in] dev
+ *   The device to configure through.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] key_type
+ *   Set flow matcher mask or value.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise.
+ */
+static int
+flow_dv_translate_item_port_representor(struct rte_eth_dev *dev, void *key,
+					uint32_t key_type)
+{
+	flow_dv_translate_item_source_vport(key,
+			key_type & MLX5_SET_MATCHER_V ?
+			mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff);
+	return 0;
+}
+
 /**
  * Translate represented port item to eswitch match on port id.
  *
@@ -11402,10 +11425,10 @@ flow_dv_translate_create_counter(struct rte_eth_dev *dev,
 }
 
 /**
- * Add Tx queue matcher
+ * Add SQ matcher
  *
- * @param[in] dev
- *   Pointer to the dev struct.
+ * @param[in, out] matcher
+ *   Flow matcher.
  * @param[in, out] key
  *   Flow matcher value.
 * @param[in] item
@@ -11414,40 +11437,29 @@ flow_dv_translate_create_counter(struct rte_eth_dev *dev,
 *   Set flow matcher mask or value.
  */
 static void
-flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev,
-				void *key,
-				const struct rte_flow_item *item,
-				uint32_t key_type)
+flow_dv_translate_item_sq(void *key,
+			  const struct rte_flow_item *item,
+			  uint32_t key_type)
 {
-	const struct mlx5_rte_flow_item_tx_queue *queue_m;
-	const struct mlx5_rte_flow_item_tx_queue *queue_v;
-	const struct mlx5_rte_flow_item_tx_queue queue_mask = {
+	const struct mlx5_rte_flow_item_sq *queue_m;
+	const struct mlx5_rte_flow_item_sq *queue_v;
+	const struct mlx5_rte_flow_item_sq queue_mask = {
 		.queue = UINT32_MAX,
 	};
-	void *misc_v =
-		MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-	struct mlx5_txq_ctrl *txq = NULL;
+	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
 	uint32_t queue;
 
 	MLX5_ITEM_UPDATE(item, key_type, queue_v, queue_m, &queue_mask);
 	if (!queue_m || !queue_v)
 		return;
 	if (key_type & MLX5_SET_MATCHER_V) {
-		txq = mlx5_txq_get(dev, queue_v->queue);
-		if (!txq)
-			return;
-		if (txq->is_hairpin)
-			queue = txq->obj->sq->id;
-		else
-			queue = txq->obj->sq_obj.sq->id;
+		queue = queue_v->queue;
 		if (key_type == MLX5_SET_MATCHER_SW_V)
 			queue &= queue_m->queue;
 	} else {
 		queue = queue_m->queue;
 	}
 	MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue);
-	if (txq)
-		mlx5_txq_release(dev, queue_v->queue);
 }
 
 /**
@@ -13148,6 +13160,11 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 				(dev, key, items, wks->attr, key_type);
 			last_item = MLX5_FLOW_ITEM_PORT_ID;
 			break;
+		case RTE_FLOW_ITEM_TYPE_PORT_REPRESENTOR:
+			flow_dv_translate_item_port_representor
+				(dev, key, key_type);
+			last_item = MLX5_FLOW_ITEM_PORT_REPRESENTOR;
+			break;
 		case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
 			flow_dv_translate_item_represented_port
 				(dev, key, items, wks->attr, key_type);
@@ -13353,9 +13370,9 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 			flow_dv_translate_mlx5_item_tag(dev, key, items, key_type);
 			last_item = MLX5_FLOW_ITEM_TAG;
 			break;
-		case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE:
-			flow_dv_translate_item_tx_queue(dev, key, items, key_type);
-			last_item = MLX5_FLOW_ITEM_TX_QUEUE;
+		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
+			flow_dv_translate_item_sq(key, items, key_type);
+			last_item = MLX5_FLOW_ITEM_SQ;
 			break;
 		case RTE_FLOW_ITEM_TYPE_GTP:
 			flow_dv_translate_item_gtp(key, items, tunnel, key_type);
@@ -13564,7 +13581,6 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 			wks.last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
 						 MLX5_FLOW_ITEM_OUTER_FLEX;
 			break;
-
 		default:
 			ret = flow_dv_translate_items(dev, items, &wks_m,
 				match_mask, MLX5_SET_MATCHER_SW_M, error);
@@ -13587,7 +13603,9 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 	 * in use.
 	 */
 	if (!(wks.item_flags & MLX5_FLOW_ITEM_PORT_ID) &&
-	    !(wks.item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode &&
+	    !(wks.item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) &&
+	    !(wks.item_flags & MLX5_FLOW_ITEM_PORT_REPRESENTOR) &&
+	    priv->sh->esw_mode &&
 	    !(attr->egress && !attr->transfer) &&
 	    attr->group != MLX5_FLOW_MREG_CP_TABLE_GROUP) {
 		if (flow_dv_translate_item_port_id_all(dev, match_mask,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2b5eab6659..b2824ad8fe 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3173,7 +3173,10 @@ flow_hw_translate_group(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_attr *flow_attr = &cfg->attr.flow_attr;
 
-	if (priv->sh->config.dv_esw_en && cfg->external && flow_attr->transfer) {
+	if (priv->sh->config.dv_esw_en &&
+	    priv->fdb_def_rule &&
+	    cfg->external &&
+	    flow_attr->transfer) {
 		if (group > MLX5_HW_MAX_TRANSFER_GROUP)
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
@@ -4648,7 +4651,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ITEM_TYPE_GTP:
 		case RTE_FLOW_ITEM_TYPE_GTP_PSC:
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-		case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE:
+		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
 		case RTE_FLOW_ITEM_TYPE_GRE:
 		case RTE_FLOW_ITEM_TYPE_GRE_KEY:
 		case RTE_FLOW_ITEM_TYPE_GRE_OPTION:
@@ -5141,14 +5144,23 @@ flow_hw_free_vport_actions(struct mlx5_priv *priv)
 }
 
 static uint32_t
-flow_hw_usable_lsb_vport_mask(struct mlx5_priv *priv)
+flow_hw_esw_mgr_regc_marker_mask(struct rte_eth_dev *dev)
 {
-	uint32_t usable_mask = ~priv->vport_meta_mask;
+	uint32_t mask = MLX5_SH(dev)->dv_regc0_mask;
 
-	if (usable_mask)
-		return (1 << rte_bsf32(usable_mask));
-	else
-		return 0;
+	/* Mask is verified during device initialization. */
+	MLX5_ASSERT(mask != 0);
+	return mask;
+}
+
+static uint32_t
+flow_hw_esw_mgr_regc_marker(struct rte_eth_dev *dev)
+{
+	uint32_t mask = MLX5_SH(dev)->dv_regc0_mask;
+
+	/* Mask is verified during device initialization. */
+	MLX5_ASSERT(mask != 0);
+	return RTE_BIT32(rte_bsf32(mask));
 }
 
 /**
@@ -5174,12 +5186,19 @@ flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev)
 	struct rte_flow_item_ethdev port_mask = {
 		.port_id = UINT16_MAX,
 	};
+	struct mlx5_rte_flow_item_sq sq_mask = {
+		.queue = UINT32_MAX,
+	};
 	struct rte_flow_item items[] = {
 		{
 			.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
 			.spec = &port_spec,
 			.mask = &port_mask,
 		},
+		{
+			.type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_SQ,
+			.mask = &sq_mask,
+		},
 		{
 			.type = RTE_FLOW_ITEM_TYPE_END,
 		},
@@ -5189,9 +5208,10 @@ flow_hw_create_ctrl_esw_mgr_pattern_template(struct rte_eth_dev *dev)
 }
 
 /**
- * Creates a flow pattern template used to match REG_C_0 and a TX queue.
- * Matching on REG_C_0 is set up to match on least significant bit usable
- * by user-space, which is set when packet was originated from E-Switch Manager.
+ * Creates a flow pattern template used to match REG_C_0 and a SQ.
+ * Matching on REG_C_0 is set up to match on all bits usable by user-space.
+ * If traffic was sent from E-Switch Manager, then all usable bits will be set to 0,
+ * except the least significant bit, which will be set to 1.
  *
  * This template is used to set up a table for SQ miss default flow.
  *
@@ -5204,8 +5224,6 @@
 static struct rte_flow_pattern_template *
 flow_hw_create_ctrl_regc_sq_pattern_template(struct rte_eth_dev *dev)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	uint32_t marker_bit = flow_hw_usable_lsb_vport_mask(priv);
 	struct rte_flow_pattern_template_attr attr = {
 		.relaxed_matching = 0,
 		.transfer = 1,
@@ -5215,8 +5233,9 @@
 	};
 	struct rte_flow_item_tag reg_c0_mask = {
 		.index = 0xff,
+		.data = flow_hw_esw_mgr_regc_marker_mask(dev),
 	};
-	struct mlx5_rte_flow_item_tx_queue queue_mask = {
+	struct mlx5_rte_flow_item_sq queue_mask = {
 		.queue = UINT32_MAX,
 	};
 	struct rte_flow_item items[] = {
@@ -5228,7 +5247,7 @@
 		},
 		{
 			.type = (enum rte_flow_item_type)
-				MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE,
+				MLX5_RTE_FLOW_ITEM_TYPE_SQ,
 			.mask = &queue_mask,
 		},
 		{
@@ -5236,12 +5255,6 @@
 		},
 	};
 
-	if (!marker_bit) {
-		DRV_LOG(ERR, "Unable to set up pattern template for SQ miss table");
-		return NULL;
-	}
-	reg_c0_spec.data = marker_bit;
-	reg_c0_mask.data = marker_bit;
 	return flow_hw_pattern_template_create(dev, &attr, items, NULL);
 }
@@ -5333,9 +5346,8 @@ flow_hw_create_tx_default_mreg_copy_pattern_template(struct rte_eth_dev *dev)
 static struct rte_flow_actions_template *
 flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	uint32_t marker_bit = flow_hw_usable_lsb_vport_mask(priv);
-	uint32_t marker_bit_mask = UINT32_MAX;
+	uint32_t marker_mask = flow_hw_esw_mgr_regc_marker_mask(dev);
+	uint32_t marker_bits = flow_hw_esw_mgr_regc_marker(dev);
 	struct rte_flow_actions_template_attr attr = {
 		.transfer = 1,
 	};
@@ -5348,7 +5360,7 @@
 		.src = {
 			.field = RTE_FLOW_FIELD_VALUE,
 		},
-		.width = 1,
+		.width = __builtin_popcount(marker_mask),
 	};
 	struct rte_flow_action_modify_field set_reg_m = {
 		.operation = RTE_FLOW_MODIFY_SET,
@@ -5395,13 +5407,9 @@
 		}
 	};
 
-	if (!marker_bit) {
-		DRV_LOG(ERR, "Unable to set up actions template for SQ miss table");
-		return NULL;
-	}
-	set_reg_v.dst.offset = rte_bsf32(marker_bit);
-	rte_memcpy(set_reg_v.src.value, &marker_bit, sizeof(marker_bit));
-	rte_memcpy(set_reg_m.src.value, &marker_bit_mask, sizeof(marker_bit_mask));
+	set_reg_v.dst.offset = rte_bsf32(marker_mask);
+	rte_memcpy(set_reg_v.src.value, &marker_bits, sizeof(marker_bits));
+	rte_memcpy(set_reg_m.src.value, &marker_mask, sizeof(marker_mask));
 	return flow_hw_actions_template_create(dev, &attr, actions_v,
 					       actions_m, NULL);
 }
@@ -5588,7 +5596,7 @@ flow_hw_create_ctrl_sq_miss_root_table(struct rte_eth_dev *dev,
 	struct rte_flow_template_table_attr attr = {
 		.flow_attr = {
 			.group = 0,
-			.priority = 0,
+			.priority = MLX5_HW_LOWEST_PRIO_ROOT,
 			.ingress = 0,
 			.egress = 0,
 			.transfer = 1,
@@ -5703,7 +5711,7 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
 	struct rte_flow_template_table_attr attr = {
 		.flow_attr = {
 			.group = 0,
-			.priority = MLX5_HW_LOWEST_PRIO_ROOT,
+			.priority = 0,
 			.ingress = 0,
 			.egress = 0,
 			.transfer = 1,
@@ -7765,141 +7773,123 @@ flow_hw_flush_all_ctrl_flows(struct rte_eth_dev *dev)
 }
 
 int
-mlx5_flow_hw_esw_create_mgr_sq_miss_flow(struct rte_eth_dev *dev)
+mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_flow_item_ethdev port_spec = {
+	uint16_t port_id = dev->data->port_id;
+	struct rte_flow_item_ethdev esw_mgr_spec = {
 		.port_id = MLX5_REPRESENTED_PORT_ESW_MGR,
 	};
-	struct rte_flow_item_ethdev port_mask = {
+	struct rte_flow_item_ethdev esw_mgr_mask = {
 		.port_id = MLX5_REPRESENTED_PORT_ESW_MGR,
 	};
-	struct rte_flow_item items[] = {
-		{
-			.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
-			.spec = &port_spec,
-			.mask = &port_mask,
-		},
-		{
-			.type = RTE_FLOW_ITEM_TYPE_END,
-		},
-	};
-	struct rte_flow_action_modify_field modify_field = {
-		.operation = RTE_FLOW_MODIFY_SET,
-		.dst = {
-			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-		},
-		.src = {
-			.field = RTE_FLOW_FIELD_VALUE,
-		},
-		.width = 1,
-	};
-	struct rte_flow_action_jump jump = {
-		.group = 1,
-	};
-	struct rte_flow_action actions[] = {
-		{
-			.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
-			.conf = &modify_field,
-		},
-		{
-			.type = RTE_FLOW_ACTION_TYPE_JUMP,
-			.conf = &jump,
-		},
-		{
-			.type = RTE_FLOW_ACTION_TYPE_END,
-		},
-	};
-
-	MLX5_ASSERT(priv->master);
-	if (!priv->dr_ctx ||
-	    !priv->hw_esw_sq_miss_root_tbl)
-		return 0;
-	return flow_hw_create_ctrl_flow(dev, dev,
-					priv->hw_esw_sq_miss_root_tbl,
-					items, 0, actions, 0);
-}
-
-int
-mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq)
-{
-	uint16_t port_id = dev->data->port_id;
 	struct rte_flow_item_tag reg_c0_spec = {
 		.index = (uint8_t)REG_C_0,
+		.data = flow_hw_esw_mgr_regc_marker(dev),
 	};
 	struct rte_flow_item_tag reg_c0_mask = {
 		.index = 0xff,
+		.data = flow_hw_esw_mgr_regc_marker_mask(dev),
 	};
-	struct mlx5_rte_flow_item_tx_queue queue_spec = {
-		.queue = txq,
-	};
-	struct mlx5_rte_flow_item_tx_queue queue_mask = {
-		.queue = UINT32_MAX,
-	};
-	struct rte_flow_item items[] = {
-		{
-			.type = (enum rte_flow_item_type)
-				MLX5_RTE_FLOW_ITEM_TYPE_TAG,
-			.spec = &reg_c0_spec,
-			.mask = &reg_c0_mask,
-		},
-		{
-			.type = (enum rte_flow_item_type)
-				MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE,
-			.spec = &queue_spec,
-			.mask = &queue_mask,
-		},
-		{
-			.type = RTE_FLOW_ITEM_TYPE_END,
-		},
+	struct mlx5_rte_flow_item_sq sq_spec = {
+		.queue = sqn,
 	};
 	struct rte_flow_action_ethdev port = {
 		.port_id = port_id,
 	};
-	struct rte_flow_action actions[] = {
-		{
-			.type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
-			.conf = &port,
-		},
-		{
-			.type = RTE_FLOW_ACTION_TYPE_END,
-		},
-	};
+	struct rte_flow_item items[3] = { { 0 } };
+	struct rte_flow_action actions[3] = { { 0 } };
 	struct rte_eth_dev *proxy_dev;
 	struct mlx5_priv *proxy_priv;
 	uint16_t proxy_port_id = dev->data->port_id;
-	uint32_t marker_bit;
 	int ret;
 
-	RTE_SET_USED(txq);
 	ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL);
 	if (ret) {
-		DRV_LOG(ERR, "Unable to pick proxy port for port %u", port_id);
+		DRV_LOG(ERR, "Unable to pick transfer proxy port for port %u. Transfer proxy "
+			"port must be present to create default SQ miss flows.",
+			port_id);
 		return ret;
 	}
 	proxy_dev = &rte_eth_devices[proxy_port_id];
 	proxy_priv = proxy_dev->data->dev_private;
-	if (!proxy_priv->dr_ctx)
+	if (!proxy_priv->dr_ctx) {
+		DRV_LOG(DEBUG, "Transfer proxy port (port %u) of port %u must be configured "
+			"for HWS to create default SQ miss flows. Default flows will "
Default flows will " + "not be created.", + proxy_port_id, port_id); return 0; + } if (!proxy_priv->hw_esw_sq_miss_root_tbl || !proxy_priv->hw_esw_sq_miss_tbl) { - DRV_LOG(ERR, "port %u proxy port %u was configured but default" - " flow tables are not created", - port_id, proxy_port_id); + DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but " + "default flow tables were not created.", + proxy_port_id, port_id); rte_errno = ENOMEM; return -rte_errno; } - marker_bit = flow_hw_usable_lsb_vport_mask(proxy_priv); - if (!marker_bit) { - DRV_LOG(ERR, "Unable to set up control flow in SQ miss table"); - rte_errno = EINVAL; - return -rte_errno; + /* + * Create a root SQ miss flow rule - match E-Switch Manager and SQ, + * and jump to group 1. + */ + items[0] = (struct rte_flow_item){ + .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, + .spec = &esw_mgr_spec, + .mask = &esw_mgr_mask, + }; + items[1] = (struct rte_flow_item){ + .type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_SQ, + .spec = &sq_spec, + }; + items[2] = (struct rte_flow_item){ + .type = RTE_FLOW_ITEM_TYPE_END, + }; + actions[0] = (struct rte_flow_action){ + .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, + }; + actions[1] = (struct rte_flow_action){ + .type = RTE_FLOW_ACTION_TYPE_JUMP, + }; + actions[2] = (struct rte_flow_action) { + .type = RTE_FLOW_ACTION_TYPE_END, + }; + ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_root_tbl, + items, 0, actions, 0); + if (ret) { + DRV_LOG(ERR, "Port %u failed to create root SQ miss flow rule for SQ %u, ret %d", + port_id, sqn, ret); + return ret; } - reg_c0_spec.data = marker_bit; - reg_c0_mask.data = marker_bit; - return flow_hw_create_ctrl_flow(dev, proxy_dev, - proxy_priv->hw_esw_sq_miss_tbl, - items, 0, actions, 0); + /* + * Create a non-root SQ miss flow rule - match REG_C_0 marker and SQ, + * and forward to port. + */ + items[0] = (struct rte_flow_item){ + .type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_TAG, + .spec = ®_c0_spec, + .mask = ®_c0_mask, + }; + items[1] = (struct rte_flow_item){ + .type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_SQ, + .spec = &sq_spec, + }; + items[2] = (struct rte_flow_item){ + .type = RTE_FLOW_ITEM_TYPE_END, + }; + actions[0] = (struct rte_flow_action){ + .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, + .conf = &port, + }; + actions[1] = (struct rte_flow_action){ + .type = RTE_FLOW_ACTION_TYPE_END, + }; + ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl, + items, 0, actions, 0); + if (ret) { + DRV_LOG(ERR, "Port %u failed to create HWS SQ miss flow rule for SQ %u, ret %d", + port_id, sqn, ret); + return ret; + } + return 0; } int @@ -7937,17 +7927,24 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev) ret = rte_flow_pick_transfer_proxy(port_id, &proxy_port_id, NULL); if (ret) { - DRV_LOG(ERR, "Unable to pick proxy port for port %u", port_id); + DRV_LOG(ERR, "Unable to pick transfer proxy port for port %u. Transfer proxy " + "port must be present to create default FDB jump rule.", + port_id); return ret; } proxy_dev = &rte_eth_devices[proxy_port_id]; proxy_priv = proxy_dev->data->dev_private; - if (!proxy_priv->dr_ctx) + if (!proxy_priv->dr_ctx) { + DRV_LOG(DEBUG, "Transfer proxy port (port %u) of port %u must be configured " + "for HWS to create default FDB jump rule. 
+			"not be created.",
+			proxy_port_id, port_id);
 		return 0;
+	}
 	if (!proxy_priv->hw_esw_zero_tbl) {
-		DRV_LOG(ERR, "port %u proxy port %u was configured but default"
-			" flow tables are not created",
-			port_id, proxy_port_id);
+		DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
+			"default flow tables were not created.",
+			proxy_port_id, port_id);
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 2603196933..a973cbc5e3 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -426,7 +426,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		mlx5_txq_release(dev, peer_queue);
 		return -rte_errno;
 	}
-	peer_info->qp_id = txq_ctrl->obj->sq->id;
+	peer_info->qp_id = mlx5_txq_get_sqn(txq_ctrl);
 	peer_info->vhca_id = priv->sh->cdev->config.hca_attr.vhca_id;
 	/* 1-to-1 mapping, only the first one is used. */
 	peer_info->peer_q = txq_ctrl->hairpin_conf.peers[0].queue;
@@ -818,7 +818,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port)
 	}
 	/* Pass TxQ's information to peer RxQ and try binding. */
 	cur.peer_q = rx_queue;
-	cur.qp_id = txq_ctrl->obj->sq->id;
+	cur.qp_id = mlx5_txq_get_sqn(txq_ctrl);
 	cur.vhca_id = priv->sh->cdev->config.hca_attr.vhca_id;
 	cur.tx_explicit = txq_ctrl->hairpin_conf.tx_explicit;
 	cur.manual_bind = txq_ctrl->hairpin_conf.manual_bind;
@@ -1300,8 +1300,6 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
 	int ret;
 
 	if (priv->sh->config.dv_esw_en && priv->master) {
-		if (mlx5_flow_hw_esw_create_mgr_sq_miss_flow(dev))
-			goto error;
 		if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS)
 			if (mlx5_flow_hw_create_tx_default_mreg_copy_flow(dev))
 				goto error;
@@ -1312,10 +1310,7 @@
 		if (!txq)
 			continue;
-		if (txq->is_hairpin)
-			queue = txq->obj->sq->id;
-		else
-			queue = txq->obj->sq_obj.sq->id;
+		queue = mlx5_txq_get_sqn(txq);
 		if ((priv->representor || priv->master) &&
 		    priv->sh->config.dv_esw_en) {
 			if (mlx5_flow_hw_esw_create_sq_miss_flow(dev, queue)) {
@@ -1325,9 +1320,15 @@
 		}
 		mlx5_txq_release(dev, i);
 	}
-	if ((priv->master || priv->representor) && priv->sh->config.dv_esw_en) {
-		if (mlx5_flow_hw_esw_create_default_jump_flow(dev))
-			goto error;
+	if (priv->sh->config.fdb_def_rule) {
+		if ((priv->master || priv->representor) && priv->sh->config.dv_esw_en) {
+			if (!mlx5_flow_hw_esw_create_default_jump_flow(dev))
+				priv->fdb_def_rule = 1;
+			else
+				goto error;
+		}
+	} else {
+		DRV_LOG(INFO, "port %u FDB default rule is disabled", dev->data->port_id);
 	}
 	return 0;
 error:
@@ -1393,14 +1394,18 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 		    txq_ctrl->hairpin_conf.tx_explicit == 0 &&
 		    txq_ctrl->hairpin_conf.peers[0].port ==
 		    priv->dev_data->port_id) {
-			ret = mlx5_ctrl_flow_source_queue(dev, i);
+			ret = mlx5_ctrl_flow_source_queue(dev,
+					mlx5_txq_get_sqn(txq_ctrl));
 			if (ret) {
 				mlx5_txq_release(dev, i);
 				goto error;
 			}
 		}
 		if (priv->sh->config.dv_esw_en) {
-			if (mlx5_flow_create_devx_sq_miss_flow(dev, i) == 0) {
+			uint32_t q = mlx5_txq_get_sqn(txq_ctrl);
+
+			if (mlx5_flow_create_devx_sq_miss_flow(dev, q) == 0) {
+				mlx5_txq_release(dev, i);
 				DRV_LOG(ERR,
 					"Port %u Tx queue %u SQ create representor devx default miss rule failed.",
 					dev->data->port_id, i);
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index e0fc1872fe..6471ebf59f 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -213,6 +213,7 @@ struct mlx5_txq_ctrl *mlx5_txq_get(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_txq_releasable(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_txq_verify(struct rte_eth_dev *dev);
+int mlx5_txq_get_sqn(struct mlx5_txq_ctrl *txq);
 void txq_alloc_elts(struct mlx5_txq_ctrl *txq_ctrl);
 void txq_free_elts(struct mlx5_txq_ctrl *txq_ctrl);
 uint64_t mlx5_get_tx_port_offloads(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 9150ced72d..7a0f1d61a5 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -27,6 +27,8 @@
 #include "mlx5_tx.h"
 #include "mlx5_rxtx.h"
 #include "mlx5_autoconf.h"
+#include "rte_pmd_mlx5.h"
+#include "mlx5_flow.h"
 
 /**
  * Allocate TX queue elements.
@@ -1274,6 +1276,51 @@ mlx5_txq_verify(struct rte_eth_dev *dev)
 	return ret;
 }
 
+int
+mlx5_txq_get_sqn(struct mlx5_txq_ctrl *txq)
+{
+	return txq->is_hairpin ? txq->obj->sq->id : txq->obj->sq_obj.sq->id;
+}
+
+int
+rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num)
+{
+	struct rte_eth_dev *dev;
+	struct mlx5_priv *priv;
+	uint32_t flow;
+
+	if (rte_eth_dev_is_valid_port(port_id) < 0) {
+		DRV_LOG(ERR, "There is no Ethernet device for port %u.",
+			port_id);
+		rte_errno = ENODEV;
+		return -rte_errno;
+	}
+	dev = &rte_eth_devices[port_id];
+	priv = dev->data->dev_private;
+	if ((!priv->representor && !priv->master) ||
+	    !priv->sh->config.dv_esw_en) {
+		DRV_LOG(ERR, "Port %u must be representor or master port in E-Switch mode.",
+			port_id);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (sq_num == 0) {
+		DRV_LOG(ERR, "Invalid SQ number.");
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	if (priv->sh->config.dv_flow_en == 2)
+		return mlx5_flow_hw_esw_create_sq_miss_flow(dev, sq_num);
+#endif
+	flow = mlx5_flow_create_devx_sq_miss_flow(dev, sq_num);
+	if (flow > 0)
+		return 0;
+	DRV_LOG(ERR, "Port %u failed to create default miss flow for SQ %u.",
+		port_id, sq_num);
+	return -rte_errno;
+}
+
 /**
  * Set the Tx queue dynamic timestamp (mask and offset)
  *
diff --git a/drivers/net/mlx5/rte_pmd_mlx5.h b/drivers/net/mlx5/rte_pmd_mlx5.h
index fbfdd9737b..d4caea5b20 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5.h
+++ b/drivers/net/mlx5/rte_pmd_mlx5.h
@@ -139,6 +139,23 @@ int rte_pmd_mlx5_external_rx_queue_id_unmap(uint16_t port_id,
 __rte_experimental
 int rte_pmd_mlx5_host_shaper_config(int port_id, uint8_t rate, uint32_t flags);
 
+/**
+ * Enable traffic for external SQ.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] sq_num
+ *   SQ HW number.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ *   Possible values for rte_errno:
+ *   - EINVAL - invalid sq_number or port type.
+ *   - ENODEV - there is no Ethernet device for this port id.
+ */
+__rte_experimental
+int rte_pmd_mlx5_external_sq_enable(uint16_t port_id, uint32_t sq_num);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map
index 9942de5079..848270da13 100644
--- a/drivers/net/mlx5/version.map
+++ b/drivers/net/mlx5/version.map
@@ -14,4 +14,5 @@ EXPERIMENTAL {
 	rte_pmd_mlx5_external_rx_queue_id_unmap;
 	# added in 22.07
 	rte_pmd_mlx5_host_shaper_config;
+	rte_pmd_mlx5_external_sq_enable;
 };
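
Usage note (illustrative, not part of the diff): with HW Steering enabled
via dv_flow_en=2, the default FDB jump rule controlled by this patch can be
disabled through the fdb_def_rule_en devarg, letting the application install
its own transfer rules in the root table. The PCI address below is a
placeholder:

    dpdk-testpmd -a <PCI_BDF>,dv_flow_en=2,fdb_def_rule_en=0 -- -i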
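A minimal sketch of calling the new PMD API from an application follows
(also illustrative: the helper name and error handling are assumptions, and
the SQ is expected to have been created through DevX beforehand on a
representor or master port in E-Switch mode):

#include <stdio.h>
#include <stdint.h>
#include <rte_errno.h>
#include <rte_pmd_mlx5.h>

/*
 * Enable the default miss flow for an externally created SQ so that
 * traffic sent through it is directed to the wire.
 */
static int
app_enable_external_sq(uint16_t port_id, uint32_t sq_num)
{
	int ret = rte_pmd_mlx5_external_sq_enable(port_id, sq_num);

	if (ret != 0)
		printf("port %u: enabling SQ %u failed: %s\n",
		       port_id, sq_num, rte_strerror(rte_errno));
	return ret;
}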