From patchwork Thu Feb 24 23:25:11 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Baum <michaelba@nvidia.com>
X-Patchwork-Id: 108332
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum <michaelba@nvidia.com>
To: dev@dpdk.org
CC: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Subject: [PATCH v3 6/6] net/mlx5: support queue/RSS action for external RxQ
Date: Fri, 25 Feb 2022 01:25:11 +0200
Message-ID: <20220224232511.3238707-7-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220224232511.3238707-1-michaelba@nvidia.com>
References: <20220223184835.3061161-1-michaelba@nvidia.com>
 <20220224232511.3238707-1-michaelba@nvidia.com>
MIME-Version: 1.0
Add support for the queue/RSS action on external RxQs. In indirection
table creation, the queue index is taken from the external RxQ mapping
array.

This feature supports neither LRO nor Hairpin.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad
---
 doc/guides/nics/mlx5.rst               |   1 +
 doc/guides/rel_notes/release_22_03.rst |   1 +
 drivers/net/mlx5/mlx5.c                |   4 +
 drivers/net/mlx5/mlx5_devx.c           |  30 +++++--
 drivers/net/mlx5/mlx5_flow.c           |  29 +++++--
 drivers/net/mlx5/mlx5_rx.h             |  30 +++++++
 drivers/net/mlx5/mlx5_rxq.c            | 116 +++++++++++++++++++++++--
 7 files changed, 187 insertions(+), 24 deletions(-)
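For context, a minimal usage sketch from the application side (not part of
the patch). It assumes the mapping API introduced earlier in this series
(rte_pmd_mlx5_external_rx_queue_id_map() and MLX5_EXTERNAL_RX_QUEUE_ID_MIN
from rte_pmd_mlx5.h); hw_rxq_id stands in for the HW object ID of a queue
created outside the PMD, e.g. through DevX:

#include <rte_errno.h>
#include <rte_flow.h>
#include <rte_pmd_mlx5.h>

static int
steer_all_eth_to_external_rxq(uint16_t port_id, uint32_t hw_rxq_id)
{
	/* Any index in the external range is valid; use the first one. */
	uint16_t ext_idx = MLX5_EXTERNAL_RX_QUEUE_ID_MIN;
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = ext_idx };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error flow_error;
	int ret;

	/* Map the externally created HW queue to a DPDK queue index. */
	ret = rte_pmd_mlx5_external_rx_queue_id_map(port_id, ext_idx,
						    hw_rxq_id);
	if (ret < 0)
		return ret;
	/* The queue action may now reference the external index. */
	if (rte_flow_create(port_id, &attr, pattern, actions,
			    &flow_error) == NULL)
		return -rte_errno;
	return 0;
}

The RSS action works the same way, with every queue in the list taken from
the external range.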
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 7b04e9bac5..a5b3298f0c 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -38,6 +38,7 @@ Features
 - Multiple TX and RX queues.
 - Shared Rx queue.
 - Rx queue delay drop.
+- Support steering for external Rx queue created outside the PMD.
 - Support for scattered TX frames.
 - Advanced support for scattered Rx frames with tunable buffer attributes.
 - IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6 RSS on any number of queues.
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index e66548558c..a29e96c37c 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -164,6 +164,7 @@ New Features
   * Support ConnectX-7 capability to schedule traffic sending on timestamp
   * Added WQE based hardware steering support with ``rte_flow_async`` API.
+  * Support steering for external Rx queue created outside the PMD.
 
 * **Updated Wangxun ngbe driver.**
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 5ecca2dd1b..74841caaf9 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1912,6 +1912,10 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	if (ret)
 		DRV_LOG(WARNING, "port %u some Rx queue objects still remain",
 			dev->data->port_id);
+	ret = mlx5_ext_rxq_verify(dev);
+	if (ret)
+		DRV_LOG(WARNING, "Port %u some external RxQ still remain.",
+			dev->data->port_id);
 	ret = mlx5_rxq_verify(dev);
 	if (ret)
 		DRV_LOG(WARNING, "port %u some Rx queues still remain",
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index bcd2358165..af106bda50 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -580,13 +580,21 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 		return rqt_attr;
 	}
 	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queues[i]);
+		if (mlx5_is_external_rxq(dev, queues[i])) {
+			struct mlx5_external_rxq *ext_rxq =
+					mlx5_ext_rxq_get(dev, queues[i]);
 
-		MLX5_ASSERT(rxq != NULL);
-		if (rxq->ctrl->is_hairpin)
-			rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
-		else
-			rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
+			rqt_attr->rq_list[i] = ext_rxq->hw_id;
+		} else {
+			struct mlx5_rxq_priv *rxq =
+					mlx5_rxq_get(dev, queues[i]);
+
+			MLX5_ASSERT(rxq != NULL);
+			if (rxq->ctrl->is_hairpin)
+				rqt_attr->rq_list[i] = rxq->ctrl->obj->rq->id;
+			else
+				rqt_attr->rq_list[i] = rxq->devx_rq.rq->id;
+		}
 	}
 	MLX5_ASSERT(i > 0);
 	for (j = 0; i != rqt_n; ++j, ++i)
@@ -711,7 +719,13 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 	uint32_t i;
 
 	/* NULL queues designate drop queue. */
-	if (ind_tbl->queues != NULL) {
+	if (ind_tbl->queues == NULL) {
+		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
+	} else if (mlx5_is_external_rxq(dev, ind_tbl->queues[0])) {
+		/* External RxQ supports neither Hairpin nor LRO. */
+		is_hairpin = false;
+		lro = false;
+	} else {
 		is_hairpin = mlx5_rxq_is_hairpin(dev, ind_tbl->queues[0]);
 		/* Enable TIR LRO only if all the queues were configured for. */
 		for (i = 0; i < ind_tbl->queues_n; ++i) {
@@ -723,8 +737,6 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 				break;
 			}
 		}
-	} else {
-		is_hairpin = priv->drop_queue.rxq->ctrl->is_hairpin;
 	}
 	memset(tir_attr, 0, sizeof(*tir_attr));
 	tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
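The mlx5_devx.c hunk above is where the commit message's "queue index taken
from the mapping array" happens. Reduced to a sketch, each RQT slot is
filled as follows (the helper name is hypothetical; in the patch this logic
is inlined in mlx5_devx_ind_table_create_rqt_attr(), and the driver-internal
headers are assumed):

#include "mlx5_rx.h"

static uint32_t
rqt_slot_hw_id(struct rte_eth_dev *dev, uint16_t queue_idx)
{
	if (mlx5_is_external_rxq(dev, queue_idx)) {
		/* HW RQ ID recorded by the mapping API at map time. */
		return mlx5_ext_rxq_get(dev, queue_idx)->hw_id;
	} else {
		/* PMD-owned queue: ID of the hairpin or DevX RQ object. */
		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, queue_idx);

		return rxq->ctrl->is_hairpin ? rxq->ctrl->obj->rq->id :
					       rxq->devx_rq.rq->id;
	}
}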
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 09701a73c1..3875160708 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1743,6 +1743,12 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
 					  "can't have 2 fate actions in"
 					  " same flow");
+	if (attr->egress)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+					  "queue action not supported for egress.");
+	if (mlx5_is_external_rxq(dev, queue->index))
+		return 0;
 	if (!priv->rxqs_n)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
@@ -1757,11 +1763,6 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &queue->index,
 					  "queue is not configured");
-	if (attr->egress)
-		return rte_flow_error_set(error, ENOTSUP,
-					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
-					  "queue action not supported for "
-					  "egress");
 	return 0;
 }
 
@@ -1776,7 +1777,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
  *   Size of the @p queues array.
  * @param[out] error
  *   On error, filled with a textual error description.
- * @param[out] queue
+ * @param[out] queue_idx
  *   On error, filled with an offending queue index in @p queues array.
  *
  * @return
@@ -1789,17 +1790,27 @@ mlx5_validate_rss_queues(struct rte_eth_dev *dev,
 {
 	const struct mlx5_priv *priv = dev->data->dev_private;
 	bool is_hairpin = false;
+	bool is_ext_rss = false;
 	uint32_t i;
 
 	for (i = 0; i != queues_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev,
-								   queues[i]);
+		struct mlx5_rxq_ctrl *rxq_ctrl;
 
+		if (mlx5_is_external_rxq(dev, queues[i])) {
+			is_ext_rss = true;
+			continue;
+		}
+		if (is_ext_rss) {
+			*error = "Combining external and regular RSS queues is not supported";
+			*queue_idx = i;
+			return -ENOTSUP;
+		}
 		if (queues[i] >= priv->rxqs_n) {
 			*error = "queue index out of range";
 			*queue_idx = i;
 			return -EINVAL;
 		}
+		rxq_ctrl = mlx5_rxq_ctrl_get(dev, queues[i]);
 		if (rxq_ctrl == NULL) {
 			*error = "queue is not configured";
 			*queue_idx = i;
@@ -1894,7 +1905,7 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L4 partial RSS requested but L4 RSS"
 					  " type not specified");
-	if (!priv->rxqs_n)
+	if (!priv->rxqs_n && priv->ext_rxqs == NULL)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  NULL, "No Rx queues configured");
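A consequence of the validation above is that an RSS queue list must be
homogeneous: either all external or all regular indices. For illustration
(values are arbitrary; assumes MLX5_EXTERNAL_RX_QUEUE_ID_MIN from
rte_pmd_mlx5.h), the following action passes validation once both indices
are mapped, while appending a regular index such as 0 to the same list is
rejected with ENOTSUP:

#include <rte_common.h>
#include <rte_flow.h>
#include <rte_pmd_mlx5.h>

/* Two external indices, i.e. slots 0 and 1 of the mapping array. */
static const uint16_t ext_queues[] = {
	MLX5_EXTERNAL_RX_QUEUE_ID_MIN,
	MLX5_EXTERNAL_RX_QUEUE_ID_MIN + 1,
};

static const struct rte_flow_action_rss ext_rss = {
	.types = RTE_ETH_RSS_IP,
	.queue_num = RTE_DIM(ext_queues),
	.queue = ext_queues,
};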
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index aba05dffa7..acebe3348c 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -18,6 +18,7 @@
 
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
+#include "rte_pmd_mlx5.h"
 
 /* Support tunnel matching. */
 #define MLX5_FLOW_TUNNEL 10
@@ -217,8 +218,14 @@ uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_external_rxq *mlx5_ext_rxq_ref(struct rte_eth_dev *dev,
+					   uint16_t idx);
+uint32_t mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_external_rxq *mlx5_ext_rxq_get(struct rte_eth_dev *dev,
+					   uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_verify(struct rte_eth_dev *dev);
+int mlx5_ext_rxq_verify(struct rte_eth_dev *dev);
 int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
 int mlx5_ind_table_obj_verify(struct rte_eth_dev *dev);
 struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
@@ -643,4 +650,27 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
 	return n == n_ibv;
 }
 
+/**
+ * Check whether given RxQ is external.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param queue_idx
+ *   Rx queue index.
+ *
+ * @return
+ *   True if it is an external RxQ, otherwise false.
+ */
+static __rte_always_inline bool
+mlx5_is_external_rxq(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_external_rxq *rxq;
+
+	if (!priv->ext_rxqs || queue_idx < MLX5_EXTERNAL_RX_QUEUE_ID_MIN)
+		return false;
+	rxq = &priv->ext_rxqs[queue_idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+	return !!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED);
+}
+
 #endif /* RTE_PMD_MLX5_RX_H_ */
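mlx5_is_external_rxq() splits the 16-bit queue-index space: indices below
MLX5_EXTERNAL_RX_QUEUE_ID_MIN are regular PMD queues, and the rest address
slots of the ext_rxqs array, which count as external only while their
refcnt is non-zero (i.e. between map and unmap). A worked example of the
arithmetic, assuming the constant is defined as UINT16_MAX - 1000 + 1
(64536) in rte_pmd_mlx5.h as introduced earlier in this series:

#include <assert.h>
#include <stdint.h>

/* Assumed value of MLX5_EXTERNAL_RX_QUEUE_ID_MIN. */
#define EXT_MIN (UINT16_MAX - 1000 + 1)

int
main(void)
{
	assert(64535 < EXT_MIN);             /* highest regular-queue index */
	assert(64536 - EXT_MIN == 0);        /* first external slot */
	assert(UINT16_MAX - EXT_MIN == 999); /* last of the 1000 slots */
	return 0;
}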
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 889428f48a..ff293d9d56 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2084,6 +2084,65 @@ mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
 	return rxq == NULL ? NULL : &rxq->ctrl->rxq;
 }
 
+/**
+ * Increase an external Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External RX queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_external_rxq *
+mlx5_ext_rxq_ref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx);
+
+	__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+	return rxq;
+}
+
+/**
+ * Decrease an external Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External RX queue index.
+ *
+ * @return
+ *   Updated reference count.
+ */
+uint32_t
+mlx5_ext_rxq_deref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_external_rxq *rxq = mlx5_ext_rxq_get(dev, idx);
+
+	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+}
+
+/**
+ * Get an external Rx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   External Rx queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_external_rxq *
+mlx5_ext_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	MLX5_ASSERT(mlx5_is_external_rxq(dev, idx));
+	return &priv->ext_rxqs[idx - MLX5_EXTERNAL_RX_QUEUE_ID_MIN];
+}
+
 /**
  * Release a Rx queue.
  *
@@ -2167,6 +2226,37 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
 	return ret;
 }
 
+/**
+ * Verify the external Rx Queue list is empty.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   The number of objects not released.
+ */
+int
+mlx5_ext_rxq_verify(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_external_rxq *rxq;
+	uint32_t i;
+	int ret = 0;
+
+	if (priv->ext_rxqs == NULL)
+		return 0;
+
+	for (i = MLX5_EXTERNAL_RX_QUEUE_ID_MIN; i <= UINT16_MAX; ++i) {
+		rxq = mlx5_ext_rxq_get(dev, i);
+		if (rxq->refcnt < 2)
+			continue;
+		DRV_LOG(DEBUG, "Port %u external RxQ %u still referenced.",
+			dev->data->port_id, i);
+		++ret;
+	}
+	return ret;
+}
+
 /**
  * Check whether RxQ type is Hairpin.
  *
@@ -2182,8 +2272,11 @@ bool
 mlx5_rxq_is_hairpin(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
+	struct mlx5_rxq_ctrl *rxq_ctrl;
 
+	if (mlx5_is_external_rxq(dev, idx))
+		return false;
+	rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 	return (idx < priv->rxqs_n && rxq_ctrl != NULL && rxq_ctrl->is_hairpin);
 }
 
@@ -2358,9 +2451,16 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 
 	if (ref_qs)
 		for (i = 0; i != queues_n; ++i) {
-			if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
-				ret = -rte_errno;
-				goto error;
+			if (mlx5_is_external_rxq(dev, queues[i])) {
+				if (mlx5_ext_rxq_ref(dev, queues[i]) == NULL) {
+					ret = -rte_errno;
+					goto error;
+				}
+			} else {
+				if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
+					ret = -rte_errno;
+					goto error;
+				}
 			}
 		}
 	ret = priv->obj_ops.ind_table_new(dev, n, ind_tbl);
@@ -2371,8 +2471,12 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 error:
 	if (ref_qs) {
 		err = rte_errno;
-		for (j = 0; j < i; j++)
-			mlx5_rxq_deref(dev, queues[j]);
+		for (j = 0; j < i; j++) {
+			if (mlx5_is_external_rxq(dev, queues[j]))
+				mlx5_ext_rxq_deref(dev, queues[j]);
+			else
+				mlx5_rxq_deref(dev, queues[j]);
+		}
 		rte_errno = err;
 	}
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",