From patchwork Thu Nov 4 12:33:14 2021
X-Patchwork-Submitter: "Xueming(Steven) Li"
X-Patchwork-Id: 103755
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
CC: Lior Margalit, Slava Ovsiienko, Matan Azrad
Date: Thu, 4 Nov 2021 20:33:14 +0800
Message-ID: <20211104123320.1638915-9-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211104123320.1638915-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20211104123320.1638915-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v4 08/14] net/mlx5: move Rx queue reference count
List-Id: DPDK patches and discussions

The Rx queue reference count tracks references to the RQ object. To prepare
for shared Rx queue support, this patch moves the counter from rxq_ctrl to
the Rx queue private data.
Signed-off-by: Xueming Li
Acked-by: Slava Ovsiienko
---
 drivers/net/mlx5/mlx5_rx.h      |   8 +-
 drivers/net/mlx5/mlx5_rxq.c     | 169 +++++++++++++++++++++-----------
 drivers/net/mlx5/mlx5_trigger.c |  57 +++++------
 3 files changed, 142 insertions(+), 92 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index fa24f5cdf3a..eccfbf1108d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -149,7 +149,6 @@ enum mlx5_rxq_type {
 struct mlx5_rxq_ctrl {
 	struct mlx5_rxq_data rxq; /* Data path structure. */
 	LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
-	uint32_t refcnt; /* Reference counter. */
 	LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
 	struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared context. */
@@ -170,6 +169,7 @@ struct mlx5_rxq_ctrl {
 /* RX queue private data. */
 struct mlx5_rxq_priv {
 	uint16_t idx; /* Queue index. */
+	uint32_t refcnt; /* Reference counter. */
 	struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
 	LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
 	struct mlx5_priv *priv; /* Back pointer to private data. */
@@ -207,7 +207,11 @@ struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev,
 struct mlx5_rxq_ctrl *mlx5_rxq_hairpin_new
 	(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, uint16_t desc,
	 const struct rte_eth_hairpin_conf *hairpin_conf);
-struct mlx5_rxq_ctrl *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_priv *mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx);
+uint32_t mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_priv *mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_ctrl *mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx);
+struct mlx5_rxq_data *mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_verify(struct rte_eth_dev *dev);
 int rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 00df245a5c6..8071ddbd61c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -386,15 +386,13 @@ mlx5_get_rx_port_offloads(void)
 static int
 mlx5_rxq_releasable(struct rte_eth_dev *dev, uint16_t idx)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);

-	if (!(*priv->rxqs)[idx]) {
+	if (rxq == NULL) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
-	return (__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED) == 1);
+	return (__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED) == 1);
 }

 /* Fetches and drops all SW-owned and error CQEs to synchronize CQ. */
@@ -874,8 +872,8 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)

 	for (i = 0; i != n; ++i) {
 		/* This rxq obj must not be released in this function. */
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
-		struct mlx5_rxq_obj *rxq_obj = rxq_ctrl ? rxq_ctrl->obj : NULL;
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_obj *rxq_obj = rxq ? rxq->ctrl->obj : NULL;
 		int rc;

 		/* Skip queues that cannot request interrupts. */
@@ -885,11 +883,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
 			if (rte_intr_vec_list_index_set(intr_handle, i,
			   RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
 				return -rte_errno;
-			/* Decrease the rxq_ctrl's refcnt */
-			if (rxq_ctrl)
-				mlx5_rxq_release(dev, i);
 			continue;
 		}
+		mlx5_rxq_ref(dev, i);
 		if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
 			DRV_LOG(ERR,
 				"port %u too many Rx queues for interrupt"
@@ -954,7 +950,7 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
 		 * Need to access directly the queue to release the reference
 		 * kept in mlx5_rx_intr_vec_enable().
 		 */
-		mlx5_rxq_release(dev, i);
+		mlx5_rxq_deref(dev, i);
 	}
 free:
 	rte_intr_free_epoll_fd(intr_handle);
@@ -1003,19 +999,14 @@ mlx5_arm_cq(struct mlx5_rxq_data *rxq, int sq_n_rxq)
 int
 mlx5_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-
-	rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
-	if (!rxq_ctrl)
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
+	if (!rxq)
 		goto error;
-	if (rxq_ctrl->irq) {
-		if (!rxq_ctrl->obj) {
-			mlx5_rxq_release(dev, rx_queue_id);
+	if (rxq->ctrl->irq) {
+		if (!rxq->ctrl->obj)
 			goto error;
-		}
-		mlx5_arm_cq(&rxq_ctrl->rxq, rxq_ctrl->rxq.cq_arm_sn);
+		mlx5_arm_cq(&rxq->ctrl->rxq, rxq->ctrl->rxq.cq_arm_sn);
 	}
-	mlx5_rxq_release(dev, rx_queue_id);
 	return 0;
 error:
 	rte_errno = EINVAL;
@@ -1037,23 +1028,21 @@ int
 mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
 	int ret = 0;

-	rxq_ctrl = mlx5_rxq_get(dev, rx_queue_id);
-	if (!rxq_ctrl) {
+	if (!rxq) {
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	if (!rxq_ctrl->obj)
+	if (!rxq->ctrl->obj)
 		goto error;
-	if (rxq_ctrl->irq) {
-		ret = priv->obj_ops.rxq_event_get(rxq_ctrl->obj);
+	if (rxq->ctrl->irq) {
+		ret = priv->obj_ops.rxq_event_get(rxq->ctrl->obj);
 		if (ret < 0)
 			goto error;
-		rxq_ctrl->rxq.cq_arm_sn++;
+		rxq->ctrl->rxq.cq_arm_sn++;
 	}
-	mlx5_rxq_release(dev, rx_queue_id);
 	return 0;
 error:
 	/**
@@ -1064,12 +1053,9 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		rte_errno = errno;
 	else
 		rte_errno = EINVAL;
-	ret = rte_errno; /* Save rte_errno before cleanup. */
-	mlx5_rxq_release(dev, rx_queue_id);
-	if (ret != EAGAIN)
+	if (rte_errno != EAGAIN)
 		DRV_LOG(WARNING, "port %u unable to disable interrupt on Rx queue %d",
 			dev->data->port_id, rx_queue_id);
-	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
 }

@@ -1657,7 +1643,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.uar_lock_cq = &priv->sh->uar_lock_cq;
 #endif
 	tmpl->rxq.idx = idx;
-	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
+	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 error:
@@ -1711,11 +1697,53 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
 	tmpl->hairpin_conf = *hairpin_conf;
 	tmpl->rxq.idx = idx;
-	__atomic_fetch_add(&tmpl->refcnt, 1, __ATOMIC_RELAXED);
+	mlx5_rxq_ref(dev, idx);
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 }

+/**
+ * Increase Rx queue reference count.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue if it exists, NULL otherwise.
+ */
+struct mlx5_rxq_priv *
+mlx5_rxq_ref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	if (rxq != NULL)
+		__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+	return rxq;
+}
+
+/**
+ * Dereference a Rx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   Updated reference count.
+ */
+uint32_t
+mlx5_rxq_deref(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	if (rxq == NULL)
+		return 0;
+	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
+}
+
 /**
  * Get a Rx queue.
  *
@@ -1727,18 +1755,52 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
  * @return
  *   A pointer to the queue if it exists, NULL otherwise.
  */
-struct mlx5_rxq_ctrl *
+struct mlx5_rxq_priv *
 mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx];
-	struct mlx5_rxq_ctrl *rxq_ctrl = NULL;

-	if (rxq_data) {
-		rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-		__atomic_fetch_add(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED);
-	}
-	return rxq_ctrl;
+	if (priv->rxq_privs == NULL)
+		return NULL;
+	return (*priv->rxq_privs)[idx];
+}
+
+/**
+ * Get Rx queue shareable control.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue control if it exists, NULL otherwise.
+ */
+struct mlx5_rxq_ctrl *
+mlx5_rxq_ctrl_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	return rxq == NULL ? NULL : rxq->ctrl;
+}
+
+/**
+ * Get Rx queue shareable data.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param idx
+ *   RX queue index.
+ *
+ * @return
+ *   A pointer to the queue data if it exists, NULL otherwise.
+ */
+struct mlx5_rxq_data *
+mlx5_rxq_data_get(struct rte_eth_dev *dev, uint16_t idx)
+{
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+
+	return rxq == NULL ? NULL : &rxq->ctrl->rxq;
 }

 /**
@@ -1756,13 +1818,12 @@ int
 mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
-	struct mlx5_rxq_priv *rxq = (*priv->rxq_privs)[idx];
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, idx);
+	struct mlx5_rxq_ctrl *rxq_ctrl = rxq->ctrl;

 	if (priv->rxqs == NULL || (*priv->rxqs)[idx] == NULL)
 		return 0;
-	rxq_ctrl = container_of((*priv->rxqs)[idx], struct mlx5_rxq_ctrl, rxq);
-	if (__atomic_sub_fetch(&rxq_ctrl->refcnt, 1, __ATOMIC_RELAXED) > 1)
+	if (mlx5_rxq_deref(dev, idx) > 1)
 		return 1;
 	if (rxq_ctrl->obj) {
 		priv->obj_ops.rxq_obj_release(rxq_ctrl->obj);
@@ -1774,7 +1835,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
 		rxq_free_elts(rxq_ctrl);
 		dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
 	}
-	if (!__atomic_load_n(&rxq_ctrl->refcnt, __ATOMIC_RELAXED)) {
+	if (!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED)) {
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 		LIST_REMOVE(rxq, owner_entry);
@@ -1952,7 +2013,7 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 		return 1;
 	priv->obj_ops.ind_table_destroy(ind_tbl);
 	for (i = 0; i != ind_tbl->queues_n; ++i)
-		claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
+		claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
 	mlx5_free(ind_tbl);
 	return 0;
 }
@@ -2009,7 +2070,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 			       log2above(priv->config.ind_table_max_size);

 	for (i = 0; i != queues_n; ++i) {
-		if (!mlx5_rxq_get(dev, queues[i])) {
+		if (mlx5_rxq_ref(dev, queues[i]) == NULL) {
 			ret = -rte_errno;
 			goto error;
 		}
@@ -2022,7 +2083,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 error:
 	err = rte_errno;
 	for (j = 0; j < i; j++)
-		mlx5_rxq_release(dev, ind_tbl->queues[j]);
+		mlx5_rxq_deref(dev, ind_tbl->queues[j]);
 	rte_errno = err;
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
 		dev->data->port_id);
@@ -2118,7 +2179,7 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 			  bool standalone)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	unsigned int i, j;
+	unsigned int i;
 	int ret = 0, err;
 	const unsigned int n = rte_is_power_of_2(queues_n) ?
 			       log2above(queues_n) :
@@ -2138,15 +2199,11 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 	ret = priv->obj_ops.ind_table_modify(dev, n, queues, queues_n, ind_tbl);
 	if (ret)
 		goto error;
-	for (j = 0; j < ind_tbl->queues_n; j++)
-		mlx5_rxq_release(dev, ind_tbl->queues[j]);
 	ind_tbl->queues_n = queues_n;
 	ind_tbl->queues = queues;
 	return 0;
 error:
 	err = rte_errno;
-	for (j = 0; j < i; j++)
-		mlx5_rxq_release(dev, queues[j]);
 	rte_errno = err;
 	DRV_LOG(DEBUG, "Port %u cannot setup indirection table.",
 		dev->data->port_id);
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index ebeeae279e2..e5d74d275f8 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -201,10 +201,12 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
 	DRV_LOG(DEBUG, "Port %u device_attr.max_sge is %d.",
 		dev->data->port_id, priv->sh->device_attr.max_sge);
 	for (i = 0; i != priv->rxqs_n; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_ref(dev, i);
+		struct mlx5_rxq_ctrl *rxq_ctrl;

-		if (!rxq_ctrl)
+		if (rxq == NULL)
 			continue;
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD) {
 			/*
 			 * Pre-register the mempools. Regardless of whether
@@ -266,6 +268,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 	struct mlx5_devx_modify_sq_attr sq_attr = { 0 };
 	struct mlx5_devx_modify_rq_attr rq_attr = { 0 };
 	struct mlx5_txq_ctrl *txq_ctrl;
+	struct mlx5_rxq_priv *rxq;
 	struct mlx5_rxq_ctrl *rxq_ctrl;
 	struct mlx5_devx_obj *sq;
 	struct mlx5_devx_obj *rq;
@@ -310,9 +313,8 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 			return -rte_errno;
 		}
 		sq = txq_ctrl->obj->sq;
-		rxq_ctrl = mlx5_rxq_get(dev,
-					txq_ctrl->hairpin_conf.peers[0].queue);
-		if (!rxq_ctrl) {
+		rxq = mlx5_rxq_get(dev, txq_ctrl->hairpin_conf.peers[0].queue);
+		if (rxq == NULL) {
 			mlx5_txq_release(dev, i);
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u no rxq object found: %d",
@@ -320,6 +322,7 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 				txq_ctrl->hairpin_conf.peers[0].queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN ||
 		    rxq_ctrl->hairpin_conf.peers[0].queue != i) {
 			rte_errno = ENOMEM;
@@ -354,12 +357,10 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		rxq_ctrl->hairpin_status = 1;
 		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, i);
-		mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue);
 	}
 	return 0;
 error:
 	mlx5_txq_release(dev, i);
-	mlx5_rxq_release(dev, txq_ctrl->hairpin_conf.peers[0].queue);
 	return -rte_errno;
 }

@@ -432,27 +433,26 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		peer_info->manual_bind = txq_ctrl->hairpin_conf.manual_bind;
 		mlx5_txq_release(dev, peer_queue);
 	} else { /* Peer port used as ingress. */
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, peer_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;

-		rxq_ctrl = mlx5_rxq_get(dev, peer_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, peer_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d is not a hairpin Rxq",
 				dev->data->port_id, peer_queue);
-			mlx5_rxq_release(dev, peer_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, peer_queue);
-			mlx5_rxq_release(dev, peer_queue);
 			return -rte_errno;
 		}
 		peer_info->qp_id = rxq_ctrl->obj->rq->id;
@@ -460,7 +460,6 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue,
 		peer_info->peer_q = rxq_ctrl->hairpin_conf.peers[0].queue;
 		peer_info->tx_explicit = rxq_ctrl->hairpin_conf.tx_explicit;
 		peer_info->manual_bind = rxq_ctrl->hairpin_conf.manual_bind;
-		mlx5_rxq_release(dev, peer_queue);
 	}
 	return 0;
 }
@@ -559,34 +558,32 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		txq_ctrl->hairpin_status = 1;
 		mlx5_txq_release(dev, cur_queue);
 	} else {
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };

-		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->hairpin_status != 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already bound",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return 0;
 		}
 		if (peer_info->tx_explicit !=
@@ -594,7 +591,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer Tx rule mode"
 				" mismatch", dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (peer_info->manual_bind !=
@@ -602,7 +598,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u Rx queue %d and peer binding mode"
 				" mismatch", dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		rq_attr.state = MLX5_SQC_STATE_RDY;
@@ -612,7 +607,6 @@ mlx5_hairpin_queue_peer_bind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
 			rxq_ctrl->hairpin_status = 1;
-		mlx5_rxq_release(dev, cur_queue);
 	}
 	return ret;
 }
@@ -677,34 +671,32 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		txq_ctrl->hairpin_status = 0;
 		mlx5_txq_release(dev, cur_queue);
 	} else {
+		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, cur_queue);
 		struct mlx5_rxq_ctrl *rxq_ctrl;
 		struct mlx5_devx_modify_rq_attr rq_attr = { 0 };

-		rxq_ctrl = mlx5_rxq_get(dev, cur_queue);
-		if (rxq_ctrl == NULL) {
+		if (rxq == NULL) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "Failed to get port %u Rx queue %d",
 				dev->data->port_id, cur_queue);
 			return -rte_errno;
 		}
+		rxq_ctrl = rxq->ctrl;
 		if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
 			rte_errno = EINVAL;
 			DRV_LOG(ERR, "port %u queue %d not a hairpin Rxq",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		if (rxq_ctrl->hairpin_status == 0) {
 			DRV_LOG(DEBUG, "port %u Rx queue %d is already unbound",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return 0;
 		}
 		if (rxq_ctrl->obj == NULL || rxq_ctrl->obj->rq == NULL) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no Rxq object found: %d",
 				dev->data->port_id, cur_queue);
-			mlx5_rxq_release(dev, cur_queue);
 			return -rte_errno;
 		}
 		rq_attr.state = MLX5_SQC_STATE_RST;
@@ -712,7 +704,6 @@ mlx5_hairpin_queue_peer_unbind(struct rte_eth_dev *dev, uint16_t cur_queue,
 		ret = mlx5_devx_cmd_modify_rq(rxq_ctrl->obj->rq, &rq_attr);
 		if (ret == 0)
 			rxq_ctrl->hairpin_status = 0;
-		mlx5_rxq_release(dev, cur_queue);
 	}
 	return ret;
 }
@@ -1014,7 +1005,6 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *txq_ctrl;
-	struct mlx5_rxq_ctrl *rxq_ctrl;
 	uint32_t i;
 	uint16_t pp;
 	uint32_t bits[(RTE_MAX_ETHPORTS + 31) / 32] = {0};
@@ -1043,24 +1033,23 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 		}
 	} else {
 		for (i = 0; i < priv->rxqs_n; i++) {
-			rxq_ctrl = mlx5_rxq_get(dev, i);
-			if (!rxq_ctrl)
+			struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+			struct mlx5_rxq_ctrl *rxq_ctrl;
+
+			if (rxq == NULL)
 				continue;
-			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN) {
-				mlx5_rxq_release(dev, i);
+			rxq_ctrl = rxq->ctrl;
+			if (rxq_ctrl->type != MLX5_RXQ_TYPE_HAIRPIN)
 				continue;
-			}
 			pp = rxq_ctrl->hairpin_conf.peers[0].port;
 			if (pp >= RTE_MAX_ETHPORTS) {
 				rte_errno = ERANGE;
-				mlx5_rxq_release(dev, i);
 				DRV_LOG(ERR, "port %hu queue %u peer port "
					"out of range %hu",
					priv->dev_data->port_id, i, pp);
 				return -rte_errno;
 			}
 			bits[pp / 32] |= 1 << (pp % 32);
-			mlx5_rxq_release(dev, i);
 		}
 	}
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
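For readers following the refactor from outside the driver, the ref/deref discipline the patch introduces can be modeled in isolation. The sketch below uses simplified stand-in types (`rxq_priv`, `rxq_ctrl`, `obj_released` are illustrative, not driver structures), and for clarity it frees hardware objects when the count reaches zero, whereas `mlx5_rxq_release()` keeps extra state and uses a slightly different threshold:

```c
/*
 * Minimal model of per-queue reference counting with GCC __atomic
 * builtins, as used by mlx5_rxq_ref()/mlx5_rxq_deref(). All names here
 * are illustrative stand-ins, not the driver's own structures.
 */
#include <stddef.h>
#include <stdint.h>

struct rxq_ctrl {
	int obj_released; /* Stand-in for releasing Verbs/DevX objects. */
};

struct rxq_priv {
	uint32_t refcnt;       /* Reference counter, now per-queue. */
	struct rxq_ctrl *ctrl; /* Back pointer to shared control. */
};

/* Take a reference; returns the queue, or NULL if it does not exist. */
static struct rxq_priv *
rxq_ref(struct rxq_priv *rxq)
{
	if (rxq != NULL)
		__atomic_fetch_add(&rxq->refcnt, 1, __ATOMIC_RELAXED);
	return rxq;
}

/* Drop a reference; returns the updated count (0 once fully released). */
static uint32_t
rxq_deref(struct rxq_priv *rxq)
{
	if (rxq == NULL)
		return 0;
	return __atomic_sub_fetch(&rxq->refcnt, 1, __ATOMIC_RELAXED);
}

/* Release HW objects only when the last reference is dropped. */
static int
rxq_release(struct rxq_priv *rxq)
{
	if (rxq_deref(rxq) > 0)
		return 1; /* Still referenced elsewhere. */
	rxq->ctrl->obj_released = 1;
	return 0;
}
```

The point of the move is visible in the types: the counter lives in the per-port `rxq_priv` while `rxq_ctrl` stays shareable, so several ports can later own references to one control structure.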