From patchwork Sat Oct 16 09:12:11 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 101885
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
To:
CC: , Lior Margalit , Matan Azrad , Viacheslav Ovsiienko
Date: Sat, 16 Oct 2021 17:12:11 +0800
Message-ID: <20211016091214.1831902-12-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20211016091214.1831902-1-xuemingl@nvidia.com>
References: <20210926111904.237736-1-xuemingl@nvidia.com> <20211016091214.1831902-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 11/13] net/mlx5: remove Rx queue data list from device

The Rx queue data list (priv->rxqs) can be replaced by the Rx queue
private list (priv->rxq_privs). Remove priv->rxqs and access the queue
data through the universal wrapper API instead. (A usage sketch of the
wrapper API is appended after the diff for reference.)

Signed-off-by: Xueming Li
---
 drivers/net/mlx5/linux/mlx5_verbs.c |  7 ++---
 drivers/net/mlx5/mlx5.c             | 10 +------
 drivers/net/mlx5/mlx5.h             |  1 -
 drivers/net/mlx5/mlx5_devx.c        | 13 +++++----
 drivers/net/mlx5/mlx5_ethdev.c      |  6 +---
 drivers/net/mlx5/mlx5_flow.c        | 45 +++++++++++++++--------------
 drivers/net/mlx5/mlx5_rss.c         |  6 ++--
 drivers/net/mlx5/mlx5_rx.c          | 19 +++++-------
 drivers/net/mlx5/mlx5_rx.h          |  9 +++---
 drivers/net/mlx5/mlx5_rxq.c         | 23 ++++++---------
 drivers/net/mlx5/mlx5_rxtx_vec.c    |  6 ++--
 drivers/net/mlx5/mlx5_stats.c       |  9 +++---
 drivers/net/mlx5/mlx5_trigger.c     |  2 +-
 13 files changed, 69 insertions(+), 87 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index a2a9b9c1f98..0e68a13208b 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -527,11 +527,10 @@ mlx5_ibv_ind_table_new(struct rte_eth_dev *dev, const unsigned int log_n,
 
         MLX5_ASSERT(ind_tbl);
         for (i = 0; i != ind_tbl->queues_n; ++i) {
-                struct mlx5_rxq_data *rxq = (*priv->rxqs)[ind_tbl->queues[i]];
-                struct mlx5_rxq_ctrl *rxq_ctrl =
-                                container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+                struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev,
+                                                         ind_tbl->queues[i]);
 
-                wq[i] = rxq_ctrl->obj->wq;
+                wq[i] = rxq->ctrl->obj->wq;
         }
         MLX5_ASSERT(i > 0);
         /* Finalise indirection table. */
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 477ad8c1bc9..6240f6f5dc6 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1578,20 +1578,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
         mlx5_mp_os_req_stop_rxtx(dev);
         /* Free the eCPRI flex parser resource. */
         mlx5_flex_parser_ecpri_release(dev);
-        if (priv->rxqs != NULL) {
+        if (priv->rxq_privs != NULL) {
                 /* XXX race condition if mlx5_rx_burst() is still running. */
                 rte_delay_us_sleep(1000);
                 for (i = 0; (i != priv->rxqs_n); ++i)
                         mlx5_rxq_release(dev, i);
                 priv->rxqs_n = 0;
-                priv->rxqs = NULL;
-        }
-        if (priv->representor) {
-                /* Each representor has a dedicated interrupts handler */
-                mlx5_free(dev->intr_handle);
-                dev->intr_handle = NULL;
-        }
-        if (priv->rxq_privs != NULL) {
                 mlx5_free(priv->rxq_privs);
                 priv->rxq_privs = NULL;
         }
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a2735cbb350..55612f777ea 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1406,7 +1406,6 @@ struct mlx5_priv {
         unsigned int rxqs_n; /* RX queues array size. */
         unsigned int txqs_n; /* TX queues array size. */
         struct mlx5_rxq_priv *(*rxq_privs)[]; /* RX queue non-shared data. */
-        struct mlx5_rxq_data *(*rxqs)[]; /* (Shared) RX queues. */
         struct mlx5_txq_data *(*txqs)[]; /* TX queues. */
         struct rte_mempool *mprq_mp; /* Mempool for Multi-Packet RQ. */
         struct rte_eth_rss_conf rss_conf; /* RSS configuration. */
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index f7f7526dbf6..b767470dea0 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -682,15 +682,16 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
         /* NULL queues designate drop queue. */
         if (ind_tbl->queues != NULL) {
-                struct mlx5_rxq_data *rxq_data =
-                        (*priv->rxqs)[ind_tbl->queues[0]];
-                struct mlx5_rxq_ctrl *rxq_ctrl =
-                        container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
-                rxq_obj_type = rxq_ctrl->type;
+                struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev,
+                                                         ind_tbl->queues[0]);
+                rxq_obj_type = rxq->ctrl->type;
 
                 /* Enable TIR LRO only if all the queues were configured for. */
                 for (i = 0; i < ind_tbl->queues_n; ++i) {
-                        if (!(*priv->rxqs)[ind_tbl->queues[i]]->lro) {
+                        struct mlx5_rxq_data *rxq_i =
+                                mlx5_rxq_data_get(dev, ind_tbl->queues[i]);
+
+                        if (rxq_i != NULL && !rxq_i->lro) {
                                 lro = false;
                                 break;
                         }
                 }
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index ee1189b929d..070ff149488 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -114,7 +114,6 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
                 rte_errno = ENOMEM;
                 return -rte_errno;
         }
-        priv->rxqs = (void *)dev->data->rx_queues;
         priv->txqs = (void *)dev->data->tx_queues;
         if (txqs_n != priv->txqs_n) {
                 DRV_LOG(INFO, "port %u Tx queues number update: %u -> %u",
@@ -171,11 +170,8 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev)
                 return -rte_errno;
         }
         for (i = 0, j = 0; i < rxqs_n; i++) {
-                struct mlx5_rxq_data *rxq_data;
-                struct mlx5_rxq_ctrl *rxq_ctrl;
+                struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-                rxq_data = (*priv->rxqs)[i];
-                rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
                 if (rxq_ctrl && rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
                         rss_queue_arr[j++] = i;
         }
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c10b9112593..49a74edd2e6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1166,10 +1166,11 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
                 return;
         for (i = 0; i != ind_tbl->queues_n; ++i) {
                 int idx = ind_tbl->queues[i];
-                struct mlx5_rxq_ctrl *rxq_ctrl =
-                        container_of((*priv->rxqs)[idx],
-                                     struct mlx5_rxq_ctrl, rxq);
+                struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
+                MLX5_ASSERT(rxq_ctrl != NULL);
+                if (rxq_ctrl == NULL)
+                        continue;
                 /*
                  * To support metadata register copy on Tx loopback,
                  * this must be always enabled (metadata may arive
@@ -1261,10 +1262,11 @@ flow_drv_rxq_flags_trim(struct rte_eth_dev *dev,
         MLX5_ASSERT(dev->data->dev_started);
         for (i = 0; i != ind_tbl->queues_n; ++i) {
                 int idx = ind_tbl->queues[i];
-                struct mlx5_rxq_ctrl *rxq_ctrl =
-                        container_of((*priv->rxqs)[idx],
-                                     struct mlx5_rxq_ctrl, rxq);
+                struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
+                MLX5_ASSERT(rxq_ctrl != NULL);
+                if (rxq_ctrl == NULL)
+                        continue;
                 if (priv->config.dv_flow_en &&
                     priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
                     mlx5_flow_ext_mreg_supported(dev)) {
@@ -1325,18 +1327,16 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev)
         unsigned int i;
 
         for (i = 0; i != priv->rxqs_n; ++i) {
-                struct mlx5_rxq_ctrl *rxq_ctrl;
+                struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
                 unsigned int j;
 
-                if (!(*priv->rxqs)[i])
+                if (rxq == NULL || rxq->ctrl == NULL)
                         continue;
-                rxq_ctrl = container_of((*priv->rxqs)[i],
-                                        struct mlx5_rxq_ctrl, rxq);
-                rxq_ctrl->flow_mark_n = 0;
-                rxq_ctrl->rxq.mark = 0;
+                rxq->ctrl->flow_mark_n = 0;
+                rxq->ctrl->rxq.mark = 0;
                 for (j = 0; j != MLX5_FLOW_TUNNEL; ++j)
-                        rxq_ctrl->flow_tunnels_n[j] = 0;
-                rxq_ctrl->rxq.tunnel = 0;
+                        rxq->ctrl->flow_tunnels_n[j] = 0;
+                rxq->ctrl->rxq.tunnel = 0;
         }
 }
 
@@ -1350,13 +1350,15 @@ void
 mlx5_flow_rxq_dynf_metadata_set(struct rte_eth_dev *dev)
 {
         struct mlx5_priv *priv = dev->data->dev_private;
-        struct mlx5_rxq_data *data;
         unsigned int i;
 
         for (i = 0; i != priv->rxqs_n; ++i) {
-                if (!(*priv->rxqs)[i])
+                struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+                struct mlx5_rxq_data *data;
+
+                if (rxq == NULL || rxq->ctrl == NULL)
                         continue;
-                data = (*priv->rxqs)[i];
+                data = &rxq->ctrl->rxq;
                 if (!rte_flow_dynf_metadata_avail()) {
                         data->dynf_meta = 0;
                         data->flow_meta_mask = 0;
@@ -1547,7 +1549,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
                                           RTE_FLOW_ERROR_TYPE_ACTION_CONF,
                                           &queue->index,
                                           "queue index out of range");
-        if (!(*priv->rxqs)[queue->index])
+        if (mlx5_rxq_get(dev, queue->index) == NULL)
                 return rte_flow_error_set(error, EINVAL,
                                           RTE_FLOW_ERROR_TYPE_ACTION_CONF,
                                           &queue->index,
@@ -1578,7 +1580,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
  *   0 on success, a negative errno code on error.
  */
 static int
-mlx5_validate_rss_queues(const struct rte_eth_dev *dev,
+mlx5_validate_rss_queues(struct rte_eth_dev *dev,
                          const uint16_t *queues, uint32_t queues_n,
                          const char **error, uint32_t *queue_idx)
 {
@@ -1594,13 +1596,12 @@ mlx5_validate_rss_queues(const struct rte_eth_dev *dev,
                         *queue_idx = i;
                         return -EINVAL;
                 }
-                if (!(*priv->rxqs)[queues[i]]) {
+                rxq_ctrl = mlx5_rxq_ctrl_get(dev, queues[i]);
+                if (rxq_ctrl == NULL) {
                         *error = "queue is not configured";
                         *queue_idx = i;
                         return -EINVAL;
                 }
-                rxq_ctrl = container_of((*priv->rxqs)[queues[i]],
-                                        struct mlx5_rxq_ctrl, rxq);
                 if (i == 0)
                         rxq_type = rxq_ctrl->type;
                 if (rxq_type != rxq_ctrl->type) {
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b..9ffc44b179f 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -65,9 +65,11 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
         priv->rss_conf.rss_hf = rss_conf->rss_hf;
         /* Enable the RSS hash in all Rx queues. */
         for (i = 0, idx = 0; idx != priv->rxqs_n; ++i) {
-                if (!(*priv->rxqs)[i])
+                struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
+
+                if (rxq == NULL || rxq->ctrl == NULL)
                         continue;
-                (*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
+                rxq->ctrl->rxq.rss_hash = !!rss_conf->rss_hf &&
                         !!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
                 ++idx;
         }
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 09de26c0d39..3017a8da20c 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -148,10 +148,8 @@ void
 mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
                   struct rte_eth_rxq_info *qinfo)
 {
-        struct mlx5_priv *priv = dev->data->dev_private;
-        struct mlx5_rxq_data *rxq = (*priv->rxqs)[rx_queue_id];
-        struct mlx5_rxq_ctrl *rxq_ctrl =
-                container_of(rxq, struct mlx5_rxq_ctrl, rxq);
+        struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, rx_queue_id);
+        struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id);
 
         if (!rxq)
                 return;
@@ -162,7 +160,10 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
         qinfo->conf.rx_thresh.wthresh = 0;
         qinfo->conf.rx_free_thresh = rxq->rq_repl_thresh;
         qinfo->conf.rx_drop_en = 1;
-        qinfo->conf.rx_deferred_start = rxq_ctrl ? 0 : 1;
+        if (rxq_ctrl == NULL || rxq_ctrl->obj == NULL)
+                qinfo->conf.rx_deferred_start = 0;
+        else
+                qinfo->conf.rx_deferred_start = 1;
         qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
         qinfo->scattered_rx = dev->data->scattered_rx;
         qinfo->nb_desc = mlx5_rxq_mprq_enabled(rxq) ?
@@ -191,10 +192,8 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
                        struct rte_eth_burst_mode *mode)
 {
         eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
-        struct mlx5_priv *priv = dev->data->dev_private;
-        struct mlx5_rxq_data *rxq;
+        struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
 
-        rxq = (*priv->rxqs)[rx_queue_id];
         if (!rxq) {
                 rte_errno = EINVAL;
                 return -rte_errno;
@@ -245,15 +244,13 @@ mlx5_rx_burst_mode_get(struct rte_eth_dev *dev,
 uint32_t
 mlx5_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
-        struct mlx5_priv *priv = dev->data->dev_private;
-        struct mlx5_rxq_data *rxq;
+        struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id);
 
         if (dev->rx_pkt_burst == NULL ||
             dev->rx_pkt_burst == removed_rx_burst) {
                 rte_errno = ENOTSUP;
                 return -rte_errno;
         }
-        rxq = (*priv->rxqs)[rx_queue_id];
         if (!rxq) {
                 rte_errno = EINVAL;
                 return -rte_errno;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 25f7fc2071a..161399c764d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -606,14 +606,13 @@ mlx5_mprq_enabled(struct rte_eth_dev *dev)
                 return 0;
         /* All the configured queues should be enabled. */
         for (i = 0; i < priv->rxqs_n; ++i) {
-                struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
-                struct mlx5_rxq_ctrl *rxq_ctrl = container_of
-                        (rxq, struct mlx5_rxq_ctrl, rxq);
+                struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
 
-                if (rxq == NULL || rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
+                if (rxq_ctrl == NULL ||
+                    rxq_ctrl->type != MLX5_RXQ_TYPE_STANDARD)
                         continue;
                 n_ibv++;
-                if (mlx5_rxq_mprq_enabled(rxq))
+                if (mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq))
                         ++n;
         }
         /* Multi-Packet RQ can't be partially configured. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b9d39a28d78..8eec9a4ed8d 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -729,7 +729,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
         }
         DRV_LOG(DEBUG, "port %u adding Rx queue %u to list",
                 dev->data->port_id, idx);
-        (*priv->rxqs)[idx] = &rxq_ctrl->rxq;
+        dev->data->rx_queues[idx] = &rxq_ctrl->rxq;
         return 0;
 }
 
@@ -811,7 +811,7 @@ mlx5_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t idx,
         }
         DRV_LOG(DEBUG, "port %u adding hairpin Rx queue %u to list",
                 dev->data->port_id, idx);
-        (*priv->rxqs)[idx] = &rxq_ctrl->rxq;
+        dev->data->rx_queues[idx] = &rxq_ctrl->rxq;
         return 0;
 }
 
@@ -1712,8 +1712,7 @@ mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx)
 {
         struct mlx5_priv *priv = dev->data->dev_private;
 
-        if (priv->rxq_privs == NULL)
-                return NULL;
+        MLX5_ASSERT(priv->rxq_privs != NULL);
         return (*priv->rxq_privs)[idx];
 }
 
@@ -1799,7 +1798,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
                 LIST_REMOVE(rxq, owner_entry);
                 LIST_REMOVE(rxq_ctrl, next);
                 mlx5_free(rxq_ctrl);
-                (*priv->rxqs)[idx] = NULL;
+                dev->data->rx_queues[idx] = NULL;
                 mlx5_free(rxq);
                 (*priv->rxq_privs)[idx] = NULL;
         }
@@ -1845,14 +1844,10 @@ enum mlx5_rxq_type
 mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
 {
         struct mlx5_priv *priv = dev->data->dev_private;
-        struct mlx5_rxq_ctrl *rxq_ctrl = NULL;
+        struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, idx);
 
-        if (idx < priv->rxqs_n && (*priv->rxqs)[idx]) {
-                rxq_ctrl = container_of((*priv->rxqs)[idx],
-                                        struct mlx5_rxq_ctrl,
-                                        rxq);
+        if (idx < priv->rxqs_n && rxq_ctrl != NULL)
                 return rxq_ctrl->type;
-        }
         return MLX5_RXQ_TYPE_UNDEFINED;
 }
 
@@ -2619,13 +2614,13 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev)
 {
         struct mlx5_priv *priv = dev->data->dev_private;
         struct mlx5_dev_ctx_shared *sh = priv->sh;
-        struct mlx5_rxq_data *data;
         unsigned int i;
 
         for (i = 0; i != priv->rxqs_n; ++i) {
-                if (!(*priv->rxqs)[i])
+                struct mlx5_rxq_data *data = mlx5_rxq_data_get(dev, i);
+
+                if (data == NULL)
                         continue;
-                data = (*priv->rxqs)[i];
                 data->sh = sh;
                 data->rt_timestamp = priv->config.rt_timestamp;
         }
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 511681841ca..6212ce8247d 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -578,11 +578,11 @@ mlx5_check_vec_rx_support(struct rte_eth_dev *dev)
                 return -ENOTSUP;
         /* All the configured queues should support. */
         for (i = 0; i < priv->rxqs_n; ++i) {
-                struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
+                struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i);
 
-                if (!rxq)
+                if (!rxq_data)
                         continue;
-                if (mlx5_rxq_check_vec_support(rxq) < 0)
+                if (mlx5_rxq_check_vec_support(rxq_data) < 0)
                         break;
         }
         if (i != priv->rxqs_n)
diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c
index ae2f5668a74..732775954ad 100644
--- a/drivers/net/mlx5/mlx5_stats.c
+++ b/drivers/net/mlx5/mlx5_stats.c
@@ -107,7 +107,7 @@ mlx5_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
         memset(&tmp, 0, sizeof(tmp));
         /* Add software counters. */
         for (i = 0; (i != priv->rxqs_n); ++i) {
-                struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
+                struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, i);
 
                 if (rxq == NULL)
                         continue;
@@ -181,10 +181,11 @@ mlx5_stats_reset(struct rte_eth_dev *dev)
         unsigned int i;
 
         for (i = 0; (i != priv->rxqs_n); ++i) {
-                if ((*priv->rxqs)[i] == NULL)
+                struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i);
+
+                if (rxq_data == NULL)
                         continue;
-                memset(&(*priv->rxqs)[i]->stats, 0,
-                       sizeof(struct mlx5_rxq_stats));
+                memset(&rxq_data->stats, 0, sizeof(struct mlx5_rxq_stats));
         }
         for (i = 0; (i != priv->txqs_n); ++i) {
                 if ((*priv->txqs)[i] == NULL)
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index b3188f510fb..1e865e74e39 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -176,7 +176,7 @@ mlx5_rxq_start(struct rte_eth_dev *dev)
                 if (!rxq_ctrl->obj) {
                         DRV_LOG(ERR,
                                 "Port %u Rx queue %u can't allocate resources.",
-                                dev->data->port_id, (*priv->rxqs)[i]->idx);
+                                dev->data->port_id, i);
                         rte_errno = ENOMEM;
                         goto error;
                 }
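
For reference only, not part of the patch: a minimal sketch of how a caller is
expected to reach Rx queue objects once priv->rxqs is gone, using the wrapper
API exercised in the diff above (mlx5_rxq_get(), mlx5_rxq_ctrl_get(),
mlx5_rxq_data_get()). The helper name example_walk_rxqs() and the include list
are illustrative assumptions, not code from this series.

/* Illustrative sketch, not part of this patch. */
#include "mlx5.h"
#include "mlx5_rx.h"

static void
example_walk_rxqs(struct rte_eth_dev *dev)
{
        struct mlx5_priv *priv = dev->data->dev_private;
        unsigned int i;

        for (i = 0; i != priv->rxqs_n; ++i) {
                /* Per-queue private (non-shared) data. */
                struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, i);
                /* Control structure, also reachable as rxq->ctrl. */
                struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, i);
                /* Data path structure, formerly stored in (*priv->rxqs)[i]. */
                struct mlx5_rxq_data *rxq_data = mlx5_rxq_data_get(dev, i);

                if (rxq == NULL || rxq_ctrl == NULL || rxq_data == NULL)
                        continue;
                /* ... operate on the queue here ... */
        }
}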