From patchwork Mon Jun 5 06:17:11 2023
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 128077
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing
Subject: [PATCH v8 01/14] net/cpfl: refine structures
Date: Mon, 5 Jun 2023 06:17:11 +0000
Message-Id: <20230605061724.88130-2-beilei.xing@intel.com>
In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com>
References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com>

From: Beilei Xing

This patch refines some structures (cpfl_rx_queue, cpfl_tx_queue and cpfl_vport) to support hairpin queues.
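For context, the refinement amounts to embedding the common idpf object as the first member of a new cpfl wrapper, so the same pointer stored in dev->data->rx_queues[]/tx_queues[] can be used by both layers. The standalone sketch below only illustrates that pattern; the members shown (queue_id, q_set) are simplified stand-ins, not the real idpf/cpfl definitions.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the common idpf queue object. */
struct idpf_rx_queue {
	unsigned int queue_id;
	bool q_set;
};

/* cpfl wrapper: the base object must stay the first member so the same
 * pointer can be viewed as either type. Hairpin-specific fields are added
 * to this wrapper by later patches in the series.
 */
struct cpfl_rx_queue {
	struct idpf_rx_queue base;
};

/* Code that only needs the common layer simply takes &cpfl_rxq->base. */
static void queue_start(void *opaque)
{
	struct cpfl_rx_queue *cpfl_rxq = opaque;
	struct idpf_rx_queue *rxq = &cpfl_rxq->base;

	rxq->q_set = true;
	printf("queue %u set\n", rxq->queue_id);
}

int main(void)
{
	struct cpfl_rx_queue *q = calloc(1, sizeof(*q));

	if (q == NULL)
		return 1;
	q->base.queue_id = 3;
	queue_start(q);	/* the wrapper pointer is what the ethdev stores */
	free(q);
	return 0;
}

Because base is the first member, code that dereferences only the idpf layer keeps working unchanged, which is why most of the diff below is mechanical pointer plumbing.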
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 85 +++++++----- drivers/net/cpfl/cpfl_ethdev.h | 6 +- drivers/net/cpfl/cpfl_rxtx.c | 175 +++++++++++++++++------- drivers/net/cpfl/cpfl_rxtx.h | 8 ++ drivers/net/cpfl/cpfl_rxtx_vec_common.h | 17 +-- 5 files changed, 196 insertions(+), 95 deletions(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 7528a14d05..e587155db6 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -124,7 +124,8 @@ static int cpfl_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct rte_eth_link new_link; unsigned int i; @@ -156,7 +157,8 @@ cpfl_dev_link_update(struct rte_eth_dev *dev, static int cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; dev_info->max_rx_queues = base->caps.max_rx_q; @@ -216,7 +218,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; /* mtu setting is forbidden if port is start */ if (dev->data->dev_started) { @@ -256,12 +259,12 @@ static uint64_t cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) { uint64_t mbuf_alloc_failed = 0; - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i = 0; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed, + cpfl_rxq = dev->data->rx_queues[i]; + mbuf_alloc_failed += __atomic_load_n(&cpfl_rxq->base.rx_stats.mbuf_alloc_failed, __ATOMIC_RELAXED); } @@ -271,8 +274,8 @@ cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) static int cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; int ret; @@ -305,20 +308,20 @@ cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) static void cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - __atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED); + cpfl_rxq = dev->data->rx_queues[i]; + __atomic_store_n(&cpfl_rxq->base.rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED); } } static int cpfl_dev_stats_reset(struct rte_eth_dev *dev) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; int ret; @@ -343,8 +346,8 @@ static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev) static int cpfl_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int 
n) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; unsigned int i; int ret; @@ -459,7 +462,8 @@ cpfl_rss_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t idx, shift; int ret = 0; @@ -498,7 +502,8 @@ cpfl_rss_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t idx, shift; int ret = 0; @@ -536,7 +541,8 @@ static int cpfl_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; int ret = 0; @@ -601,7 +607,8 @@ static int cpfl_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; int ret = 0; @@ -638,7 +645,8 @@ cpfl_rss_hash_conf_get(struct rte_eth_dev *dev, static int cpfl_dev_configure(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct rte_eth_conf *conf = &dev->data->dev_conf; struct idpf_adapter *base = vport->adapter; int ret; @@ -710,7 +718,8 @@ cpfl_dev_configure(struct rte_eth_dev *dev) static int cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; uint16_t nb_rx_queues = dev->data->nb_rx_queues; return idpf_vport_irq_map_config(vport, nb_rx_queues); @@ -719,14 +728,14 @@ cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) static int cpfl_start_queues(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; - struct idpf_tx_queue *txq; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; int err = 0; int i; for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL || txq->tx_deferred_start) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL || cpfl_txq->base.tx_deferred_start) continue; err = cpfl_tx_queue_start(dev, i); if (err != 0) { @@ -736,8 +745,8 @@ cpfl_start_queues(struct rte_eth_dev *dev) } for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - if (rxq == NULL || rxq->rx_deferred_start) + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq == NULL || cpfl_rxq->base.rx_deferred_start) continue; err = cpfl_rx_queue_start(dev, i); if (err != 0) { @@ -752,7 +761,8 @@ cpfl_start_queues(struct rte_eth_dev *dev) static int cpfl_dev_start(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = 
dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base); uint16_t num_allocated_vectors = base->caps.num_allocated_vectors; @@ -813,7 +823,8 @@ cpfl_dev_start(struct rte_eth_dev *dev) static int cpfl_dev_stop(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; if (dev->data->dev_started == 0) return 0; @@ -832,7 +843,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev) static int cpfl_dev_close(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); @@ -842,7 +854,7 @@ cpfl_dev_close(struct rte_eth_dev *dev) adapter->cur_vport_nb--; dev->data->dev_private = NULL; adapter->vports[vport->sw_idx] = NULL; - rte_free(vport); + rte_free(cpfl_vport); return 0; } @@ -1047,7 +1059,7 @@ cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id) int i; for (i = 0; i < adapter->cur_vport_nb; i++) { - vport = adapter->vports[i]; + vport = &adapter->vports[i]->base; if (vport->vport_id != vport_id) continue; else @@ -1275,7 +1287,8 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter) static int cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_vport_param *param = init_params; struct cpfl_adapter_ext *adapter = param->adapter; /* for sending create vport virtchnl msg prepare */ @@ -1300,7 +1313,7 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) goto err; } - adapter->vports[param->idx] = vport; + adapter->vports[param->idx] = cpfl_vport; adapter->cur_vports |= RTE_BIT32(param->devarg_id); adapter->cur_vport_nb++; @@ -1415,7 +1428,7 @@ cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, snprintf(name, sizeof(name), "cpfl_%s_vport_0", pci_dev->device.name); retval = rte_eth_dev_create(&pci_dev->device, name, - sizeof(struct idpf_vport), + sizeof(struct cpfl_vport), NULL, NULL, cpfl_dev_vport_init, &vport_param); if (retval != 0) @@ -1433,7 +1446,7 @@ cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, pci_dev->device.name, devargs.req_vports[i]); retval = rte_eth_dev_create(&pci_dev->device, name, - sizeof(struct idpf_vport), + sizeof(struct cpfl_vport), NULL, NULL, cpfl_dev_vport_init, &vport_param); if (retval != 0) diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 200dfcac02..81fe9ac4c3 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -69,13 +69,17 @@ struct cpfl_devargs { uint16_t req_vport_nb; }; +struct cpfl_vport { + struct idpf_vport base; +}; + struct cpfl_adapter_ext { TAILQ_ENTRY(cpfl_adapter_ext) next; struct idpf_adapter base; char name[CPFL_ADAPTER_NAME_LEN]; - struct idpf_vport **vports; + struct cpfl_vport **vports; uint16_t max_vport_nb; uint16_t cur_vports; /* bit mask of created vport */ diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 75021c3c54..04a51b8d15 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -128,7 +128,8 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev 
*dev, struct idpf_rx_queue *rxq, uint16_t nb_desc, unsigned int socket_id, struct rte_mempool *mp, uint8_t bufq_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct idpf_hw *hw = &base->hw; const struct rte_memzone *mz; @@ -220,15 +221,69 @@ cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq) rte_free(bufq); } +static void +cpfl_rx_queue_release(void *rxq) +{ + struct cpfl_rx_queue *cpfl_rxq = rxq; + struct idpf_rx_queue *q = NULL; + + if (cpfl_rxq == NULL) + return; + + q = &cpfl_rxq->base; + + /* Split queue */ + if (!q->adapter->is_rx_singleq) { + if (q->bufq2) + cpfl_rx_split_bufq_release(q->bufq2); + + if (q->bufq1) + cpfl_rx_split_bufq_release(q->bufq1); + + rte_free(cpfl_rxq); + return; + } + + /* Single queue */ + q->ops->release_mbufs(q); + rte_free(q->sw_ring); + rte_memzone_free(q->mz); + rte_free(cpfl_rxq); +} + +static void +cpfl_tx_queue_release(void *txq) +{ + struct cpfl_tx_queue *cpfl_txq = txq; + struct idpf_tx_queue *q = NULL; + + if (cpfl_txq == NULL) + return; + + q = &cpfl_txq->base; + + if (q->complq) { + rte_memzone_free(q->complq->mz); + rte_free(q->complq); + } + + q->ops->release_mbufs(q); + rte_free(q->sw_ring); + rte_memzone_free(q->mz); + rte_free(cpfl_txq); +} + int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct idpf_hw *hw = &base->hw; + struct cpfl_rx_queue *cpfl_rxq; const struct rte_memzone *mz; struct idpf_rx_queue *rxq; uint16_t rx_free_thresh; @@ -248,21 +303,23 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, /* Free memory if needed */ if (dev->data->rx_queues[queue_idx] != NULL) { - idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]); + cpfl_rx_queue_release(dev->data->rx_queues[queue_idx]); dev->data->rx_queues[queue_idx] = NULL; } /* Setup Rx queue */ - rxq = rte_zmalloc_socket("cpfl rxq", - sizeof(struct idpf_rx_queue), + cpfl_rxq = rte_zmalloc_socket("cpfl rxq", + sizeof(struct cpfl_rx_queue), RTE_CACHE_LINE_SIZE, socket_id); - if (rxq == NULL) { + if (cpfl_rxq == NULL) { PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure"); ret = -ENOMEM; goto err_rxq_alloc; } + rxq = &cpfl_rxq->base; + is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT); rxq->mp = mp; @@ -329,7 +386,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, } rxq->q_set = true; - dev->data->rx_queues[queue_idx] = rxq; + dev->data->rx_queues[queue_idx] = cpfl_rxq; return 0; @@ -349,7 +406,8 @@ cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; const struct rte_memzone *mz; struct idpf_tx_queue *cq; int ret; @@ -397,9 +455,11 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = 
dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t tx_rs_thresh, tx_free_thresh; + struct cpfl_tx_queue *cpfl_txq; struct idpf_hw *hw = &base->hw; const struct rte_memzone *mz; struct idpf_tx_queue *txq; @@ -419,21 +479,23 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, /* Free memory if needed. */ if (dev->data->tx_queues[queue_idx] != NULL) { - idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]); + cpfl_tx_queue_release(dev->data->tx_queues[queue_idx]); dev->data->tx_queues[queue_idx] = NULL; } /* Allocate the TX queue data structure. */ - txq = rte_zmalloc_socket("cpfl txq", - sizeof(struct idpf_tx_queue), + cpfl_txq = rte_zmalloc_socket("cpfl txq", + sizeof(struct cpfl_tx_queue), RTE_CACHE_LINE_SIZE, socket_id); - if (txq == NULL) { + if (cpfl_txq == NULL) { PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); ret = -ENOMEM; goto err_txq_alloc; } + txq = &cpfl_txq->base; + is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT); txq->nb_tx_desc = nb_desc; @@ -487,7 +549,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, queue_idx * vport->chunks_info.tx_qtail_spacing); txq->ops = &def_txq_ops; txq->q_set = true; - dev->data->tx_queues[queue_idx] = txq; + dev->data->tx_queues[queue_idx] = cpfl_txq; return 0; @@ -503,6 +565,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { + struct cpfl_rx_queue *cpfl_rxq; struct idpf_rx_queue *rxq; uint16_t max_pkt_len; uint32_t frame_size; @@ -511,7 +574,8 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) if (rx_queue_id >= dev->data->nb_rx_queues) return -EINVAL; - rxq = dev->data->rx_queues[rx_queue_id]; + cpfl_rxq = dev->data->rx_queues[rx_queue_id]; + rxq = &cpfl_rxq->base; if (rxq == NULL || !rxq->q_set) { PMD_DRV_LOG(ERR, "RX queue %u not available or setup", @@ -575,9 +639,10 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_rx_queue *rxq = - dev->data->rx_queues[rx_queue_id]; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq = dev->data->rx_queues[rx_queue_id]; + struct idpf_rx_queue *rxq = &cpfl_rxq->base; int err = 0; err = idpf_vc_rxq_config(vport, rxq); @@ -610,15 +675,15 @@ cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; if (tx_queue_id >= dev->data->nb_tx_queues) return -EINVAL; - txq = dev->data->tx_queues[tx_queue_id]; + cpfl_txq = dev->data->tx_queues[tx_queue_id]; /* Init the RX tail register. 
*/ - IDPF_PCI_REG_WRITE(txq->qtx_tail, 0); + IDPF_PCI_REG_WRITE(cpfl_txq->base.qtx_tail, 0); return 0; } @@ -626,12 +691,13 @@ cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id) int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_tx_queue *txq = + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_tx_queue *cpfl_txq = dev->data->tx_queues[tx_queue_id]; int err = 0; - err = idpf_vc_txq_config(vport, txq); + err = idpf_vc_txq_config(vport, &cpfl_txq->base); if (err != 0) { PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id); return err; @@ -650,7 +716,7 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on", tx_queue_id); } else { - txq->q_started = true; + cpfl_txq->base.q_started = true; dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; } @@ -661,13 +727,16 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq; struct idpf_rx_queue *rxq; int err; if (rx_queue_id >= dev->data->nb_rx_queues) return -EINVAL; + cpfl_rxq = dev->data->rx_queues[rx_queue_id]; err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", @@ -675,7 +744,7 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) return err; } - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = &cpfl_rxq->base; rxq->q_started = false; if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { rxq->ops->release_mbufs(rxq); @@ -693,13 +762,17 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_tx_queue *cpfl_txq; struct idpf_tx_queue *txq; int err; if (tx_queue_id >= dev->data->nb_tx_queues) return -EINVAL; + cpfl_txq = dev->data->tx_queues[tx_queue_id]; + err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", @@ -707,7 +780,7 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return err; } - txq = dev->data->tx_queues[tx_queue_id]; + txq = &cpfl_txq->base; txq->q_started = false; txq->ops->release_mbufs(txq); if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { @@ -724,25 +797,25 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid) { - idpf_qc_rx_queue_release(dev->data->rx_queues[qid]); + cpfl_rx_queue_release(dev->data->rx_queues[qid]); } void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid) { - idpf_qc_tx_queue_release(dev->data->tx_queues[qid]); + cpfl_tx_queue_release(dev->data->tx_queues[qid]); } void cpfl_stop_queues(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; - struct idpf_tx_queue *txq; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; int i; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - if (rxq 
== NULL) + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq == NULL) continue; if (cpfl_rx_queue_stop(dev, i) != 0) @@ -750,8 +823,8 @@ cpfl_stop_queues(struct rte_eth_dev *dev) } for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL) continue; if (cpfl_tx_queue_stop(dev, i) != 0) @@ -762,9 +835,10 @@ cpfl_stop_queues(struct rte_eth_dev *dev) void cpfl_set_rx_function(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; #ifdef RTE_ARCH_X86 - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i; if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH && @@ -790,8 +864,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - (void)idpf_qc_splitq_rx_vec_setup(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + (void)idpf_qc_splitq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT if (vport->rx_use_avx512) { @@ -810,8 +884,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) } else { if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - (void)idpf_qc_singleq_rx_vec_setup(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + (void)idpf_qc_singleq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT if (vport->rx_use_avx512) { @@ -860,10 +934,11 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) void cpfl_set_tx_function(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; #ifdef RTE_ARCH_X86 #ifdef CC_AVX512_SUPPORT - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; int i; #endif /* CC_AVX512_SUPPORT */ @@ -878,8 +953,8 @@ cpfl_set_tx_function(struct rte_eth_dev *dev) vport->tx_use_avx512 = true; if (vport->tx_use_avx512) { for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - idpf_qc_tx_vec_avx512_setup(txq); + cpfl_txq = dev->data->tx_queues[i]; + idpf_qc_tx_vec_avx512_setup(&cpfl_txq->base); } } } @@ -916,10 +991,10 @@ cpfl_set_tx_function(struct rte_eth_dev *dev) #ifdef CC_AVX512_SUPPORT if (vport->tx_use_avx512) { for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL) continue; - idpf_qc_tx_vec_avx512_setup(txq); + idpf_qc_tx_vec_avx512_setup(&cpfl_txq->base); } PMD_DRV_LOG(NOTICE, "Using Single AVX512 Vector Tx (port %d).", diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index fb267d38c8..bfb9ad97bd 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -23,6 +23,14 @@ #define CPFL_SUPPORT_CHAIN_NUM 5 +struct cpfl_rx_queue { + struct idpf_rx_queue base; +}; + +struct cpfl_tx_queue { + struct idpf_tx_queue base; +}; + int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h index 665418d27d..5690b17911 100644 --- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h +++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h @@ -76,15 +76,16 
@@ cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq) static inline int cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_rx_queue *rxq; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq; int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - default_ret = cpfl_rx_vec_queue_default(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + default_ret = cpfl_rx_vec_queue_default(&cpfl_rxq->base); if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { - splitq_ret = cpfl_rx_splitq_vec_default(rxq); + splitq_ret = cpfl_rx_splitq_vec_default(&cpfl_rxq->base); ret = splitq_ret && default_ret; } else { ret = default_ret; @@ -100,12 +101,12 @@ static inline int cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev) { int i; - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; int ret = 0; for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - ret = cpfl_tx_vec_queue_default(txq); + cpfl_txq = dev->data->tx_queues[i]; + ret = cpfl_tx_vec_queue_default(&cpfl_txq->base); if (ret == CPFL_SCALAR_PATH) return CPFL_SCALAR_PATH; } From patchwork Mon Jun 5 06:17:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128078 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2B44C42C31; Mon, 5 Jun 2023 08:42:32 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 562BD42D2C; Mon, 5 Jun 2023 08:42:21 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id AC6514003C for ; Mon, 5 Jun 2023 08:42:18 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947338; x=1717483338; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=82N8O95wtbEHqZYor1Qdr1XbDZ5VBYUiACvRUiM8voI=; b=h4daiBl+/LVGU1JY4jBB7smFcmSC2RJY7BmynZmCmv0Ejs4EZ3bFzmVI pzGzCPf7mQEcCcwzcBZzB9ezNwDueN8bWpMojzxq5KkD62yUwD81Cs4ST 31pqMqBRTQOKmixIQt6H8S5Jf1ARe7D2zBxe9gf3WjfrOOsN9mNeAhE1i 39Rr7ohSnFkVVfRhz2nnrIICRcYvUjh4iDP+YtHnf7AWIYGowVPskWGhr 45fGFLUVi13f75Zp+ajmSApTXL10WIoFRdncS1m/UPK1f+FawXUCJMLBZ 6tRK1PU59bXLWaN73SSV6O2SdnlkCGpL6V373pjLQ2MS6mAvFZngJNelZ A==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839620" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839620" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:18 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301024" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301024" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:16 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v8 02/14] common/idpf: support queue groups add/delete Date: Mon, 5 Jun 2023 06:17:12 +0000 Message-Id: 
<20230605061724.88130-3-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch adds queue group add/delete virtual channel support. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 66 ++++++++++++++++++++++ drivers/common/idpf/idpf_common_virtchnl.h | 9 +++ drivers/common/idpf/version.map | 2 + 3 files changed, 77 insertions(+) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index b713678634..a3fe55c897 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -359,6 +359,72 @@ idpf_vc_vport_destroy(struct idpf_vport *vport) return err; } +int +idpf_vc_queue_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *p2p_queue_grps_info, + uint8_t *p2p_queue_grps_out) +{ + struct idpf_adapter *adapter = vport->adapter; + struct idpf_cmd_info args; + int size, qg_info_size; + int err = -1; + + size = sizeof(*p2p_queue_grps_info) + + (p2p_queue_grps_info->qg_info.num_queue_groups - 1) * + sizeof(struct virtchnl2_queue_group_info); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_ADD_QUEUE_GROUPS; + args.in_args = (uint8_t *)p2p_queue_grps_info; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + if (err != 0) { + DRV_LOG(ERR, + "Failed to execute command of VIRTCHNL2_OP_ADD_QUEUE_GROUPS"); + return err; + } + + rte_memcpy(p2p_queue_grps_out, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE); + return 0; +} + +int idpf_vc_queue_grps_del(struct idpf_vport *vport, + uint16_t num_q_grps, + struct virtchnl2_queue_group_id *qg_ids) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_delete_queue_groups *vc_del_q_grps; + struct idpf_cmd_info args; + int size; + int err; + + size = sizeof(*vc_del_q_grps) + + (num_q_grps - 1) * sizeof(struct virtchnl2_queue_group_id); + vc_del_q_grps = rte_zmalloc("vc_del_q_grps", size, 0); + + vc_del_q_grps->vport_id = vport->vport_id; + vc_del_q_grps->num_queue_groups = num_q_grps; + memcpy(vc_del_q_grps->qg_ids, qg_ids, + num_q_grps * sizeof(struct virtchnl2_queue_group_id)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_DEL_QUEUE_GROUPS; + args.in_args = (uint8_t *)vc_del_q_grps; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DEL_QUEUE_GROUPS"); + + rte_free(vc_del_q_grps); + return err; +} + int idpf_vc_rss_key_set(struct idpf_vport *vport) { diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index c45295290e..58b16e1c5d 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -64,4 +64,13 @@ int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, __rte_internal int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 
*buff_count, struct idpf_dma_mem **buffs); +__rte_internal +int idpf_vc_queue_grps_del(struct idpf_vport *vport, + uint16_t num_q_grps, + struct virtchnl2_queue_group_id *qg_ids); +__rte_internal +int +idpf_vc_queue_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *ptp_queue_grps_info, + uint8_t *ptp_queue_grps_out); #endif /* _IDPF_COMMON_VIRTCHNL_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 70334a1b03..01d18f3f3f 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -43,6 +43,8 @@ INTERNAL { idpf_vc_irq_map_unmap_config; idpf_vc_one_msg_read; idpf_vc_ptype_info_query; + idpf_vc_queue_grps_add; + idpf_vc_queue_grps_del; idpf_vc_queue_switch; idpf_vc_queues_ena_dis; idpf_vc_rss_hash_get; From patchwork Mon Jun 5 06:17:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128079 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AE65F42C31; Mon, 5 Jun 2023 08:42:38 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4865442D35; Mon, 5 Jun 2023 08:42:22 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 3AB4942D12 for ; Mon, 5 Jun 2023 08:42:20 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947340; x=1717483340; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=9l5RakS5ufJgfDHn5377KDzpGV4wppBz3vvs7m4f8Jk=; b=UhxH+ZSqnJH78L9DJudMmp3oTevu6HZgVfppH8SWTdNJ2DGG6gOyQDR9 HAKvIVZqP6Q+PliKv4i2DqTJwRrpqRY2uQGVam0BTtcBQSxGu5wySMXAm fbrx8lOCEc7E1OOd4OfUBOqqbgI5UfZkU+x+j/5qGu390ePOXjNHopSCq cWAF0wbcnr14NupKWkejaCehiWKokEyzpz6CN+RgFSYmCjSqhMtbMSVvO YRJXO4NI+FvKwgUaeUdSTcVvvyYRV4kpQX1JL2167aKmxpQoTdKr6JdIc ODBcCVXUFqGVETYkYQSJNKi688HeeuBHvl0TdZBV/G+HPJ0zifB6/ELSR A==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839622" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839622" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:19 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301028" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301028" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:18 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v8 03/14] net/cpfl: add haipin queue group during vport init Date: Mon, 5 Jun 2023 06:17:13 +0000 Message-Id: <20230605061724.88130-4-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch adds haipin queue group during 
vport init. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 133 +++++++++++++++++++++++++++++++++ drivers/net/cpfl/cpfl_ethdev.h | 18 +++++ drivers/net/cpfl/cpfl_rxtx.h | 7 ++ 3 files changed, 158 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index e587155db6..c1273a7478 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -840,6 +840,20 @@ cpfl_dev_stop(struct rte_eth_dev *dev) return 0; } +static int +cpfl_p2p_queue_grps_del(struct idpf_vport *vport) +{ + struct virtchnl2_queue_group_id qg_ids[CPFL_P2P_NB_QUEUE_GRPS] = {0}; + int ret = 0; + + qg_ids[0].queue_group_id = CPFL_P2P_QUEUE_GRP_ID; + qg_ids[0].queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P; + ret = idpf_vc_queue_grps_del(vport, CPFL_P2P_NB_QUEUE_GRPS, qg_ids); + if (ret) + PMD_DRV_LOG(ERR, "Failed to delete p2p queue groups"); + return ret; +} + static int cpfl_dev_close(struct rte_eth_dev *dev) { @@ -848,7 +862,12 @@ cpfl_dev_close(struct rte_eth_dev *dev) struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); + + if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) + cpfl_p2p_queue_grps_del(vport); + idpf_vport_deinit(vport); + rte_free(cpfl_vport->p2p_q_chunks_info); adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id); adapter->cur_vport_nb--; @@ -1284,6 +1303,96 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter) return vport_idx; } +static int +cpfl_p2p_q_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *p2p_queue_grps_info, + uint8_t *p2p_q_vc_out_info) +{ + int ret; + + p2p_queue_grps_info->vport_id = vport->vport_id; + p2p_queue_grps_info->qg_info.num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS; + p2p_queue_grps_info->qg_info.groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES; + p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ; + p2p_queue_grps_info->qg_info.groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES; + p2p_queue_grps_info->qg_info.groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ; + p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID; + p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P; + p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight = 0; + + ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info, p2p_q_vc_out_info); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to add p2p queue groups."); + return ret; + } + + return ret; +} + +static int +cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport, + struct virtchnl2_add_queue_groups *p2p_q_vc_out_info) +{ + struct p2p_queue_chunks_info *p2p_q_chunks_info = cpfl_vport->p2p_q_chunks_info; + struct virtchnl2_queue_reg_chunks *vc_chunks_out; + int i, type; + + if (p2p_q_vc_out_info->qg_info.groups[0].qg_id.queue_group_type != + VIRTCHNL2_QUEUE_GROUP_P2P) { + PMD_DRV_LOG(ERR, "Add queue group response mismatch."); + return -EINVAL; + } + + vc_chunks_out = &p2p_q_vc_out_info->qg_info.groups[0].chunks; + + for (i = 0; i < vc_chunks_out->num_chunks; i++) { + type = vc_chunks_out->chunks[i].type; + switch (type) { + case VIRTCHNL2_QUEUE_TYPE_TX: + p2p_q_chunks_info->tx_start_qid = + 
vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->tx_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->tx_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_RX: + p2p_q_chunks_info->rx_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->rx_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->rx_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + p2p_q_chunks_info->tx_compl_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->tx_compl_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->tx_compl_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + p2p_q_chunks_info->rx_buf_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->rx_buf_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->rx_buf_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + default: + PMD_DRV_LOG(ERR, "Unsupported queue type"); + break; + } + } + + return 0; +} + static int cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) { @@ -1293,6 +1402,8 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) struct cpfl_adapter_ext *adapter = param->adapter; /* for sending create vport virtchnl msg prepare */ struct virtchnl2_create_vport create_vport_info; + struct virtchnl2_add_queue_groups p2p_queue_grps_info; + uint8_t p2p_q_vc_out_info[IDPF_DFLT_MBX_BUF_SIZE] = {0}; int ret = 0; dev->dev_ops = &cpfl_eth_dev_ops; @@ -1327,6 +1438,28 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr, &dev->data->mac_addrs[0]); + if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) { + memset(&p2p_queue_grps_info, 0, sizeof(p2p_queue_grps_info)); + ret = cpfl_p2p_q_grps_add(vport, &p2p_queue_grps_info, p2p_q_vc_out_info); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to add p2p queue group."); + return 0; + } + cpfl_vport->p2p_q_chunks_info = rte_zmalloc(NULL, + sizeof(struct p2p_queue_chunks_info), 0); + if (cpfl_vport->p2p_q_chunks_info == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate p2p queue info."); + cpfl_p2p_queue_grps_del(vport); + return 0; + } + ret = cpfl_p2p_queue_info_init(cpfl_vport, + (struct virtchnl2_add_queue_groups *)p2p_q_vc_out_info); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to init p2p queue info."); + cpfl_p2p_queue_grps_del(vport); + } + } + return 0; err_mac_addrs: diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 81fe9ac4c3..666d46a44a 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -56,6 +56,7 @@ /* Device IDs */ #define IDPF_DEV_ID_CPF 0x1453 +#define VIRTCHNL2_QUEUE_GROUP_P2P 0x100 struct cpfl_vport_param { struct cpfl_adapter_ext *adapter; @@ -69,8 +70,25 @@ struct cpfl_devargs { uint16_t req_vport_nb; }; +struct p2p_queue_chunks_info { + uint32_t tx_start_qid; + uint32_t rx_start_qid; + uint32_t tx_compl_start_qid; + uint32_t rx_buf_start_qid; + + uint64_t tx_qtail_start; + uint32_t tx_qtail_spacing; + uint64_t rx_qtail_start; + uint32_t rx_qtail_spacing; + uint64_t tx_compl_qtail_start; + uint32_t tx_compl_qtail_spacing; + uint64_t rx_buf_qtail_start; + uint32_t rx_buf_qtail_spacing; +}; + struct cpfl_vport { struct idpf_vport base; + struct 
p2p_queue_chunks_info *p2p_q_chunks_info; }; struct cpfl_adapter_ext { diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index bfb9ad97bd..1fe65778f0 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -13,6 +13,13 @@ #define CPFL_MIN_RING_DESC 32 #define CPFL_MAX_RING_DESC 4096 #define CPFL_DMA_MEM_ALIGN 4096 + +#define CPFL_MAX_P2P_NB_QUEUES 16 +#define CPFL_P2P_NB_RX_BUFQ 1 +#define CPFL_P2P_NB_TX_COMPLQ 1 +#define CPFL_P2P_NB_QUEUE_GRPS 1 +#define CPFL_P2P_QUEUE_GRP_ID 1 + /* Base address of the HW descriptor ring should be 128B aligned. */ #define CPFL_RING_BASE_ALIGN 128 From patchwork Mon Jun 5 06:17:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128080 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 98B5D42C31; Mon, 5 Jun 2023 08:42:45 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 49EAF42D39; Mon, 5 Jun 2023 08:42:24 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id A220542D32 for ; Mon, 5 Jun 2023 08:42:21 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947341; x=1717483341; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0J3kCWXQyej1pfHQj5Q0eZprYBPaCR3/PoNyi/pb6ZU=; b=CSrG70jHg3HrxitpwuyHOh2fxkIVbWJ5ArQjFz33PyHckWm2Ixhv2WL/ FSwNEDrGw0ECb3i9I3MKjiodrjInUaNGVMiemeDe8hMe3HL8BcPHYb5l1 ChlEL79aYws6QBZ7879kuDHMz0m/VWMUCHofm/8bqGa3BU8uiSKP5vILf brarPM4PJYvnNsXbCD+cSqgncpVhjX9hnsRCPXKBxoNGgvsGuYKv6zz2n E4SbGvDRuWKB99G+CTYil23jMiNtMKpOyyhROK01jFdOz81+cpEVRDjpl LPVdGGEfXDBVg3FhcoMKGdIW8l7Hm8hX/2f43vk4kqP+Elqi+KFSfd6I2 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839626" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839626" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:21 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301032" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301032" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:19 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v8 04/14] net/cpfl: support hairpin queue capbility get Date: Mon, 5 Jun 2023 06:17:14 +0000 Message-Id: <20230605061724.88130-5-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch adds hairpin_cap_get ops support. 
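As an illustration of how this new hook is consumed (application-side code, not part of the patch), the limits advertised by cpfl_hairpin_cap_get() are queried through the generic ethdev API. The sketch below assumes a port that has already been probed and keeps error handling minimal:

#include <stdio.h>
#include <rte_ethdev.h>

static int check_hairpin_caps(uint16_t port_id)
{
	struct rte_eth_hairpin_cap cap;
	int ret;

	/* Invokes the PMD's .hairpin_cap_get callback; the cpfl driver
	 * returns -ENOTSUP when the vport has no P2P queue group
	 * (p2p_q_chunks_info == NULL).
	 */
	ret = rte_eth_dev_hairpin_capability_get(port_id, &cap);
	if (ret != 0) {
		printf("port %u: no hairpin support (%d)\n", port_id, ret);
		return ret;
	}

	printf("port %u: max hairpin queues %u, Rx->Tx %u, Tx->Rx %u, max desc %u\n",
	       port_id, cap.max_nb_queues, cap.max_rx_2_tx,
	       cap.max_tx_2_rx, cap.max_nb_desc);
	return 0;
}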
Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 18 ++++++++++++++++++ drivers/net/cpfl/cpfl_rxtx.h | 3 +++ 2 files changed, 21 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index c1273a7478..40b4515539 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -154,6 +154,23 @@ cpfl_dev_link_update(struct rte_eth_dev *dev, return rte_eth_linkstatus_set(dev, &new_link); } +static int +cpfl_hairpin_cap_get(struct rte_eth_dev *dev, + struct rte_eth_hairpin_cap *cap) +{ + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + + if (cpfl_vport->p2p_q_chunks_info == NULL) + return -ENOTSUP; + + cap->max_nb_queues = CPFL_MAX_P2P_NB_QUEUES; + cap->max_rx_2_tx = CPFL_MAX_HAIRPINQ_RX_2_TX; + cap->max_tx_2_rx = CPFL_MAX_HAIRPINQ_TX_2_RX; + cap->max_nb_desc = CPFL_MAX_HAIRPINQ_NB_DESC; + + return 0; +} + static int cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { @@ -904,6 +921,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .xstats_get = cpfl_dev_xstats_get, .xstats_get_names = cpfl_dev_xstats_get_names, .xstats_reset = cpfl_dev_xstats_reset, + .hairpin_cap_get = cpfl_hairpin_cap_get, }; static int diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index 1fe65778f0..a4a164d462 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -14,6 +14,9 @@ #define CPFL_MAX_RING_DESC 4096 #define CPFL_DMA_MEM_ALIGN 4096 +#define CPFL_MAX_HAIRPINQ_RX_2_TX 1 +#define CPFL_MAX_HAIRPINQ_TX_2_RX 1 +#define CPFL_MAX_HAIRPINQ_NB_DESC 1024 #define CPFL_MAX_P2P_NB_QUEUES 16 #define CPFL_P2P_NB_RX_BUFQ 1 #define CPFL_P2P_NB_TX_COMPLQ 1 From patchwork Mon Jun 5 06:17:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128081 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 89B8F42C31; Mon, 5 Jun 2023 08:42:51 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 44AC742D38; Mon, 5 Jun 2023 08:42:26 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 993E342D38 for ; Mon, 5 Jun 2023 08:42:23 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947343; x=1717483343; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=iBIY9rZGgZkeDH+ES+KN1u0xwVGl6PWzqaQXnPE9Vhk=; b=n892VlJ5zW952wmbdwKZolIxiDGSWUH4UJfYKXfIjgxG/7EzxrTBGCev x3VtasoBMQoeMEwaHq+fjmqbaGRsV5p3alPWxaby1rbrw6BoXxt9YgJMc KD/da3xUtrJmUWYvA25P12wtHCCiM4Mnc0QkP7olKYyodT8w0LRteEwpF 94q/zUn0iXt//K6NnRmEX/wH2px9CjsjwVahaKTty0grcbtyAvm11HvCL n+YG6rJFMYRwTsvL6GqVBZQSTnXOCC3qg7K22e2/lRT+5Z8WKbUJKlzBX 8208+spGiRU79m3mv1f6CXM1cbYp0/C5n0KGuhzUbKHq/ve4emLsZrR6J g==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839632" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839632" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:23 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301035" 
X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301035" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:21 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v8 05/14] net/cpfl: support hairpin queue setup and release Date: Mon, 5 Jun 2023 06:17:15 +0000 Message-Id: <20230605061724.88130-6-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing Support hairpin Rx/Tx queue setup and release. Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 6 + drivers/net/cpfl/cpfl_ethdev.h | 11 + drivers/net/cpfl/cpfl_rxtx.c | 364 +++++++++++++++++++++++- drivers/net/cpfl/cpfl_rxtx.h | 36 +++ drivers/net/cpfl/cpfl_rxtx_vec_common.h | 4 + 5 files changed, 420 insertions(+), 1 deletion(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 40b4515539..b17c538ec2 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -879,6 +879,10 @@ cpfl_dev_close(struct rte_eth_dev *dev) struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); + if (cpfl_vport->p2p_mp) { + rte_mempool_free(cpfl_vport->p2p_mp); + cpfl_vport->p2p_mp = NULL; + } if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) cpfl_p2p_queue_grps_del(vport); @@ -922,6 +926,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .xstats_get_names = cpfl_dev_xstats_get_names, .xstats_reset = cpfl_dev_xstats_reset, .hairpin_cap_get = cpfl_hairpin_cap_get, + .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup, + .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup, }; static int diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 666d46a44a..2e42354f70 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -89,6 +89,17 @@ struct p2p_queue_chunks_info { struct cpfl_vport { struct idpf_vport base; struct p2p_queue_chunks_info *p2p_q_chunks_info; + + struct rte_mempool *p2p_mp; + + uint16_t nb_data_rxq; + uint16_t nb_data_txq; + uint16_t nb_p2p_rxq; + uint16_t nb_p2p_txq; + + struct idpf_rx_queue *p2p_rx_bufq; + struct idpf_tx_queue *p2p_tx_complq; + bool p2p_manual_bind; }; struct cpfl_adapter_ext { diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 04a51b8d15..90b408d1f4 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -10,6 +10,67 @@ #include "cpfl_rxtx.h" #include "cpfl_rxtx_vec_common.h" +static inline void +cpfl_tx_hairpin_descq_reset(struct idpf_tx_queue *txq) +{ + uint32_t i, size; + + if (!txq) { + PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL"); + return; + } + + size = txq->nb_tx_desc * CPFL_P2P_DESC_LEN; + for (i = 0; i < size; i++) + ((volatile char *)txq->desc_ring)[i] = 0; +} + +static inline void +cpfl_tx_hairpin_complq_reset(struct idpf_tx_queue *cq) +{ + uint32_t i, size; + + if (!cq) { + PMD_DRV_LOG(DEBUG, "Pointer to complq is NULL"); + return; + } + + size = 
cq->nb_tx_desc * CPFL_P2P_DESC_LEN; + for (i = 0; i < size; i++) + ((volatile char *)cq->compl_ring)[i] = 0; +} + +static inline void +cpfl_rx_hairpin_descq_reset(struct idpf_rx_queue *rxq) +{ + uint16_t len; + uint32_t i; + + if (!rxq) + return; + + len = rxq->nb_rx_desc; + for (i = 0; i < len * CPFL_P2P_DESC_LEN; i++) + ((volatile char *)rxq->rx_ring)[i] = 0; +} + +static inline void +cpfl_rx_hairpin_bufq_reset(struct idpf_rx_queue *rxbq) +{ + uint16_t len; + uint32_t i; + + if (!rxbq) + return; + + len = rxbq->nb_rx_desc; + for (i = 0; i < len * CPFL_P2P_DESC_LEN; i++) + ((volatile char *)rxbq->rx_ring)[i] = 0; + + rxbq->bufq1 = NULL; + rxbq->bufq2 = NULL; +} + static uint64_t cpfl_rx_offload_convert(uint64_t offload) { @@ -234,7 +295,10 @@ cpfl_rx_queue_release(void *rxq) /* Split queue */ if (!q->adapter->is_rx_singleq) { - if (q->bufq2) + /* the mz is shared between Tx/Rx hairpin, let Rx_release + * free the buf, q->bufq1->mz and q->mz. + */ + if (!cpfl_rxq->hairpin_info.hairpin_q && q->bufq2) cpfl_rx_split_bufq_release(q->bufq2); if (q->bufq1) @@ -385,6 +449,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, } } + cpfl_vport->nb_data_rxq++; rxq->q_set = true; dev->data->rx_queues[queue_idx] = cpfl_rxq; @@ -548,6 +613,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start + queue_idx * vport->chunks_info.tx_qtail_spacing); txq->ops = &def_txq_ops; + cpfl_vport->nb_data_txq++; txq->q_set = true; dev->data->tx_queues[queue_idx] = cpfl_txq; @@ -562,6 +628,300 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, return ret; } +static int +cpfl_rx_hairpin_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq, + uint16_t logic_qid, uint16_t nb_desc) +{ + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter = vport->adapter; + struct rte_mempool *mp; + char pool_name[RTE_MEMPOOL_NAMESIZE]; + + mp = cpfl_vport->p2p_mp; + if (!mp) { + snprintf(pool_name, RTE_MEMPOOL_NAMESIZE, "p2p_mb_pool_%u", + dev->data->port_id); + mp = rte_pktmbuf_pool_create(pool_name, CPFL_P2P_NB_MBUF * CPFL_MAX_P2P_NB_QUEUES, + CPFL_P2P_CACHE_SIZE, 0, CPFL_P2P_MBUF_SIZE, + dev->device->numa_node); + if (!mp) { + PMD_INIT_LOG(ERR, "Failed to allocate mbuf pool for p2p"); + return -ENOMEM; + } + cpfl_vport->p2p_mp = mp; + } + + bufq->mp = mp; + bufq->nb_rx_desc = nb_desc; + bufq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_buf_start_qid, + logic_qid); + bufq->port_id = dev->data->port_id; + bufq->adapter = adapter; + bufq->rx_buf_len = CPFL_P2P_MBUF_SIZE - RTE_PKTMBUF_HEADROOM; + + bufq->q_set = true; + bufq->ops = &def_rxq_ops; + + return 0; +} + +int +cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf) +{ + struct cpfl_vport *cpfl_vport = (struct cpfl_vport *)dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter_base = vport->adapter; + uint16_t logic_qid = cpfl_vport->nb_p2p_rxq; + struct cpfl_rxq_hairpin_info *hairpin_info; + struct cpfl_rx_queue *cpfl_rxq; + struct idpf_rx_queue *bufq1 = NULL; + struct idpf_rx_queue *rxq; + uint16_t peer_port, peer_q; + uint16_t qid; + int ret; + + if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { + PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin queue."); + return -EINVAL; + } + + if 
(conf->peer_count != 1) { + PMD_INIT_LOG(ERR, "Can't support Rx hairpin queue peer count %d", conf->peer_count); + return -EINVAL; + } + + peer_port = conf->peers[0].port; + peer_q = conf->peers[0].queue; + + if (nb_desc % CPFL_ALIGN_RING_DESC != 0 || + nb_desc > CPFL_MAX_RING_DESC || + nb_desc < CPFL_MIN_RING_DESC) { + PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is invalid", nb_desc); + return -EINVAL; + } + + /* Free memory if needed */ + if (dev->data->rx_queues[queue_idx]) { + cpfl_rx_queue_release(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; + } + + /* Setup Rx description queue */ + cpfl_rxq = rte_zmalloc_socket("cpfl hairpin rxq", + sizeof(struct cpfl_rx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!cpfl_rxq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure"); + return -ENOMEM; + } + + rxq = &cpfl_rxq->base; + hairpin_info = &cpfl_rxq->hairpin_info; + rxq->nb_rx_desc = nb_desc * 2; + rxq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_start_qid, logic_qid); + rxq->port_id = dev->data->port_id; + rxq->adapter = adapter_base; + rxq->rx_buf_len = CPFL_P2P_MBUF_SIZE - RTE_PKTMBUF_HEADROOM; + hairpin_info->hairpin_q = true; + hairpin_info->peer_txp = peer_port; + hairpin_info->peer_txq_id = peer_q; + + if (conf->manual_bind != 0) + cpfl_vport->p2p_manual_bind = true; + else + cpfl_vport->p2p_manual_bind = false; + + if (cpfl_vport->p2p_rx_bufq == NULL) { + bufq1 = rte_zmalloc_socket("hairpin rx bufq1", + sizeof(struct idpf_rx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!bufq1) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for hairpin Rx buffer queue 1."); + ret = -ENOMEM; + goto err_alloc_bufq1; + } + qid = 2 * logic_qid; + ret = cpfl_rx_hairpin_bufq_setup(dev, bufq1, qid, nb_desc); + if (ret) { + PMD_INIT_LOG(ERR, "Failed to setup hairpin Rx buffer queue 1"); + ret = -EINVAL; + goto err_setup_bufq1; + } + cpfl_vport->p2p_rx_bufq = bufq1; + } + + rxq->bufq1 = cpfl_vport->p2p_rx_bufq; + rxq->bufq2 = NULL; + + cpfl_vport->nb_p2p_rxq++; + rxq->q_set = true; + dev->data->rx_queues[queue_idx] = cpfl_rxq; + + return 0; + +err_setup_bufq1: + rte_mempool_free(cpfl_vport->p2p_mp); + rte_free(bufq1); +err_alloc_bufq1: + rte_free(cpfl_rxq); + + return ret; +} + +int +cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf) +{ + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; + + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter_base = vport->adapter; + uint16_t logic_qid = cpfl_vport->nb_p2p_txq; + struct cpfl_txq_hairpin_info *hairpin_info; + struct idpf_hw *hw = &adapter_base->hw; + struct cpfl_tx_queue *cpfl_txq; + struct idpf_tx_queue *txq, *cq; + const struct rte_memzone *mz; + uint32_t ring_size; + uint16_t peer_port, peer_q; + int ret; + + if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { + PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin queue."); + return -EINVAL; + } + + if (conf->peer_count != 1) { + PMD_INIT_LOG(ERR, "Can't support Tx hairpin queue peer count %d", conf->peer_count); + return -EINVAL; + } + + peer_port = conf->peers[0].port; + peer_q = conf->peers[0].queue; + + if (nb_desc % CPFL_ALIGN_RING_DESC != 0 || + nb_desc > CPFL_MAX_RING_DESC || + nb_desc < CPFL_MIN_RING_DESC) { + PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is invalid", + nb_desc); + return -EINVAL; + } + + /* Free memory if needed. 
*/ + if (dev->data->tx_queues[queue_idx]) { + cpfl_tx_queue_release(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; + } + + /* Allocate the TX queue data structure. */ + cpfl_txq = rte_zmalloc_socket("cpfl hairpin txq", + sizeof(struct cpfl_tx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!cpfl_txq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); + return -ENOMEM; + } + + txq = &cpfl_txq->base; + hairpin_info = &cpfl_txq->hairpin_info; + /* Txq ring length should be 2 times of Tx completion queue size. */ + txq->nb_tx_desc = nb_desc * 2; + txq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->tx_start_qid, logic_qid); + txq->port_id = dev->data->port_id; + hairpin_info->hairpin_q = true; + hairpin_info->peer_rxp = peer_port; + hairpin_info->peer_rxq_id = peer_q; + + if (conf->manual_bind != 0) + cpfl_vport->p2p_manual_bind = true; + else + cpfl_vport->p2p_manual_bind = false; + + /* Always Tx hairpin queue allocates Tx HW ring */ + ring_size = RTE_ALIGN(txq->nb_tx_desc * CPFL_P2P_DESC_LEN, + CPFL_DMA_MEM_ALIGN); + mz = rte_eth_dma_zone_reserve(dev, "hairpin_tx_ring", logic_qid, + ring_size + CPFL_P2P_RING_BUF, + CPFL_RING_BASE_ALIGN, + dev->device->numa_node); + if (!mz) { + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX"); + ret = -ENOMEM; + goto err_txq_mz_rsv; + } + + txq->tx_ring_phys_addr = mz->iova; + txq->desc_ring = mz->addr; + txq->mz = mz; + + cpfl_tx_hairpin_descq_reset(txq); + txq->qtx_tail = hw->hw_addr + + cpfl_hw_qtail_get(cpfl_vport->p2p_q_chunks_info->tx_qtail_start, + logic_qid, cpfl_vport->p2p_q_chunks_info->tx_qtail_spacing); + txq->ops = &def_txq_ops; + + if (cpfl_vport->p2p_tx_complq == NULL) { + cq = rte_zmalloc_socket("cpfl hairpin cq", + sizeof(struct idpf_tx_queue), + RTE_CACHE_LINE_SIZE, + dev->device->numa_node); + if (!cq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); + ret = -ENOMEM; + goto err_cq_alloc; + } + + cq->nb_tx_desc = nb_desc; + cq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->tx_compl_start_qid, + 0); + cq->port_id = dev->data->port_id; + + /* Tx completion queue always allocates the HW ring */ + ring_size = RTE_ALIGN(cq->nb_tx_desc * CPFL_P2P_DESC_LEN, + CPFL_DMA_MEM_ALIGN); + mz = rte_eth_dma_zone_reserve(dev, "hairpin_tx_compl_ring", logic_qid, + ring_size + CPFL_P2P_RING_BUF, + CPFL_RING_BASE_ALIGN, + dev->device->numa_node); + if (!mz) { + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX completion queue"); + ret = -ENOMEM; + goto err_cq_mz_rsv; + } + cq->tx_ring_phys_addr = mz->iova; + cq->compl_ring = mz->addr; + cq->mz = mz; + + cpfl_tx_hairpin_complq_reset(cq); + cpfl_vport->p2p_tx_complq = cq; + } + + txq->complq = cpfl_vport->p2p_tx_complq; + + cpfl_vport->nb_p2p_txq++; + txq->q_set = true; + dev->data->tx_queues[queue_idx] = cpfl_txq; + + return 0; + +err_cq_mz_rsv: + rte_free(cq); +err_cq_alloc: + cpfl_dma_zone_release(mz); +err_txq_mz_rsv: + rte_free(cpfl_txq); + return ret; +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { @@ -865,6 +1225,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q) + continue; (void)idpf_qc_splitq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index a4a164d462..06198d4aad 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ 
b/drivers/net/cpfl/cpfl_rxtx.h @@ -22,6 +22,11 @@ #define CPFL_P2P_NB_TX_COMPLQ 1 #define CPFL_P2P_NB_QUEUE_GRPS 1 #define CPFL_P2P_QUEUE_GRP_ID 1 +#define CPFL_P2P_DESC_LEN 16 +#define CPFL_P2P_NB_MBUF 4096 +#define CPFL_P2P_CACHE_SIZE 250 +#define CPFL_P2P_MBUF_SIZE 2048 +#define CPFL_P2P_RING_BUF 128 /* Base address of the HW descriptor ring should be 128B aligned. */ #define CPFL_RING_BASE_ALIGN 128 @@ -33,14 +38,40 @@ #define CPFL_SUPPORT_CHAIN_NUM 5 +struct cpfl_rxq_hairpin_info { + bool hairpin_q; /* if rx queue is a hairpin queue */ + uint16_t peer_txp; + uint16_t peer_txq_id; +}; + struct cpfl_rx_queue { struct idpf_rx_queue base; + struct cpfl_rxq_hairpin_info hairpin_info; +}; + +struct cpfl_txq_hairpin_info { + bool hairpin_q; /* if tx queue is a hairpin queue */ + uint16_t peer_rxp; + uint16_t peer_rxq_id; }; struct cpfl_tx_queue { struct idpf_tx_queue base; + struct cpfl_txq_hairpin_info hairpin_info; }; +static inline uint16_t +cpfl_hw_qid_get(uint16_t start_qid, uint16_t offset) +{ + return start_qid + offset; +} + +static inline uint64_t +cpfl_hw_qtail_get(uint64_t tail_start, uint16_t offset, uint64_t tail_spacing) +{ + return tail_start + offset * tail_spacing; +} + int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); @@ -59,4 +90,9 @@ void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid); void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid); void cpfl_set_rx_function(struct rte_eth_dev *dev); void cpfl_set_tx_function(struct rte_eth_dev *dev); +int cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, const struct rte_eth_hairpin_conf *conf); +int cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf); #endif /* _CPFL_RXTX_H_ */ diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h index 5690b17911..d8e9191196 100644 --- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h +++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h @@ -85,6 +85,8 @@ cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev) cpfl_rxq = dev->data->rx_queues[i]; default_ret = cpfl_rx_vec_queue_default(&cpfl_rxq->base); if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { + if (cpfl_rxq->hairpin_info.hairpin_q) + continue; splitq_ret = cpfl_rx_splitq_vec_default(&cpfl_rxq->base); ret = splitq_ret && default_ret; } else { @@ -106,6 +108,8 @@ cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev) for (i = 0; i < dev->data->nb_tx_queues; i++) { cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq->hairpin_info.hairpin_q) + continue; ret = cpfl_tx_vec_queue_default(&cpfl_txq->base); if (ret == CPFL_SCALAR_PATH) return CPFL_SCALAR_PATH; From patchwork Mon Jun 5 06:17:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128082 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E03C342C31; Mon, 5 Jun 2023 08:42:59 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 80B9E42D4A; Mon, 5 Jun 2023 08:42:27 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org 
(Postfix) with ESMTP id 06BEF42D38 for ; Mon, 5 Jun 2023 08:42:24 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947345; x=1717483345; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=CvMN8NOITqEoPij/b8a7zIGb2DvQk1boRsk9Q2X9GsI=; b=KuWD4x0L8q0atjWNMUynYcsXaMlWFCJib7kpVF246JykGHyAGeh0bX9R dx0PytUdUJUjHrSZqZyGmpUPW3d1yHGRRuT3gAu9vkQWG2PV9Nostpld6 uOoZSEHUO3T7Kj85tJ7RYnrt7vVCJcsgGD/teC6R3kMa4uJDyRElwOHl2 5d6uAxpsiS/E82GfdXO/hI7PLMsQlz5mCEi/YGnrFvbXOA9nSNa6uQs62 YIB5X0uBfCxeOs3s/4Bpj4Zrup/Pbc0MKVawNV4N7950MQdDiqoX9xKE1 0UKzSxgR9aEYeAAye9pe8pwBWAg8OpiVbNu/F6y+3ZKRjir/C+IDwA385 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839637" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839637" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:24 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301038" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301038" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:23 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v8 06/14] common/idpf: add queue config API Date: Mon, 5 Jun 2023 06:17:16 +0000 Message-Id: <20230605061724.88130-7-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports Rx/Tx queue configuration APIs. 
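For illustration only (not part of the diff below): a minimal sketch of how a caller can drive the new API for a single queue. The helper name example_cfg_one_rx_bufq is hypothetical; the struct fields and the idpf_vc_rxq_config_by_info() signature are the ones added here and used later in this series, and vport/bufq are assumed to be already initialized.

/* Sketch only: configure one Rx buffer queue through the *_by_info API.
 * Assumes idpf_common_device.h and idpf_common_virtchnl.h are included.
 */
static int
example_cfg_one_rx_bufq(struct idpf_vport *vport, struct idpf_rx_queue *bufq)
{
        struct virtchnl2_rxq_info rxq_info[1] = {0};

        rxq_info[0].type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
        rxq_info[0].queue_id = bufq->queue_id;
        rxq_info[0].ring_len = bufq->nb_rx_desc;
        rxq_info[0].dma_ring_addr = bufq->rx_ring_phys_addr;
        rxq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
        rxq_info[0].data_buffer_size = bufq->rx_buf_len;

        /* Sends VIRTCHNL2_OP_CONFIG_RX_QUEUES carrying exactly one qinfo entry. */
        return idpf_vc_rxq_config_by_info(vport, rxq_info, 1);
}
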
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 70 ++++++++++++++++++++++ drivers/common/idpf/idpf_common_virtchnl.h | 6 ++ drivers/common/idpf/version.map | 2 + 3 files changed, 78 insertions(+) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index a3fe55c897..211b44a88e 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -1050,6 +1050,41 @@ idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq) return err; } +int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info, + uint16_t num_qs) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_config_rx_queues *vc_rxqs = NULL; + struct idpf_cmd_info args; + int size, err, i; + + size = sizeof(*vc_rxqs) + (num_qs - 1) * + sizeof(struct virtchnl2_rxq_info); + vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0); + if (vc_rxqs == NULL) { + DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues"); + err = -ENOMEM; + return err; + } + vc_rxqs->vport_id = vport->vport_id; + vc_rxqs->num_qinfo = num_qs; + memcpy(vc_rxqs->qinfo, rxq_info, num_qs * sizeof(struct virtchnl2_rxq_info)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES; + args.in_args = (uint8_t *)vc_rxqs; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + rte_free(vc_rxqs); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES"); + + return err; +} + int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq) { @@ -1121,6 +1156,41 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq) return err; } +int +idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, + uint16_t num_qs) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_config_tx_queues *vc_txqs = NULL; + struct idpf_cmd_info args; + int size, err; + + size = sizeof(*vc_txqs) + (num_qs - 1) * sizeof(struct virtchnl2_txq_info); + vc_txqs = rte_zmalloc("cfg_txqs", size, 0); + if (vc_txqs == NULL) { + DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues"); + err = -ENOMEM; + return err; + } + vc_txqs->vport_id = vport->vport_id; + vc_txqs->num_qinfo = num_qs; + memcpy(vc_txqs->qinfo, txq_info, num_qs * sizeof(struct virtchnl2_txq_info)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES; + args.in_args = (uint8_t *)vc_txqs; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + rte_free(vc_txqs); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES"); + + return err; +} + int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, struct idpf_ctlq_msg *q_msg) diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index 58b16e1c5d..db83761a5e 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -65,6 +65,12 @@ __rte_internal int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 *buff_count, struct idpf_dma_mem **buffs); __rte_internal +int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info, + uint16_t 
num_qs); +__rte_internal +int idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, + uint16_t num_qs); +__rte_internal int idpf_vc_queue_grps_del(struct idpf_vport *vport, uint16_t num_q_grps, struct virtchnl2_queue_group_id *qg_ids); diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 01d18f3f3f..17e77884ce 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -54,8 +54,10 @@ INTERNAL { idpf_vc_rss_lut_get; idpf_vc_rss_lut_set; idpf_vc_rxq_config; + idpf_vc_rxq_config_by_info; idpf_vc_stats_query; idpf_vc_txq_config; + idpf_vc_txq_config_by_info; idpf_vc_vectors_alloc; idpf_vc_vectors_dealloc; idpf_vc_vport_create; From patchwork Mon Jun 5 06:17:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128083 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B35CE42C31; Mon, 5 Jun 2023 08:43:05 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7B52C42D53; Mon, 5 Jun 2023 08:42:28 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 6E68442D46 for ; Mon, 5 Jun 2023 08:42:27 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947347; x=1717483347; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/VNQepTd1nenXVqfSsTKtCJV/ZVYKFv5oqROR8nczLM=; b=V7KcvZ9yvJJanXhJnH+GvJ2otPnz0zeLWq5+sOfiTZXGxTqcN/Sevnla AABYpqQphVsVSbkfQQ3MpS/YME0YfONMvYSPqOU/gAt75IBa89HHqS7mw beclbwr9K7CTTeqym8M7A+wS/m1S5kWII8VO7KzIKWiqY7fSGdA2cIrDb C3n5kDTbc6qap8m2DJqoX0lD0Z5ES5nb4NVAI6TQ1roUwS4N+V/MYAyw1 VUFmKkV7WWOeht4yqmXCQi18p6sKwvCCfqGL1WzAKB8VwwSrKzj+pWPac YQgy3vQ8kZ8Ioa2dE4JXVxH1RHQXSZwGJ50V+Qj9oa0vBeDXhj20BgDAA w==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839642" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839642" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:26 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301041" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301041" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:24 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v8 07/14] net/cpfl: support hairpin queue configuration Date: Mon, 5 Jun 2023 06:17:17 +0000 Message-Id: <20230605061724.88130-8-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports Rx/Tx hairpin queue configuration. 
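For non-manual-bind hairpin queues the new helpers are driven from cpfl_start_queues() in a fixed order: per-queue hairpin Txq config, then the shared Tx completion queue, then the shared Rx buffer queue (after its memzone is bound to the peer Tx ring), and finally per-queue hairpin Rxq config plus init. A condensed sketch of that call order follows (not part of the diff; error handling dropped, example_hairpin_cfg_order is a hypothetical wrapper, every callee is added in this patch or earlier in the series):

/* Condensed from cpfl_start_queues(); all error handling omitted. */
static void
example_hairpin_cfg_order(struct rte_eth_dev *dev)
{
        struct cpfl_vport *cpfl_vport = dev->data->dev_private;
        struct idpf_vport *vport = &cpfl_vport->base;
        int i;

        /* 1. Configure each hairpin Tx queue (data queues come first). */
        for (i = cpfl_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++)
                cpfl_hairpin_txq_config(vport, dev->data->tx_queues[i]);

        /* 2. Configure the single shared Tx completion queue. */
        cpfl_hairpin_tx_complq_config(cpfl_vport);

        /* 3. Bind Rx rings to the peer Tx memzones, then configure the
         *    single shared Rx buffer queue.
         */
        cpfl_rxq_hairpin_mz_bind(dev);
        cpfl_hairpin_rx_bufq_config(cpfl_vport);

        /* 4. Configure and init each hairpin Rx queue. */
        for (i = cpfl_vport->nb_data_rxq; i < dev->data->nb_rx_queues; i++) {
                cpfl_hairpin_rxq_config(vport, dev->data->rx_queues[i]);
                cpfl_rx_queue_init(dev, i);
        }
}
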
Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 136 +++++++++++++++++++++++++++++++-- drivers/net/cpfl/cpfl_rxtx.c | 80 +++++++++++++++++++ drivers/net/cpfl/cpfl_rxtx.h | 7 ++ 3 files changed, 217 insertions(+), 6 deletions(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index b17c538ec2..a06def06d0 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -742,33 +742,157 @@ cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) return idpf_vport_irq_map_config(vport, nb_rx_queues); } +/* Update hairpin_info for dev's tx hairpin queue */ +static int +cpfl_txq_hairpin_info_update(struct rte_eth_dev *dev, uint16_t rx_port) +{ + struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private; + struct rte_eth_dev *peer_dev = &rte_eth_devices[rx_port]; + struct cpfl_vport *cpfl_rx_vport = peer_dev->data->dev_private; + struct cpfl_txq_hairpin_info *hairpin_info; + struct cpfl_tx_queue *cpfl_txq; + int i; + + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + hairpin_info = &cpfl_txq->hairpin_info; + if (hairpin_info->peer_rxp != rx_port) { + PMD_DRV_LOG(ERR, "port %d is not the peer port", rx_port); + return -EINVAL; + } + hairpin_info->peer_rxq_id = + cpfl_hw_qid_get(cpfl_rx_vport->p2p_q_chunks_info->rx_start_qid, + hairpin_info->peer_rxq_id - cpfl_rx_vport->nb_data_rxq); + } + + return 0; +} + +/* Bind Rx hairpin queue's memory zone to peer Tx hairpin queue's memory zone */ +static void +cpfl_rxq_hairpin_mz_bind(struct rte_eth_dev *dev) +{ + struct cpfl_vport *cpfl_rx_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_rx_vport->base; + struct idpf_adapter *adapter = vport->adapter; + struct idpf_hw *hw = &adapter->hw; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; + struct rte_eth_dev *peer_dev; + const struct rte_memzone *mz; + uint16_t peer_tx_port; + uint16_t peer_tx_qid; + int i; + + for (i = cpfl_rx_vport->nb_data_rxq; i < dev->data->nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + peer_tx_port = cpfl_rxq->hairpin_info.peer_txp; + peer_tx_qid = cpfl_rxq->hairpin_info.peer_txq_id; + peer_dev = &rte_eth_devices[peer_tx_port]; + cpfl_txq = peer_dev->data->tx_queues[peer_tx_qid]; + + /* bind rx queue */ + mz = cpfl_txq->base.mz; + cpfl_rxq->base.rx_ring_phys_addr = mz->iova; + cpfl_rxq->base.rx_ring = mz->addr; + cpfl_rxq->base.mz = mz; + + /* bind rx buffer queue */ + mz = cpfl_txq->base.complq->mz; + cpfl_rxq->base.bufq1->rx_ring_phys_addr = mz->iova; + cpfl_rxq->base.bufq1->rx_ring = mz->addr; + cpfl_rxq->base.bufq1->mz = mz; + cpfl_rxq->base.bufq1->qrx_tail = hw->hw_addr + + cpfl_hw_qtail_get(cpfl_rx_vport->p2p_q_chunks_info->rx_buf_qtail_start, + 0, cpfl_rx_vport->p2p_q_chunks_info->rx_buf_qtail_spacing); + } +} + static int cpfl_start_queues(struct rte_eth_dev *dev) { + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_rx_queue *cpfl_rxq; struct cpfl_tx_queue *cpfl_txq; + int update_flag = 0; int err = 0; int i; + /* For normal data queues, configure, init and enale Txq. + * For non-manual bind hairpin queues, configure Txq. 
+ */ for (i = 0; i < dev->data->nb_tx_queues; i++) { cpfl_txq = dev->data->tx_queues[i]; if (cpfl_txq == NULL || cpfl_txq->base.tx_deferred_start) continue; - err = cpfl_tx_queue_start(dev, i); + if (!cpfl_txq->hairpin_info.hairpin_q) { + err = cpfl_tx_queue_start(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i); + return err; + } + } else if (!cpfl_vport->p2p_manual_bind) { + if (update_flag == 0) { + err = cpfl_txq_hairpin_info_update(dev, + cpfl_txq->hairpin_info.peer_rxp); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to update Tx hairpin queue info"); + return err; + } + update_flag = 1; + } + err = cpfl_hairpin_txq_config(vport, cpfl_txq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u", i); + return err; + } + } + } + + /* For non-manual bind hairpin queues, configure Tx completion queue first.*/ + if (!cpfl_vport->p2p_manual_bind && cpfl_vport->p2p_tx_complq != NULL) { + err = cpfl_hairpin_tx_complq_config(cpfl_vport); if (err != 0) { - PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i); + PMD_DRV_LOG(ERR, "Fail to config Tx completion queue"); return err; } } + /* For non-manual bind hairpin queues, configure Rx buffer queue.*/ + if (!cpfl_vport->p2p_manual_bind && cpfl_vport->p2p_rx_bufq != NULL) { + cpfl_rxq_hairpin_mz_bind(dev); + err = cpfl_hairpin_rx_bufq_config(cpfl_vport); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to config Rx buffer queue"); + return err; + } + } + + /* For normal data queues, configure, init and enale Rxq. + * For non-manual bind hairpin queues, configure Rxq, and then init Rxq. + */ for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; if (cpfl_rxq == NULL || cpfl_rxq->base.rx_deferred_start) continue; - err = cpfl_rx_queue_start(dev, i); - if (err != 0) { - PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i); - return err; + if (!cpfl_rxq->hairpin_info.hairpin_q) { + err = cpfl_rx_queue_start(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i); + return err; + } + } else if (!cpfl_vport->p2p_manual_bind) { + err = cpfl_hairpin_rxq_config(vport, cpfl_rxq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Rx queue %u", i); + return err; + } + err = cpfl_rx_queue_init(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to init hairpin Rx queue %u", i); + return err; + } } } diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 90b408d1f4..9408c6e1a4 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -922,6 +922,86 @@ cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, return ret; } +int +cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport) +{ + struct idpf_rx_queue *rx_bufq = cpfl_vport->p2p_rx_bufq; + struct virtchnl2_rxq_info rxq_info[1] = {0}; + + rxq_info[0].type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; + rxq_info[0].queue_id = rx_bufq->queue_id; + rxq_info[0].ring_len = rx_bufq->nb_rx_desc; + rxq_info[0].dma_ring_addr = rx_bufq->rx_ring_phys_addr; + rxq_info[0].desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M; + rxq_info[0].rx_buffer_low_watermark = CPFL_RXBUF_LOW_WATERMARK; + rxq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + rxq_info[0].data_buffer_size = rx_bufq->rx_buf_len; + rxq_info[0].buffer_notif_stride = CPFL_RX_BUF_STRIDE; + + return idpf_vc_rxq_config_by_info(&cpfl_vport->base, rxq_info, 1); +} + +int +cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq) +{ + struct virtchnl2_rxq_info rxq_info[1] = {0}; + struct idpf_rx_queue 
*rxq = &cpfl_rxq->base; + + rxq_info[0].type = VIRTCHNL2_QUEUE_TYPE_RX; + rxq_info[0].queue_id = rxq->queue_id; + rxq_info[0].ring_len = rxq->nb_rx_desc; + rxq_info[0].dma_ring_addr = rxq->rx_ring_phys_addr; + rxq_info[0].rx_bufq1_id = rxq->bufq1->queue_id; + rxq_info[0].max_pkt_size = vport->max_pkt_len; + rxq_info[0].desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M; + rxq_info[0].qflags |= VIRTCHNL2_RX_DESC_SIZE_16BYTE; + + rxq_info[0].data_buffer_size = rxq->rx_buf_len; + rxq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + rxq_info[0].rx_buffer_low_watermark = CPFL_RXBUF_LOW_WATERMARK; + + PMD_DRV_LOG(NOTICE, "hairpin: vport %u, Rxq id 0x%x", + vport->vport_id, rxq_info[0].queue_id); + + return idpf_vc_rxq_config_by_info(vport, rxq_info, 1); +} + +int +cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport) +{ + struct idpf_tx_queue *tx_complq = cpfl_vport->p2p_tx_complq; + struct virtchnl2_txq_info txq_info[1] = {0}; + + txq_info[0].dma_ring_addr = tx_complq->tx_ring_phys_addr; + txq_info[0].type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + txq_info[0].queue_id = tx_complq->queue_id; + txq_info[0].ring_len = tx_complq->nb_tx_desc; + txq_info[0].peer_rx_queue_id = cpfl_vport->p2p_rx_bufq->queue_id; + txq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + txq_info[0].sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW; + + return idpf_vc_txq_config_by_info(&cpfl_vport->base, txq_info, 1); +} + +int +cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq) +{ + struct idpf_tx_queue *txq = &cpfl_txq->base; + struct virtchnl2_txq_info txq_info[1] = {0}; + + txq_info[0].dma_ring_addr = txq->tx_ring_phys_addr; + txq_info[0].type = VIRTCHNL2_QUEUE_TYPE_TX; + txq_info[0].queue_id = txq->queue_id; + txq_info[0].ring_len = txq->nb_tx_desc; + txq_info[0].tx_compl_queue_id = txq->complq->queue_id; + txq_info[0].relative_queue_id = txq->queue_id; + txq_info[0].peer_rx_queue_id = cpfl_txq->hairpin_info.peer_rxq_id; + txq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + txq_info[0].sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW; + + return idpf_vc_txq_config_by_info(vport, txq_info, 1); +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index 06198d4aad..872ebc1bfd 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -32,12 +32,15 @@ #define CPFL_RING_BASE_ALIGN 128 #define CPFL_DEFAULT_RX_FREE_THRESH 32 +#define CPFL_RXBUF_LOW_WATERMARK 64 #define CPFL_DEFAULT_TX_RS_THRESH 32 #define CPFL_DEFAULT_TX_FREE_THRESH 32 #define CPFL_SUPPORT_CHAIN_NUM 5 +#define CPFL_RX_BUF_STRIDE 64 + struct cpfl_rxq_hairpin_info { bool hairpin_q; /* if rx queue is a hairpin queue */ uint16_t peer_txp; @@ -95,4 +98,8 @@ int cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, int cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, const struct rte_eth_hairpin_conf *conf); +int cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport); +int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq); +int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport); +int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq); #endif /* _CPFL_RXTX_H_ */ From patchwork Mon Jun 5 06:17:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128084 X-Patchwork-Delegate: qi.z.zhang@intel.com 
Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AF1C642C31; Mon, 5 Jun 2023 08:43:11 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 90E6442D59; Mon, 5 Jun 2023 08:42:29 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 6D2B342D51 for ; Mon, 5 Jun 2023 08:42:28 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947348; x=1717483348; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=a2LfBNxAPScF49xIh5G9q18lGBZR3G+zBXOEzuYHjJE=; b=SgWz+Mnp2XAM7oGGEuq8eyU4lLF7ijDZxjUYxh/V7702i0fcmsFtdtg5 /ZEKUZx5rw0o2sI0P2EwmwJ4DwkMFVerb18v5xNZGtDh3CmRDsTBZZK4D sWf68DaxuDuzRQ8aN691z9I7JMM1AyEfnbe3giSLr2KOe2FfygVtWtx/g 8Fxvx3X4YLfZhbRqNMSOeEUY5iEEWC04tWJNyyl2DOc5M61MIjHio8Ocl QYQFlmZp6Pwjzxy55FNr6bN0gNlqcVJazwntosjju7p/oVwRM/lltebkk CfYM5sLN2nvVz4nzN/ixSSsuT0xWK7pEMJAgsbvfhbeZbJokiDtZvIkEH A==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839648" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839648" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:27 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301045" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301045" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:26 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v8 08/14] common/idpf: add switch queue API Date: Mon, 5 Jun 2023 06:17:18 +0000 Message-Id: <20230605061724.88130-9-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch adds idpf_vc_ena_dis_one_queue API. 
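Exporting the previously static helper lets net/cpfl toggle a single hairpin queue by absolute queue id and virtchnl queue type. A hedged usage sketch (example_switch_complq is a hypothetical wrapper; the real wrappers arrive in the next patch of this series):

/* Sketch: enable (on == true) or disable the hairpin Tx completion queue. */
static int
example_switch_complq(struct cpfl_vport *cpfl_vport, bool on)
{
        struct idpf_vport *vport = &cpfl_vport->base;
        uint32_t type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
        uint16_t queue_id = cpfl_vport->p2p_tx_complq->queue_id;

        /* Issues the virtchnl enable/disable-queues message for one queue. */
        return idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
}
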
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 2 +- drivers/common/idpf/idpf_common_virtchnl.h | 3 +++ drivers/common/idpf/version.map | 1 + 3 files changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index 211b44a88e..6455f640da 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -733,7 +733,7 @@ idpf_vc_vectors_dealloc(struct idpf_vport *vport) return err; } -static int +int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid, uint32_t type, bool on) { diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index db83761a5e..9ff5c38c26 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -71,6 +71,9 @@ __rte_internal int idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, uint16_t num_qs); __rte_internal +int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid, + uint32_t type, bool on); +__rte_internal int idpf_vc_queue_grps_del(struct idpf_vport *vport, uint16_t num_q_grps, struct virtchnl2_queue_group_id *qg_ids); diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 17e77884ce..25624732b0 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -40,6 +40,7 @@ INTERNAL { idpf_vc_cmd_execute; idpf_vc_ctlq_post_rx_buffs; idpf_vc_ctlq_recv; + idpf_vc_ena_dis_one_queue; idpf_vc_irq_map_unmap_config; idpf_vc_one_msg_read; idpf_vc_ptype_info_query; From patchwork Mon Jun 5 06:17:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128085 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 28C7E42C31; Mon, 5 Jun 2023 08:43:17 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CD66742D40; Mon, 5 Jun 2023 08:42:31 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 7FAA742D3E for ; Mon, 5 Jun 2023 08:42:30 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947350; x=1717483350; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=au7iRmJmrfSeMraOXLqiGH2EDUdJMB+znqL+nRerzWY=; b=gX0ZwKkzQoREoR2yaLJ1H4c7jskXTnPxqK1MSQ55NE8vPl4wQuP2jHHM +3Z/GtUCj2cSYOd5p7XAaBWcIuS4H4QFfX4xT1XWncK4gcbe33ySYxo3+ IhG7zUnjqbLCT6LknQCeIIWkypVlx+AkK0qCXMY1ZrVT8+HUIVudbyU4t Jo+kJWD3Q02VYZY6qF7yBnehVn9fXyS7VYqLQwqUNFrMqMOiVYtAkA9vu Vsp0U60hSIIi9QTS0HkuuVSN3RZoiOONYp+6JHZFpdSt5h6QJEePyOE1+ tCWXeWKWDgRnfMGUOupFCw7eDeVMIFSNaYWxpPEMXLdLgZp7FtHDA2e3q Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839654" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839654" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:29 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301051" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301051" 
Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:27 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v8 09/14] net/cpfl: support hairpin queue start/stop Date: Mon, 5 Jun 2023 06:17:19 +0000 Message-Id: <20230605061724.88130-10-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports Rx/Tx hairpin queue start/stop. Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 46 +++++++++ drivers/net/cpfl/cpfl_rxtx.c | 164 +++++++++++++++++++++++++++++---- drivers/net/cpfl/cpfl_rxtx.h | 15 +++ 3 files changed, 207 insertions(+), 18 deletions(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index a06def06d0..2b99e58341 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -896,6 +896,52 @@ cpfl_start_queues(struct rte_eth_dev *dev) } } + /* For non-manual bind hairpin queues, enable Tx queue and Rx queue, + * then enable Tx completion queue and Rx buffer queue. + */ + for (i = cpfl_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq->hairpin_info.hairpin_q && !cpfl_vport->p2p_manual_bind) { + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + i - cpfl_vport->nb_data_txq, + false, true); + if (err) + PMD_DRV_LOG(ERR, "Failed to switch hairpin TX queue %u on", + i); + else + cpfl_txq->base.q_started = true; + } + } + + for (i = cpfl_vport->nb_data_rxq; i < dev->data->nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q && !cpfl_vport->p2p_manual_bind) { + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + i - cpfl_vport->nb_data_rxq, + true, true); + if (err) + PMD_DRV_LOG(ERR, "Failed to switch hairpin RX queue %u on", + i); + else + cpfl_rxq->base.q_started = true; + } + } + + if (!cpfl_vport->p2p_manual_bind && + cpfl_vport->p2p_tx_complq != NULL && + cpfl_vport->p2p_rx_bufq != NULL) { + err = cpfl_switch_hairpin_complq(cpfl_vport, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin Tx complq"); + return err; + } + err = cpfl_switch_hairpin_bufq(cpfl_vport, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin Rx bufq"); + return err; + } + } + return err; } diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 9408c6e1a4..8d1f8a560b 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -1002,6 +1002,89 @@ cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq return idpf_vc_txq_config_by_info(vport, txq_info, 1); } +int +cpfl_switch_hairpin_complq(struct cpfl_vport *cpfl_vport, bool on) +{ + struct idpf_vport *vport = &cpfl_vport->base; + uint32_t type; + int err, queue_id; + + type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + queue_id = cpfl_vport->p2p_tx_complq->queue_id; + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + + return err; +} + +int 
+cpfl_switch_hairpin_bufq(struct cpfl_vport *cpfl_vport, bool on) +{ + struct idpf_vport *vport = &cpfl_vport->base; + uint32_t type; + int err, queue_id; + + type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; + queue_id = cpfl_vport->p2p_rx_bufq->queue_id; + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + + return err; +} + +int +cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t logic_qid, + bool rx, bool on) +{ + struct idpf_vport *vport = &cpfl_vport->base; + uint32_t type; + int err, queue_id; + + type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX; + + if (type == VIRTCHNL2_QUEUE_TYPE_RX) + queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_start_qid, logic_qid); + else + queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->tx_start_qid, logic_qid); + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + if (err) + return err; + + return err; +} + +static int +cpfl_alloc_split_p2p_rxq_mbufs(struct idpf_rx_queue *rxq) +{ + volatile struct virtchnl2_p2p_rx_buf_desc *rxd; + struct rte_mbuf *mbuf = NULL; + uint64_t dma_addr; + uint16_t i; + + for (i = 0; i < rxq->nb_rx_desc; i++) { + mbuf = rte_mbuf_raw_alloc(rxq->mp); + if (unlikely(!mbuf)) { + PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX"); + return -ENOMEM; + } + + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + + rxd = &((volatile struct virtchnl2_p2p_rx_buf_desc *)(rxq->rx_ring))[i]; + rxd->reserve0 = 0; + rxd->pkt_addr = dma_addr; + } + + rxq->nb_rx_hold = 0; + /* The value written in the RX buffer queue tail register, must be a multiple of 8.*/ + rxq->rx_tail = rxq->nb_rx_desc - CPFL_HAIRPIN_Q_TAIL_AUX_VALUE; + + return 0; +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { @@ -1055,22 +1138,31 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); } else { /* Split queue */ - err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1); - if (err != 0) { - PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); - return err; - } - err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2); - if (err != 0) { - PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); - return err; + if (cpfl_rxq->hairpin_info.hairpin_q) { + err = cpfl_alloc_split_p2p_rxq_mbufs(rxq->bufq1); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate p2p RX buffer queue mbuf"); + return err; + } + } else { + err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); + return err; + } + err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); + return err; + } } rte_wmb(); /* Init the RX tail register. 
*/ IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail); - IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail); + if (rxq->bufq2) + IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail); } return err; @@ -1177,7 +1269,12 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) return -EINVAL; cpfl_rxq = dev->data->rx_queues[rx_queue_id]; - err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); + if (cpfl_rxq->hairpin_info.hairpin_q) + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + rx_queue_id - cpfl_vport->nb_data_txq, + true, false); + else + err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", rx_queue_id); @@ -1191,10 +1288,17 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) idpf_qc_single_rx_queue_reset(rxq); } else { rxq->bufq1->ops->release_mbufs(rxq->bufq1); - rxq->bufq2->ops->release_mbufs(rxq->bufq2); - idpf_qc_split_rx_queue_reset(rxq); + if (rxq->bufq2) + rxq->bufq2->ops->release_mbufs(rxq->bufq2); + if (cpfl_rxq->hairpin_info.hairpin_q) { + cpfl_rx_hairpin_descq_reset(rxq); + cpfl_rx_hairpin_bufq_reset(rxq->bufq1); + } else { + idpf_qc_split_rx_queue_reset(rxq); + } } - dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + if (!cpfl_rxq->hairpin_info.hairpin_q) + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; } @@ -1213,7 +1317,12 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) cpfl_txq = dev->data->tx_queues[tx_queue_id]; - err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); + if (cpfl_txq->hairpin_info.hairpin_q) + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + tx_queue_id - cpfl_vport->nb_data_txq, + false, false); + else + err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", tx_queue_id); @@ -1226,10 +1335,17 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { idpf_qc_single_tx_queue_reset(txq); } else { - idpf_qc_split_tx_descq_reset(txq); - idpf_qc_split_tx_complq_reset(txq->complq); + if (cpfl_txq->hairpin_info.hairpin_q) { + cpfl_tx_hairpin_descq_reset(txq); + cpfl_tx_hairpin_complq_reset(txq->complq); + } else { + idpf_qc_split_tx_descq_reset(txq); + idpf_qc_split_tx_complq_reset(txq->complq); + } } - dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + + if (!cpfl_txq->hairpin_info.hairpin_q) + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; } @@ -1249,10 +1365,22 @@ cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid) void cpfl_stop_queues(struct rte_eth_dev *dev) { + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; struct cpfl_rx_queue *cpfl_rxq; struct cpfl_tx_queue *cpfl_txq; int i; + if (cpfl_vport->p2p_tx_complq != NULL) { + if (cpfl_switch_hairpin_complq(cpfl_vport, false) != 0) + PMD_DRV_LOG(ERR, "Failed to stop hairpin Tx complq"); + } + + if (cpfl_vport->p2p_rx_bufq != NULL) { + if (cpfl_switch_hairpin_bufq(cpfl_vport, false) != 0) + PMD_DRV_LOG(ERR, "Failed to stop hairpin Rx bufq"); + } + for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; if (cpfl_rxq == NULL) diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index 872ebc1bfd..aacd087b56 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -41,6 +41,17 @@ 
#define CPFL_RX_BUF_STRIDE 64 +/* The value written in the RX buffer queue tail register, + * and in WritePTR field in the TX completion queue context, + * must be a multiple of 8. + */ +#define CPFL_HAIRPIN_Q_TAIL_AUX_VALUE 8 + +struct virtchnl2_p2p_rx_buf_desc { + __le64 reserve0; + __le64 pkt_addr; /* Packet buffer address */ +}; + struct cpfl_rxq_hairpin_info { bool hairpin_q; /* if rx queue is a hairpin queue */ uint16_t peer_txp; @@ -102,4 +113,8 @@ int cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport); int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq); int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport); int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq); +int cpfl_switch_hairpin_complq(struct cpfl_vport *cpfl_vport, bool on); +int cpfl_switch_hairpin_bufq(struct cpfl_vport *cpfl_vport, bool on); +int cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t qid, + bool rx, bool on); #endif /* _CPFL_RXTX_H_ */ From patchwork Mon Jun 5 06:17:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128086 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 53E0D42C31; Mon, 5 Jun 2023 08:43:24 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3337742D74; Mon, 5 Jun 2023 08:42:33 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 9721242D3E for ; Mon, 5 Jun 2023 08:42:31 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947351; x=1717483351; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=AgdOhh+iIeT2R0RNmGL0vg8twvdrqp6FQBooT3cjfX8=; b=DYfwO6G/uEVrQTGJUGXiLB4N75Z2UyFCH4rmKMEgIoBIH9bGS15ZpAD3 PK8Bg5FYaC/U7JBaPXaLS9aBh0YcU8qTKhVo+8tUamVbr2qdeR/dMiJ8q E5hQ3xnhhrYdLg5T3c9Vt0swoBdWF7lQCStlKl9Jstu9V2J1jXyTTL3+Y G6SNbaMS0lpT7mGDqcDdd9QZRdmXNG+wQpRTFx+BPzY2ASPSw88rcVsBE e+SAtq8pMUzX/favJJRLPp3tCUdEqmdfGXD//T1832vivE4m8FIBh1OKZ OijhMUZP9ivx6S8GVi6fV3g5LgcMxwy/03gkxFerSkO5dW/rqwxAK8YlD w==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839655" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839655" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:31 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301054" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301054" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:29 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v8 10/14] common/idpf: add irq map config API Date: Mon, 5 Jun 2023 06:17:20 +0000 Message-Id: <20230605061724.88130-11-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 
2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports idpf_vport_irq_map_config_by_qids API. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_device.c | 75 ++++++++++++++++++++++++ drivers/common/idpf/idpf_common_device.h | 4 ++ drivers/common/idpf/version.map | 1 + 3 files changed, 80 insertions(+) diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c index dc47551b17..cc4207a46e 100644 --- a/drivers/common/idpf/idpf_common_device.c +++ b/drivers/common/idpf/idpf_common_device.c @@ -667,6 +667,81 @@ idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues) return ret; } +int +idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, uint32_t *qids, uint16_t nb_rx_queues) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_queue_vector *qv_map; + struct idpf_hw *hw = &adapter->hw; + uint32_t dynctl_val, itrn_val; + uint32_t dynctl_reg_start; + uint32_t itrn_reg_start; + uint16_t i; + int ret; + + qv_map = rte_zmalloc("qv_map", + nb_rx_queues * + sizeof(struct virtchnl2_queue_vector), 0); + if (qv_map == NULL) { + DRV_LOG(ERR, "Failed to allocate %d queue-vector map", + nb_rx_queues); + ret = -ENOMEM; + goto qv_map_alloc_err; + } + + /* Rx interrupt disabled, Map interrupt only for writeback */ + + /* The capability flags adapter->caps.other_caps should be + * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if + * condition should be updated when the FW can return the + * correct flag bits. + */ + dynctl_reg_start = + vport->recv_vectors->vchunks.vchunks->dynctl_reg_start; + itrn_reg_start = + vport->recv_vectors->vchunks.vchunks->itrn_reg_start; + dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start); + DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x", dynctl_val); + itrn_val = IDPF_READ_REG(hw, itrn_reg_start); + DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val); + /* Force write-backs by setting WB_ON_ITR bit in DYN_CTL + * register. WB_ON_ITR and INTENA are mutually exclusive + * bits. Setting WB_ON_ITR bits means TX and RX Descs + * are written back based on ITR expiration irrespective + * of INTENA setting. + */ + /* TBD: need to tune INTERVAL value for better performance. */ + itrn_val = (itrn_val == 0) ? 
IDPF_DFLT_INTERVAL : itrn_val; + dynctl_val = VIRTCHNL2_ITR_IDX_0 << + PF_GLINT_DYN_CTL_ITR_INDX_S | + PF_GLINT_DYN_CTL_WB_ON_ITR_M | + itrn_val << PF_GLINT_DYN_CTL_INTERVAL_S; + IDPF_WRITE_REG(hw, dynctl_reg_start, dynctl_val); + + for (i = 0; i < nb_rx_queues; i++) { + /* map all queues to the same vector */ + qv_map[i].queue_id = qids[i]; + qv_map[i].vector_id = + vport->recv_vectors->vchunks.vchunks->start_vector_id; + } + vport->qv_map = qv_map; + + ret = idpf_vc_irq_map_unmap_config(vport, nb_rx_queues, true); + if (ret != 0) { + DRV_LOG(ERR, "config interrupt mapping failed"); + goto config_irq_map_err; + } + + return 0; + +config_irq_map_err: + rte_free(vport->qv_map); + vport->qv_map = NULL; + +qv_map_alloc_err: + return ret; +} + int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues) { diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h index 112367dae8..f767ea7cec 100644 --- a/drivers/common/idpf/idpf_common_device.h +++ b/drivers/common/idpf/idpf_common_device.h @@ -200,5 +200,9 @@ int idpf_vport_info_init(struct idpf_vport *vport, struct virtchnl2_create_vport *vport_info); __rte_internal void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes); +__rte_internal +int idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, + uint32_t *qids, + uint16_t nb_rx_queues); #endif /* _IDPF_COMMON_DEVICE_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 25624732b0..0729f6b912 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -69,6 +69,7 @@ INTERNAL { idpf_vport_info_init; idpf_vport_init; idpf_vport_irq_map_config; + idpf_vport_irq_map_config_by_qids; idpf_vport_irq_unmap_config; idpf_vport_rss_config; idpf_vport_stats_update; From patchwork Mon Jun 5 06:17:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128087 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0536A42C31; Mon, 5 Jun 2023 08:43:30 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2F7DC42D81; Mon, 5 Jun 2023 08:42:36 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id EE49542D6D for ; Mon, 5 Jun 2023 08:42:32 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947353; x=1717483353; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=TuGfAAb42Qj4k04bSwpC8oW9c8uSveokzJrVzbfXP1s=; b=hQHdpm+jjzdFnkhEVzFJgmYg8sIxaqZzCnC3ir4LalkDK/c4FqFn0wtr jnlFsx//u3fhT0WZY2iJd5Vu4SA5DpkWkPmCK9OwwhRUJvVk0Hs2lSVUA KSobwDXOHBkfYaL+cvJdj7+TNln2A9y3mBbaqIgWPwAtUDZut/LQHO7E0 oOJuq7RjKAXufddJCAXscGaofBoeSoCeO10qKwdJjYUIIewNgOo+Vbvzc 69XjfA6wpeCTolXJU0udLNF5dh7TGUcaHzQI5V9+LWXShYdhh3DNGljOx h8JxSLP/A1ZgqWZpMVYpYzH5FJWyZLWqBnn/OeV7e3iiPYP1ePfB8D5y4 A==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839659" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839659" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 
23:42:32 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301059" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301059" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:31 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v8 11/14] net/cpfl: enable write back based on ITR expire Date: Mon, 5 Jun 2023 06:17:21 +0000 Message-Id: <20230605061724.88130-12-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch enables write back based on ITR expire (WR_ON_ITR) for hairpin queues. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 2b99e58341..850f1c0bc6 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -735,11 +735,22 @@ cpfl_dev_configure(struct rte_eth_dev *dev) static int cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) { + uint32_t qids[CPFL_MAX_P2P_NB_QUEUES + IDPF_DEFAULT_RXQ_NUM] = {0}; struct cpfl_vport *cpfl_vport = dev->data->dev_private; struct idpf_vport *vport = &cpfl_vport->base; uint16_t nb_rx_queues = dev->data->nb_rx_queues; + struct cpfl_rx_queue *cpfl_rxq; + int i; - return idpf_vport_irq_map_config(vport, nb_rx_queues); + for (i = 0; i < nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q) + qids[i] = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_start_qid, + (i - cpfl_vport->nb_data_rxq)); + else + qids[i] = cpfl_hw_qid_get(vport->chunks_info.rx_start_qid, i); + } + return idpf_vport_irq_map_config_by_qids(vport, qids, nb_rx_queues); } /* Update hairpin_info for dev's tx hairpin queue */ From patchwork Mon Jun 5 06:17:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128088 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A953E42C31; Mon, 5 Jun 2023 08:43:35 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 29FD942D88; Mon, 5 Jun 2023 08:42:37 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 67B5F4113F for ; Mon, 5 Jun 2023 08:42:34 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947354; x=1717483354; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Qny8Djj9LlXWdHdo6JQTwa0v9HR7wN5xcNdQnngSaYI=; b=DOXWHbAeMcVBBl2xLhRUU++tTk89fkXdNsMYJHEanRQEMgahI/AtQSRS 4C+qXZSEMGfdl6AihjFLLld4RT6hM9+rmJHtq08n2wryr8MSUAax09PAb 
ROLP9XR4MOIA7w8QrOzsxfeAzlZ4ZqPemrNqB2dxycZ4AxdR27K9OveHn hNoCUazXBeYc3R1KZrNTdfrbRssyFe2nb98UTUaaQwvAImYA3NQsE6eX0 WUVTWiaEM08M2649yHq18pAlaYvNI7gL2SgZGXpX9ru0AUXTu3YICItzN NGttNdc3ayl673gEN7TEddmiGNpd7L2+jGOa/EA8TwXHPXZweCtkBC+dn Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839663" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839663" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:33 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301063" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301063" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:32 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v8 12/14] net/cpfl: support peer ports get Date: Mon, 5 Jun 2023 06:17:22 +0000 Message-Id: <20230605061724.88130-13-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports get hairpin peer ports. Signed-off-by: Xiao Wang Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 41 ++++++++++++++++++++++++++++++++++ 1 file changed, 41 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 850f1c0bc6..1a1ca4bc77 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -1080,6 +1080,46 @@ cpfl_dev_close(struct rte_eth_dev *dev) return 0; } +static int +cpfl_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, + size_t len, uint32_t tx) +{ + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; + struct idpf_tx_queue *txq; + struct idpf_rx_queue *rxq; + struct cpfl_tx_queue *cpfl_txq; + struct cpfl_rx_queue *cpfl_rxq; + int i; + int j = 0; + + if (len <= 0) + return -EINVAL; + + if (cpfl_vport->p2p_q_chunks_info == NULL) + return -ENOTSUP; + + if (tx > 0) { + for (i = cpfl_vport->nb_data_txq, j = 0; i < dev->data->nb_tx_queues; i++, j++) { + txq = dev->data->tx_queues[i]; + if (txq == NULL) + return -EINVAL; + cpfl_txq = (struct cpfl_tx_queue *)txq; + peer_ports[j] = cpfl_txq->hairpin_info.peer_rxp; + } + } else if (tx == 0) { + for (i = cpfl_vport->nb_data_rxq, j = 0; i < dev->data->nb_rx_queues; i++, j++) { + rxq = dev->data->rx_queues[i]; + if (rxq == NULL) + return -EINVAL; + cpfl_rxq = (struct cpfl_rx_queue *)rxq; + peer_ports[j] = cpfl_rxq->hairpin_info.peer_txp; + } + } + + return j; +} + static const struct eth_dev_ops cpfl_eth_dev_ops = { .dev_configure = cpfl_dev_configure, .dev_close = cpfl_dev_close, @@ -1109,6 +1149,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .hairpin_cap_get = cpfl_hairpin_cap_get, .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup, .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup, + .hairpin_get_peer_ports = cpfl_hairpin_get_peer_ports, }; static int From patchwork Mon Jun 5 06:17:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128089 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 65D9742C31; Mon, 5 Jun 2023 08:43:41 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 24BCB42D93; Mon, 5 Jun 2023 08:42:38 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 1F9994113F for ; Mon, 5 Jun 2023 08:42:35 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947356; x=1717483356; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=b7WywgjAkhemV5RAtfTJTAfFXMEszSeIc/c+S/CaSSU=; b=ae2cV11LVc1fAqUei2oL4jyaDch6VX/Tr1GYxEuVD/Jg6GGNNrfvWsz4 4l4rTkoEsFyFtdjeilnZSMPobfs7VhjEHZV9HRss4EWam4S5ggNJ9/ezE NURRxddkTp3mHoFZpOGVfMik56upnpLkUT9yHbNV6TabQyB/gAIkwbWsk cY+bFPBhKucKJq7CbOZUOPBAoUHArWJV4vPZ4HBbXGCz8RE7toVKoVOPj ESZTqTsEVAgVW8BoBKIIQTxZLaWy+QMpahFDdugCLzLBzxCR8Z3ZLaHbp +pIzAz1/TbkqQbebF7tF2bbDdUr3c2t+QKpKfLaYOpSFgurvbyTWsGnF/ A==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839666" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839666" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:35 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301066" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301066" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:34 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v8 13/14] net/cpfl: support hairpin bind/unbind Date: Mon, 5 Jun 2023 06:17:23 +0000 Message-Id: <20230605061724.88130-14-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports hairpin_bind/unbind ops. 
Signed-off-by: Xiao Wang Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 137 +++++++++++++++++++++++++++++++++ 1 file changed, 137 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 1a1ca4bc77..0d127eae3e 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -1120,6 +1120,141 @@ cpfl_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, return j; } +static int +cpfl_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port) +{ + struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private; + struct idpf_vport *tx_vport = &cpfl_tx_vport->base; + struct cpfl_vport *cpfl_rx_vport; + struct cpfl_tx_queue *cpfl_txq; + struct cpfl_rx_queue *cpfl_rxq; + struct rte_eth_dev *peer_dev; + struct idpf_vport *rx_vport; + int err = 0; + int i; + + err = cpfl_txq_hairpin_info_update(dev, rx_port); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to update Tx hairpin queue info."); + return err; + } + + /* configure hairpin queues */ + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + err = cpfl_hairpin_txq_config(tx_vport, cpfl_txq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u", i); + return err; + } + } + + err = cpfl_hairpin_tx_complq_config(cpfl_tx_vport); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to config Tx completion queue"); + return err; + } + + peer_dev = &rte_eth_devices[rx_port]; + cpfl_rx_vport = (struct cpfl_vport *)peer_dev->data->dev_private; + rx_vport = &cpfl_rx_vport->base; + cpfl_rxq_hairpin_mz_bind(peer_dev); + + err = cpfl_hairpin_rx_bufq_config(cpfl_rx_vport); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to config Rx buffer queue"); + return err; + } + + for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) { + cpfl_rxq = peer_dev->data->rx_queues[i]; + err = cpfl_hairpin_rxq_config(rx_vport, cpfl_rxq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Rx queue %u", i); + return err; + } + err = cpfl_rx_queue_init(peer_dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to init hairpin Rx queue %u", i); + return err; + } + } + + /* enable hairpin queues */ + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + err = cpfl_switch_hairpin_rxtx_queue(cpfl_tx_vport, + i - cpfl_tx_vport->nb_data_txq, + false, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin TX queue %u on", + i); + return err; + } + cpfl_txq->base.q_started = true; + } + + err = cpfl_switch_hairpin_complq(cpfl_tx_vport, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin Tx complq"); + return err; + } + + for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) { + cpfl_rxq = peer_dev->data->rx_queues[i]; + err = cpfl_switch_hairpin_rxtx_queue(cpfl_rx_vport, + i - cpfl_rx_vport->nb_data_rxq, + true, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin RX queue %u on", + i); + } + cpfl_rxq->base.q_started = true; + } + + err = cpfl_switch_hairpin_bufq(cpfl_rx_vport, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin Rx buffer queue"); + return err; + } + + return 0; +} + +static int +cpfl_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port) +{ + struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private; + struct rte_eth_dev *peer_dev = &rte_eth_devices[rx_port]; + struct cpfl_vport *cpfl_rx_vport = peer_dev->data->dev_private; + struct 
cpfl_tx_queue *cpfl_txq; + struct cpfl_rx_queue *cpfl_rxq; + int i; + + /* disable hairpin queues */ + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + cpfl_switch_hairpin_rxtx_queue(cpfl_tx_vport, + i - cpfl_tx_vport->nb_data_txq, + false, false); + cpfl_txq->base.q_started = false; + } + + cpfl_switch_hairpin_complq(cpfl_tx_vport, false); + + for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) { + cpfl_rxq = peer_dev->data->rx_queues[i]; + cpfl_switch_hairpin_rxtx_queue(cpfl_rx_vport, + i - cpfl_rx_vport->nb_data_rxq, + true, false); + cpfl_rxq->base.q_started = false; + } + + cpfl_switch_hairpin_bufq(cpfl_rx_vport, false); + + return 0; +} + static const struct eth_dev_ops cpfl_eth_dev_ops = { .dev_configure = cpfl_dev_configure, .dev_close = cpfl_dev_close, @@ -1150,6 +1285,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup, .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup, .hairpin_get_peer_ports = cpfl_hairpin_get_peer_ports, + .hairpin_bind = cpfl_hairpin_bind, + .hairpin_unbind = cpfl_hairpin_unbind, }; static int From patchwork Mon Jun 5 06:17:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128090 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 02D5242C31; Mon, 5 Jun 2023 08:43:47 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2977842D9E; Mon, 5 Jun 2023 08:42:39 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 5F28942D64 for ; Mon, 5 Jun 2023 08:42:37 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685947357; x=1717483357; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=oVQhuz58lMsIDG0WuorafsAG9p4gkLFTqvO9dYitu7A=; b=JhXtcS9bdSRPA+/351CD0AUnv9+DnhQvwSwVZrqDc9M309p9kdsKPw88 QMPi7GvII9eH+6iG2n7in5nW3vE/uxVQfZcaWuFwEmG2RETq5QnCu/K6H lpZ6aGnZfoBrLysnOiBeNdI9JfGNZbKjMq4G5jymT+6MAGudAmYzRNJYW KdVWXoJ+hE6DmiTNtLpY/bsTohUR+PYgIsj+G3mVPm41/xc+I/nnGy3L8 Xd/dR+pyh2SxdcDJ5cWdY668gPnDrCsJfbWHkovNWF/4JtknxbbpDW+H8 lF98aj/RQZwG7wPCcd26Dn3LzMXdHzuchBGirhQL1GGK36Ha3/B9P1tsA g==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="419839668" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="419839668" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Jun 2023 23:42:36 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="798301069" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="798301069" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by FMSMGA003.fm.intel.com with ESMTP; 04 Jun 2023 23:42:35 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v8 14/14] doc: update the doc of CPFL PMD Date: Mon, 5 Jun 2023 06:17:24 +0000 Message-Id: <20230605061724.88130-15-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605061724.88130-1-beilei.xing@intel.com> References: 
<20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing Update cpfl.rst to clarify hairpin support. Signed-off-by: Beilei Xing --- doc/guides/nics/cpfl.rst | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst index d25db088eb..8d5c3082e4 100644 --- a/doc/guides/nics/cpfl.rst +++ b/doc/guides/nics/cpfl.rst @@ -106,3 +106,10 @@ The paths are chosen based on 2 conditions: A value "P" means the offload feature is not supported by vector path. If any not supported features are used, cpfl vector PMD is disabled and the scalar paths are chosen. + +Hairpin queue +~~~~~~~~~~~~~ + + The E2100 Series can loop back packets from an Rx port to a Tx port; this feature is + called port-to-port or hairpin. + Currently, the PMD only supports single-port hairpin.
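
Editor's note (illustrative, not part of the series): to complement the documentation above, the following is a hedged, self-contained sketch of how an application could configure a single-port hairpin queue pair with the generic ethdev API and then query the peer ports. The descriptor count, queue indices, and the assumption that the port was already configured with room for the hairpin queues are placeholders for the example, not requirements stated by the patches.

    #include <rte_common.h>
    #include <rte_ethdev.h>

    /* Illustrative only: one hairpin Rx/Tx queue pair on a single port
     * (the port hairpins to itself).  Assumes rte_eth_dev_configure() was
     * already called with queue counts that include the hairpin queues.
     */
    static int
    setup_single_port_hairpin(uint16_t port_id, uint16_t hairpin_rxq,
                              uint16_t hairpin_txq)
    {
        struct rte_eth_hairpin_cap cap;
        struct rte_eth_hairpin_conf conf = { .peer_count = 1 };
        uint16_t peer_ports[RTE_MAX_ETHPORTS];
        int ret;

        ret = rte_eth_dev_hairpin_capability_get(port_id, &cap);
        if (ret != 0)
            return ret; /* hairpin not supported on this port */

        /* Rx hairpin queue: its peer is the Tx hairpin queue of the same port. */
        conf.peers[0].port = port_id;
        conf.peers[0].queue = hairpin_txq;
        ret = rte_eth_rx_hairpin_queue_setup(port_id, hairpin_rxq, 512, &conf);
        if (ret != 0)
            return ret;

        /* Tx hairpin queue: its peer is the Rx hairpin queue of the same port. */
        conf.peers[0].queue = hairpin_rxq;
        ret = rte_eth_tx_hairpin_queue_setup(port_id, hairpin_txq, 512, &conf);
        if (ret != 0)
            return ret;

        ret = rte_eth_dev_start(port_id);
        if (ret != 0)
            return ret;

        /* Query Tx-side peers; this exercises the hairpin_get_peer_ports op. */
        ret = rte_eth_hairpin_get_peer_ports(port_id, peer_ports,
                                             RTE_DIM(peer_ports), 1);
        return ret < 0 ? ret : 0;
    }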