From patchwork Mon Jun 5 09:06:28 2023
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 128104
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing
Subject: [PATCH v9 01/14] net/cpfl: refine structures
Date: Mon, 5 Jun 2023 09:06:28 +0000
Message-Id: <20230605090641.36525-2-beilei.xing@intel.com>
In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com>
References: <20230605061724.88130-1-beilei.xing@intel.com> <20230605090641.36525-1-beilei.xing@intel.com>

From: Beilei Xing

This patch refines the cpfl_rx_queue, cpfl_tx_queue and cpfl_vport structures to prepare for hairpin queue support.
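The change follows a simple wrapper pattern: the idpf base structures stay untouched and the cpfl driver embeds them as the first member of its own types, so the data stored in dev->data->dev_private can later carry cpfl-specific hairpin fields while the idpf common helpers keep operating on the embedded base. A minimal sketch of the idea follows; the cpfl_vport_to_base() helper is only illustrative (the patch open-codes the two-line conversion in each ethdev callback):

struct cpfl_vport {
	struct idpf_vport base;	/* idpf base vport embedded as first member */
	/* cpfl-specific (hairpin/p2p) fields are added by later patches */
};

/* illustrative helper, not part of this patch */
static inline struct idpf_vport *
cpfl_vport_to_base(struct rte_eth_dev *dev)
{
	struct cpfl_vport *cpfl_vport = dev->data->dev_private;

	return &cpfl_vport->base;	/* idpf common helpers still take an idpf_vport */
}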
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 85 +++++++----- drivers/net/cpfl/cpfl_ethdev.h | 6 +- drivers/net/cpfl/cpfl_rxtx.c | 175 +++++++++++++++++------- drivers/net/cpfl/cpfl_rxtx.h | 8 ++ drivers/net/cpfl/cpfl_rxtx_vec_common.h | 17 +-- 5 files changed, 196 insertions(+), 95 deletions(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 7528a14d05..e587155db6 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -124,7 +124,8 @@ static int cpfl_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct rte_eth_link new_link; unsigned int i; @@ -156,7 +157,8 @@ cpfl_dev_link_update(struct rte_eth_dev *dev, static int cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; dev_info->max_rx_queues = base->caps.max_rx_q; @@ -216,7 +218,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; /* mtu setting is forbidden if port is start */ if (dev->data->dev_started) { @@ -256,12 +259,12 @@ static uint64_t cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) { uint64_t mbuf_alloc_failed = 0; - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i = 0; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed, + cpfl_rxq = dev->data->rx_queues[i]; + mbuf_alloc_failed += __atomic_load_n(&cpfl_rxq->base.rx_stats.mbuf_alloc_failed, __ATOMIC_RELAXED); } @@ -271,8 +274,8 @@ cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) static int cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; int ret; @@ -305,20 +308,20 @@ cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) static void cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - __atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED); + cpfl_rxq = dev->data->rx_queues[i]; + __atomic_store_n(&cpfl_rxq->base.rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED); } } static int cpfl_dev_stats_reset(struct rte_eth_dev *dev) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; int ret; @@ -343,8 +346,8 @@ static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev) static int cpfl_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int 
n) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; unsigned int i; int ret; @@ -459,7 +462,8 @@ cpfl_rss_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t idx, shift; int ret = 0; @@ -498,7 +502,8 @@ cpfl_rss_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t idx, shift; int ret = 0; @@ -536,7 +541,8 @@ static int cpfl_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; int ret = 0; @@ -601,7 +607,8 @@ static int cpfl_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; int ret = 0; @@ -638,7 +645,8 @@ cpfl_rss_hash_conf_get(struct rte_eth_dev *dev, static int cpfl_dev_configure(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct rte_eth_conf *conf = &dev->data->dev_conf; struct idpf_adapter *base = vport->adapter; int ret; @@ -710,7 +718,8 @@ cpfl_dev_configure(struct rte_eth_dev *dev) static int cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; uint16_t nb_rx_queues = dev->data->nb_rx_queues; return idpf_vport_irq_map_config(vport, nb_rx_queues); @@ -719,14 +728,14 @@ cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) static int cpfl_start_queues(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; - struct idpf_tx_queue *txq; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; int err = 0; int i; for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL || txq->tx_deferred_start) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL || cpfl_txq->base.tx_deferred_start) continue; err = cpfl_tx_queue_start(dev, i); if (err != 0) { @@ -736,8 +745,8 @@ cpfl_start_queues(struct rte_eth_dev *dev) } for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - if (rxq == NULL || rxq->rx_deferred_start) + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq == NULL || cpfl_rxq->base.rx_deferred_start) continue; err = cpfl_rx_queue_start(dev, i); if (err != 0) { @@ -752,7 +761,8 @@ cpfl_start_queues(struct rte_eth_dev *dev) static int cpfl_dev_start(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = 
dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base); uint16_t num_allocated_vectors = base->caps.num_allocated_vectors; @@ -813,7 +823,8 @@ cpfl_dev_start(struct rte_eth_dev *dev) static int cpfl_dev_stop(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; if (dev->data->dev_started == 0) return 0; @@ -832,7 +843,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev) static int cpfl_dev_close(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); @@ -842,7 +854,7 @@ cpfl_dev_close(struct rte_eth_dev *dev) adapter->cur_vport_nb--; dev->data->dev_private = NULL; adapter->vports[vport->sw_idx] = NULL; - rte_free(vport); + rte_free(cpfl_vport); return 0; } @@ -1047,7 +1059,7 @@ cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id) int i; for (i = 0; i < adapter->cur_vport_nb; i++) { - vport = adapter->vports[i]; + vport = &adapter->vports[i]->base; if (vport->vport_id != vport_id) continue; else @@ -1275,7 +1287,8 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter) static int cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_vport_param *param = init_params; struct cpfl_adapter_ext *adapter = param->adapter; /* for sending create vport virtchnl msg prepare */ @@ -1300,7 +1313,7 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) goto err; } - adapter->vports[param->idx] = vport; + adapter->vports[param->idx] = cpfl_vport; adapter->cur_vports |= RTE_BIT32(param->devarg_id); adapter->cur_vport_nb++; @@ -1415,7 +1428,7 @@ cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, snprintf(name, sizeof(name), "cpfl_%s_vport_0", pci_dev->device.name); retval = rte_eth_dev_create(&pci_dev->device, name, - sizeof(struct idpf_vport), + sizeof(struct cpfl_vport), NULL, NULL, cpfl_dev_vport_init, &vport_param); if (retval != 0) @@ -1433,7 +1446,7 @@ cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, pci_dev->device.name, devargs.req_vports[i]); retval = rte_eth_dev_create(&pci_dev->device, name, - sizeof(struct idpf_vport), + sizeof(struct cpfl_vport), NULL, NULL, cpfl_dev_vport_init, &vport_param); if (retval != 0) diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 200dfcac02..81fe9ac4c3 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -69,13 +69,17 @@ struct cpfl_devargs { uint16_t req_vport_nb; }; +struct cpfl_vport { + struct idpf_vport base; +}; + struct cpfl_adapter_ext { TAILQ_ENTRY(cpfl_adapter_ext) next; struct idpf_adapter base; char name[CPFL_ADAPTER_NAME_LEN]; - struct idpf_vport **vports; + struct cpfl_vport **vports; uint16_t max_vport_nb; uint16_t cur_vports; /* bit mask of created vport */ diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 75021c3c54..04a51b8d15 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -128,7 +128,8 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev 
*dev, struct idpf_rx_queue *rxq, uint16_t nb_desc, unsigned int socket_id, struct rte_mempool *mp, uint8_t bufq_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct idpf_hw *hw = &base->hw; const struct rte_memzone *mz; @@ -220,15 +221,69 @@ cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq) rte_free(bufq); } +static void +cpfl_rx_queue_release(void *rxq) +{ + struct cpfl_rx_queue *cpfl_rxq = rxq; + struct idpf_rx_queue *q = NULL; + + if (cpfl_rxq == NULL) + return; + + q = &cpfl_rxq->base; + + /* Split queue */ + if (!q->adapter->is_rx_singleq) { + if (q->bufq2) + cpfl_rx_split_bufq_release(q->bufq2); + + if (q->bufq1) + cpfl_rx_split_bufq_release(q->bufq1); + + rte_free(cpfl_rxq); + return; + } + + /* Single queue */ + q->ops->release_mbufs(q); + rte_free(q->sw_ring); + rte_memzone_free(q->mz); + rte_free(cpfl_rxq); +} + +static void +cpfl_tx_queue_release(void *txq) +{ + struct cpfl_tx_queue *cpfl_txq = txq; + struct idpf_tx_queue *q = NULL; + + if (cpfl_txq == NULL) + return; + + q = &cpfl_txq->base; + + if (q->complq) { + rte_memzone_free(q->complq->mz); + rte_free(q->complq); + } + + q->ops->release_mbufs(q); + rte_free(q->sw_ring); + rte_memzone_free(q->mz); + rte_free(cpfl_txq); +} + int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct idpf_hw *hw = &base->hw; + struct cpfl_rx_queue *cpfl_rxq; const struct rte_memzone *mz; struct idpf_rx_queue *rxq; uint16_t rx_free_thresh; @@ -248,21 +303,23 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, /* Free memory if needed */ if (dev->data->rx_queues[queue_idx] != NULL) { - idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]); + cpfl_rx_queue_release(dev->data->rx_queues[queue_idx]); dev->data->rx_queues[queue_idx] = NULL; } /* Setup Rx queue */ - rxq = rte_zmalloc_socket("cpfl rxq", - sizeof(struct idpf_rx_queue), + cpfl_rxq = rte_zmalloc_socket("cpfl rxq", + sizeof(struct cpfl_rx_queue), RTE_CACHE_LINE_SIZE, socket_id); - if (rxq == NULL) { + if (cpfl_rxq == NULL) { PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure"); ret = -ENOMEM; goto err_rxq_alloc; } + rxq = &cpfl_rxq->base; + is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT); rxq->mp = mp; @@ -329,7 +386,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, } rxq->q_set = true; - dev->data->rx_queues[queue_idx] = rxq; + dev->data->rx_queues[queue_idx] = cpfl_rxq; return 0; @@ -349,7 +406,8 @@ cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; const struct rte_memzone *mz; struct idpf_tx_queue *cq; int ret; @@ -397,9 +455,11 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = 
dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t tx_rs_thresh, tx_free_thresh; + struct cpfl_tx_queue *cpfl_txq; struct idpf_hw *hw = &base->hw; const struct rte_memzone *mz; struct idpf_tx_queue *txq; @@ -419,21 +479,23 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, /* Free memory if needed. */ if (dev->data->tx_queues[queue_idx] != NULL) { - idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]); + cpfl_tx_queue_release(dev->data->tx_queues[queue_idx]); dev->data->tx_queues[queue_idx] = NULL; } /* Allocate the TX queue data structure. */ - txq = rte_zmalloc_socket("cpfl txq", - sizeof(struct idpf_tx_queue), + cpfl_txq = rte_zmalloc_socket("cpfl txq", + sizeof(struct cpfl_tx_queue), RTE_CACHE_LINE_SIZE, socket_id); - if (txq == NULL) { + if (cpfl_txq == NULL) { PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); ret = -ENOMEM; goto err_txq_alloc; } + txq = &cpfl_txq->base; + is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT); txq->nb_tx_desc = nb_desc; @@ -487,7 +549,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, queue_idx * vport->chunks_info.tx_qtail_spacing); txq->ops = &def_txq_ops; txq->q_set = true; - dev->data->tx_queues[queue_idx] = txq; + dev->data->tx_queues[queue_idx] = cpfl_txq; return 0; @@ -503,6 +565,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { + struct cpfl_rx_queue *cpfl_rxq; struct idpf_rx_queue *rxq; uint16_t max_pkt_len; uint32_t frame_size; @@ -511,7 +574,8 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) if (rx_queue_id >= dev->data->nb_rx_queues) return -EINVAL; - rxq = dev->data->rx_queues[rx_queue_id]; + cpfl_rxq = dev->data->rx_queues[rx_queue_id]; + rxq = &cpfl_rxq->base; if (rxq == NULL || !rxq->q_set) { PMD_DRV_LOG(ERR, "RX queue %u not available or setup", @@ -575,9 +639,10 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_rx_queue *rxq = - dev->data->rx_queues[rx_queue_id]; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq = dev->data->rx_queues[rx_queue_id]; + struct idpf_rx_queue *rxq = &cpfl_rxq->base; int err = 0; err = idpf_vc_rxq_config(vport, rxq); @@ -610,15 +675,15 @@ cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; if (tx_queue_id >= dev->data->nb_tx_queues) return -EINVAL; - txq = dev->data->tx_queues[tx_queue_id]; + cpfl_txq = dev->data->tx_queues[tx_queue_id]; /* Init the RX tail register. 
*/ - IDPF_PCI_REG_WRITE(txq->qtx_tail, 0); + IDPF_PCI_REG_WRITE(cpfl_txq->base.qtx_tail, 0); return 0; } @@ -626,12 +691,13 @@ cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id) int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_tx_queue *txq = + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_tx_queue *cpfl_txq = dev->data->tx_queues[tx_queue_id]; int err = 0; - err = idpf_vc_txq_config(vport, txq); + err = idpf_vc_txq_config(vport, &cpfl_txq->base); if (err != 0) { PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id); return err; @@ -650,7 +716,7 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on", tx_queue_id); } else { - txq->q_started = true; + cpfl_txq->base.q_started = true; dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; } @@ -661,13 +727,16 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq; struct idpf_rx_queue *rxq; int err; if (rx_queue_id >= dev->data->nb_rx_queues) return -EINVAL; + cpfl_rxq = dev->data->rx_queues[rx_queue_id]; err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", @@ -675,7 +744,7 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) return err; } - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = &cpfl_rxq->base; rxq->q_started = false; if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { rxq->ops->release_mbufs(rxq); @@ -693,13 +762,17 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_tx_queue *cpfl_txq; struct idpf_tx_queue *txq; int err; if (tx_queue_id >= dev->data->nb_tx_queues) return -EINVAL; + cpfl_txq = dev->data->tx_queues[tx_queue_id]; + err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", @@ -707,7 +780,7 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return err; } - txq = dev->data->tx_queues[tx_queue_id]; + txq = &cpfl_txq->base; txq->q_started = false; txq->ops->release_mbufs(txq); if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { @@ -724,25 +797,25 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid) { - idpf_qc_rx_queue_release(dev->data->rx_queues[qid]); + cpfl_rx_queue_release(dev->data->rx_queues[qid]); } void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid) { - idpf_qc_tx_queue_release(dev->data->tx_queues[qid]); + cpfl_tx_queue_release(dev->data->tx_queues[qid]); } void cpfl_stop_queues(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; - struct idpf_tx_queue *txq; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; int i; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - if (rxq 
== NULL) + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq == NULL) continue; if (cpfl_rx_queue_stop(dev, i) != 0) @@ -750,8 +823,8 @@ cpfl_stop_queues(struct rte_eth_dev *dev) } for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL) continue; if (cpfl_tx_queue_stop(dev, i) != 0) @@ -762,9 +835,10 @@ cpfl_stop_queues(struct rte_eth_dev *dev) void cpfl_set_rx_function(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; #ifdef RTE_ARCH_X86 - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i; if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH && @@ -790,8 +864,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - (void)idpf_qc_splitq_rx_vec_setup(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + (void)idpf_qc_splitq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT if (vport->rx_use_avx512) { @@ -810,8 +884,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) } else { if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - (void)idpf_qc_singleq_rx_vec_setup(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + (void)idpf_qc_singleq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT if (vport->rx_use_avx512) { @@ -860,10 +934,11 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) void cpfl_set_tx_function(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; #ifdef RTE_ARCH_X86 #ifdef CC_AVX512_SUPPORT - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; int i; #endif /* CC_AVX512_SUPPORT */ @@ -878,8 +953,8 @@ cpfl_set_tx_function(struct rte_eth_dev *dev) vport->tx_use_avx512 = true; if (vport->tx_use_avx512) { for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - idpf_qc_tx_vec_avx512_setup(txq); + cpfl_txq = dev->data->tx_queues[i]; + idpf_qc_tx_vec_avx512_setup(&cpfl_txq->base); } } } @@ -916,10 +991,10 @@ cpfl_set_tx_function(struct rte_eth_dev *dev) #ifdef CC_AVX512_SUPPORT if (vport->tx_use_avx512) { for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL) continue; - idpf_qc_tx_vec_avx512_setup(txq); + idpf_qc_tx_vec_avx512_setup(&cpfl_txq->base); } PMD_DRV_LOG(NOTICE, "Using Single AVX512 Vector Tx (port %d).", diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index fb267d38c8..bfb9ad97bd 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -23,6 +23,14 @@ #define CPFL_SUPPORT_CHAIN_NUM 5 +struct cpfl_rx_queue { + struct idpf_rx_queue base; +}; + +struct cpfl_tx_queue { + struct idpf_tx_queue base; +}; + int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h index 665418d27d..5690b17911 100644 --- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h +++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h @@ -76,15 +76,16 
@@ cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq) static inline int cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_rx_queue *rxq; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq; int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - default_ret = cpfl_rx_vec_queue_default(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + default_ret = cpfl_rx_vec_queue_default(&cpfl_rxq->base); if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { - splitq_ret = cpfl_rx_splitq_vec_default(rxq); + splitq_ret = cpfl_rx_splitq_vec_default(&cpfl_rxq->base); ret = splitq_ret && default_ret; } else { ret = default_ret; @@ -100,12 +101,12 @@ static inline int cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev) { int i; - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; int ret = 0; for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - ret = cpfl_tx_vec_queue_default(txq); + cpfl_txq = dev->data->tx_queues[i]; + ret = cpfl_tx_vec_queue_default(&cpfl_txq->base); if (ret == CPFL_SCALAR_PATH) return CPFL_SCALAR_PATH; }

From patchwork Mon Jun 5 09:06:29 2023
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 128103
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing
Subject: [PATCH v9 02/14] common/idpf: support queue groups add/delete
Date: Mon, 5 Jun 2023 09:06:29 +0000
Message-Id:
<20230605090641.36525-3-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com> References: <20230605061724.88130-1-beilei.xing@intel.com> <20230605090641.36525-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch adds queue group add/delete virtual channel support. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 66 ++++++++++++++++++++++ drivers/common/idpf/idpf_common_virtchnl.h | 9 +++ drivers/common/idpf/version.map | 2 + 3 files changed, 77 insertions(+) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index b713678634..a3fe55c897 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -359,6 +359,72 @@ idpf_vc_vport_destroy(struct idpf_vport *vport) return err; } +int +idpf_vc_queue_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *p2p_queue_grps_info, + uint8_t *p2p_queue_grps_out) +{ + struct idpf_adapter *adapter = vport->adapter; + struct idpf_cmd_info args; + int size, qg_info_size; + int err = -1; + + size = sizeof(*p2p_queue_grps_info) + + (p2p_queue_grps_info->qg_info.num_queue_groups - 1) * + sizeof(struct virtchnl2_queue_group_info); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_ADD_QUEUE_GROUPS; + args.in_args = (uint8_t *)p2p_queue_grps_info; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + if (err != 0) { + DRV_LOG(ERR, + "Failed to execute command of VIRTCHNL2_OP_ADD_QUEUE_GROUPS"); + return err; + } + + rte_memcpy(p2p_queue_grps_out, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE); + return 0; +} + +int idpf_vc_queue_grps_del(struct idpf_vport *vport, + uint16_t num_q_grps, + struct virtchnl2_queue_group_id *qg_ids) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_delete_queue_groups *vc_del_q_grps; + struct idpf_cmd_info args; + int size; + int err; + + size = sizeof(*vc_del_q_grps) + + (num_q_grps - 1) * sizeof(struct virtchnl2_queue_group_id); + vc_del_q_grps = rte_zmalloc("vc_del_q_grps", size, 0); + + vc_del_q_grps->vport_id = vport->vport_id; + vc_del_q_grps->num_queue_groups = num_q_grps; + memcpy(vc_del_q_grps->qg_ids, qg_ids, + num_q_grps * sizeof(struct virtchnl2_queue_group_id)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_DEL_QUEUE_GROUPS; + args.in_args = (uint8_t *)vc_del_q_grps; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DEL_QUEUE_GROUPS"); + + rte_free(vc_del_q_grps); + return err; +} + int idpf_vc_rss_key_set(struct idpf_vport *vport) { diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index c45295290e..58b16e1c5d 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -64,4 +64,13 @@ int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, __rte_internal int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 
*buff_count, struct idpf_dma_mem **buffs); +__rte_internal +int idpf_vc_queue_grps_del(struct idpf_vport *vport, + uint16_t num_q_grps, + struct virtchnl2_queue_group_id *qg_ids); +__rte_internal +int +idpf_vc_queue_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *ptp_queue_grps_info, + uint8_t *ptp_queue_grps_out); #endif /* _IDPF_COMMON_VIRTCHNL_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 70334a1b03..01d18f3f3f 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -43,6 +43,8 @@ INTERNAL { idpf_vc_irq_map_unmap_config; idpf_vc_one_msg_read; idpf_vc_ptype_info_query; + idpf_vc_queue_grps_add; + idpf_vc_queue_grps_del; idpf_vc_queue_switch; idpf_vc_queues_ena_dis; idpf_vc_rss_hash_get;

From patchwork Mon Jun 5 09:06:30 2023
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 128106
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing
Subject: [PATCH v9 03/14] net/cpfl: add hairpin queue group during vport init
Date: Mon, 5 Jun 2023 09:06:30 +0000
Message-Id: <20230605090641.36525-4-beilei.xing@intel.com>
In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com>
References: <20230605061724.88130-1-beilei.xing@intel.com> <20230605090641.36525-1-beilei.xing@intel.com>

From: Beilei Xing

This patch adds a hairpin queue group during
vport init. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 134 +++++++++++++++++++++++++++++++++ drivers/net/cpfl/cpfl_ethdev.h | 18 +++++ drivers/net/cpfl/cpfl_rxtx.h | 7 ++ 3 files changed, 159 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index e587155db6..7f34cd288c 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -840,6 +840,20 @@ cpfl_dev_stop(struct rte_eth_dev *dev) return 0; } +static int +cpfl_p2p_queue_grps_del(struct idpf_vport *vport) +{ + struct virtchnl2_queue_group_id qg_ids[CPFL_P2P_NB_QUEUE_GRPS] = {0}; + int ret = 0; + + qg_ids[0].queue_group_id = CPFL_P2P_QUEUE_GRP_ID; + qg_ids[0].queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P; + ret = idpf_vc_queue_grps_del(vport, CPFL_P2P_NB_QUEUE_GRPS, qg_ids); + if (ret) + PMD_DRV_LOG(ERR, "Failed to delete p2p queue groups"); + return ret; +} + static int cpfl_dev_close(struct rte_eth_dev *dev) { @@ -848,7 +862,12 @@ cpfl_dev_close(struct rte_eth_dev *dev) struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); + + if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) + cpfl_p2p_queue_grps_del(vport); + idpf_vport_deinit(vport); + rte_free(cpfl_vport->p2p_q_chunks_info); adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id); adapter->cur_vport_nb--; @@ -1284,6 +1303,96 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter) return vport_idx; } +static int +cpfl_p2p_q_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *p2p_queue_grps_info, + uint8_t *p2p_q_vc_out_info) +{ + int ret; + + p2p_queue_grps_info->vport_id = vport->vport_id; + p2p_queue_grps_info->qg_info.num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS; + p2p_queue_grps_info->qg_info.groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES; + p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ; + p2p_queue_grps_info->qg_info.groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES; + p2p_queue_grps_info->qg_info.groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ; + p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID; + p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P; + p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight = 0; + + ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info, p2p_q_vc_out_info); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to add p2p queue groups."); + return ret; + } + + return ret; +} + +static int +cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport, + struct virtchnl2_add_queue_groups *p2p_q_vc_out_info) +{ + struct p2p_queue_chunks_info *p2p_q_chunks_info = cpfl_vport->p2p_q_chunks_info; + struct virtchnl2_queue_reg_chunks *vc_chunks_out; + int i, type; + + if (p2p_q_vc_out_info->qg_info.groups[0].qg_id.queue_group_type != + VIRTCHNL2_QUEUE_GROUP_P2P) { + PMD_DRV_LOG(ERR, "Add queue group response mismatch."); + return -EINVAL; + } + + vc_chunks_out = &p2p_q_vc_out_info->qg_info.groups[0].chunks; + + for (i = 0; i < vc_chunks_out->num_chunks; i++) { + type = vc_chunks_out->chunks[i].type; + switch (type) { + case VIRTCHNL2_QUEUE_TYPE_TX: + p2p_q_chunks_info->tx_start_qid = + 
vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->tx_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->tx_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_RX: + p2p_q_chunks_info->rx_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->rx_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->rx_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + p2p_q_chunks_info->tx_compl_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->tx_compl_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->tx_compl_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + p2p_q_chunks_info->rx_buf_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->rx_buf_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->rx_buf_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + default: + PMD_DRV_LOG(ERR, "Unsupported queue type"); + break; + } + } + + return 0; +} + static int cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) { @@ -1293,6 +1402,8 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) struct cpfl_adapter_ext *adapter = param->adapter; /* for sending create vport virtchnl msg prepare */ struct virtchnl2_create_vport create_vport_info; + struct virtchnl2_add_queue_groups p2p_queue_grps_info; + uint8_t p2p_q_vc_out_info[IDPF_DFLT_MBX_BUF_SIZE] = {0}; int ret = 0; dev->dev_ops = &cpfl_eth_dev_ops; @@ -1327,6 +1438,29 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr, &dev->data->mac_addrs[0]); + if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) { + memset(&p2p_queue_grps_info, 0, sizeof(p2p_queue_grps_info)); + ret = cpfl_p2p_q_grps_add(vport, &p2p_queue_grps_info, p2p_q_vc_out_info); + if (ret != 0) { + PMD_INIT_LOG(WARNING, "Failed to add p2p queue group."); + return 0; + } + cpfl_vport->p2p_q_chunks_info = rte_zmalloc(NULL, + sizeof(struct p2p_queue_chunks_info), 0); + if (cpfl_vport->p2p_q_chunks_info == NULL) { + PMD_INIT_LOG(WARNING, "Failed to allocate p2p queue info."); + cpfl_p2p_queue_grps_del(vport); + return 0; + } + ret = cpfl_p2p_queue_info_init(cpfl_vport, + (struct virtchnl2_add_queue_groups *)p2p_q_vc_out_info); + if (ret != 0) { + PMD_INIT_LOG(WARNING, "Failed to init p2p queue info."); + rte_free(cpfl_vport->p2p_q_chunks_info); + cpfl_p2p_queue_grps_del(vport); + } + } + return 0; err_mac_addrs: diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 81fe9ac4c3..666d46a44a 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -56,6 +56,7 @@ /* Device IDs */ #define IDPF_DEV_ID_CPF 0x1453 +#define VIRTCHNL2_QUEUE_GROUP_P2P 0x100 struct cpfl_vport_param { struct cpfl_adapter_ext *adapter; @@ -69,8 +70,25 @@ struct cpfl_devargs { uint16_t req_vport_nb; }; +struct p2p_queue_chunks_info { + uint32_t tx_start_qid; + uint32_t rx_start_qid; + uint32_t tx_compl_start_qid; + uint32_t rx_buf_start_qid; + + uint64_t tx_qtail_start; + uint32_t tx_qtail_spacing; + uint64_t rx_qtail_start; + uint32_t rx_qtail_spacing; + uint64_t tx_compl_qtail_start; + uint32_t tx_compl_qtail_spacing; + uint64_t rx_buf_qtail_start; + uint32_t rx_buf_qtail_spacing; +}; + 
struct cpfl_vport { struct idpf_vport base; + struct p2p_queue_chunks_info *p2p_q_chunks_info; }; struct cpfl_adapter_ext { diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index bfb9ad97bd..1fe65778f0 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -13,6 +13,13 @@ #define CPFL_MIN_RING_DESC 32 #define CPFL_MAX_RING_DESC 4096 #define CPFL_DMA_MEM_ALIGN 4096 + +#define CPFL_MAX_P2P_NB_QUEUES 16 +#define CPFL_P2P_NB_RX_BUFQ 1 +#define CPFL_P2P_NB_TX_COMPLQ 1 +#define CPFL_P2P_NB_QUEUE_GRPS 1 +#define CPFL_P2P_QUEUE_GRP_ID 1 + /* Base address of the HW descriptor ring should be 128B aligned. */ #define CPFL_RING_BASE_ALIGN 128

From patchwork Mon Jun 5 09:06:31 2023
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 128105
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing, Xiao Wang
Subject: [PATCH v9 04/14] net/cpfl: support hairpin queue capability get
Date: Mon, 5 Jun 2023 09:06:31 +0000
Message-Id: <20230605090641.36525-5-beilei.xing@intel.com>
In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com>
References: <20230605061724.88130-1-beilei.xing@intel.com> <20230605090641.36525-1-beilei.xing@intel.com>

From: Beilei Xing

This patch adds hairpin_cap_get ops support.
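An application consumes this ops through the generic ethdev call rte_eth_dev_hairpin_capability_get(); the values it reports come from the CPFL_MAX_* limits added below, and -ENOTSUP is returned when the vport has no P2P queue group. A hedged application-side sketch (the check_hairpin_cap() helper is hypothetical):

#include <stdio.h>
#include <rte_ethdev.h>

/* hypothetical probe of the limits reported by cpfl_hairpin_cap_get() */
static int
check_hairpin_cap(uint16_t port_id)
{
	struct rte_eth_hairpin_cap cap;
	int ret;

	ret = rte_eth_dev_hairpin_capability_get(port_id, &cap);
	if (ret != 0)
		return ret;	/* -ENOTSUP when the vport has no P2P queue group */

	printf("max hairpin queues %u, rx->tx %u, tx->rx %u, max desc %u\n",
	       (unsigned int)cap.max_nb_queues, (unsigned int)cap.max_rx_2_tx,
	       (unsigned int)cap.max_tx_2_rx, (unsigned int)cap.max_nb_desc);
	return 0;
}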
Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 18 ++++++++++++++++++ drivers/net/cpfl/cpfl_rxtx.h | 3 +++ 2 files changed, 21 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 7f34cd288c..4a7e1124b1 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -154,6 +154,23 @@ cpfl_dev_link_update(struct rte_eth_dev *dev, return rte_eth_linkstatus_set(dev, &new_link); } +static int +cpfl_hairpin_cap_get(struct rte_eth_dev *dev, + struct rte_eth_hairpin_cap *cap) +{ + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + + if (cpfl_vport->p2p_q_chunks_info == NULL) + return -ENOTSUP; + + cap->max_nb_queues = CPFL_MAX_P2P_NB_QUEUES; + cap->max_rx_2_tx = CPFL_MAX_HAIRPINQ_RX_2_TX; + cap->max_tx_2_rx = CPFL_MAX_HAIRPINQ_TX_2_RX; + cap->max_nb_desc = CPFL_MAX_HAIRPINQ_NB_DESC; + + return 0; +} + static int cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { @@ -904,6 +921,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .xstats_get = cpfl_dev_xstats_get, .xstats_get_names = cpfl_dev_xstats_get_names, .xstats_reset = cpfl_dev_xstats_reset, + .hairpin_cap_get = cpfl_hairpin_cap_get, }; static int diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index 1fe65778f0..a4a164d462 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -14,6 +14,9 @@ #define CPFL_MAX_RING_DESC 4096 #define CPFL_DMA_MEM_ALIGN 4096 +#define CPFL_MAX_HAIRPINQ_RX_2_TX 1 +#define CPFL_MAX_HAIRPINQ_TX_2_RX 1 +#define CPFL_MAX_HAIRPINQ_NB_DESC 1024 #define CPFL_MAX_P2P_NB_QUEUES 16 #define CPFL_P2P_NB_RX_BUFQ 1 #define CPFL_P2P_NB_TX_COMPLQ 1 From patchwork Mon Jun 5 09:06:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128107 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 480C542C34; Mon, 5 Jun 2023 11:32:34 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8067C42D12; Mon, 5 Jun 2023 11:32:08 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 1010F427F2 for ; Mon, 5 Jun 2023 11:32:04 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685957525; x=1717493525; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=mWtpI6FEZgixj6mV/dbBcZD7d7YQ6U2LLyZQ4yZV0wA=; b=VGx+n6KRQ04VgFc3gQclH9Ai6pzpY9kmM3szESn3wBcICUgVxhCxFnGu H7IBukkQLTZJ2hS5KFonHkRxGj97iNUh4ZaOkWeqSjpY7kjbj3HzTZI/M Q72pM47aoirwEVZw+j8+49kMRvZvNX3xV56ihql971w6+PSjn4ILcJ6KJ k7VxQVj3Bn5e4td8MqnDK53mBUegN1P68ekp7QamchMaDUtH/qRb2NDuR yNfHh6ztqDiH5CkmYgnk+0mWju9GQeJA4cgURmVFrJe0eX7Cdz67wWVfW 3BqA36LNigLS5DIy49PgvvA1VVtwfh4uua/+TnTXhZJ9tOWf9JLEhCsmy w==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="355181063" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="355181063" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Jun 2023 02:32:03 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="741652052" X-IronPort-AV: 
E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="741652052" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga001.jf.intel.com with ESMTP; 05 Jun 2023 02:32:01 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v9 05/14] net/cpfl: support hairpin queue setup and release Date: Mon, 5 Jun 2023 09:06:32 +0000 Message-Id: <20230605090641.36525-6-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com> References: <20230605061724.88130-1-beilei.xing@intel.com> <20230605090641.36525-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing Support hairpin Rx/Tx queue setup and release. Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 6 + drivers/net/cpfl/cpfl_ethdev.h | 11 + drivers/net/cpfl/cpfl_rxtx.c | 364 +++++++++++++++++++++++- drivers/net/cpfl/cpfl_rxtx.h | 36 +++ drivers/net/cpfl/cpfl_rxtx_vec_common.h | 4 + 5 files changed, 420 insertions(+), 1 deletion(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 4a7e1124b1..d64b506038 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -879,6 +879,10 @@ cpfl_dev_close(struct rte_eth_dev *dev) struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); + if (cpfl_vport->p2p_mp) { + rte_mempool_free(cpfl_vport->p2p_mp); + cpfl_vport->p2p_mp = NULL; + } if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) cpfl_p2p_queue_grps_del(vport); @@ -922,6 +926,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .xstats_get_names = cpfl_dev_xstats_get_names, .xstats_reset = cpfl_dev_xstats_reset, .hairpin_cap_get = cpfl_hairpin_cap_get, + .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup, + .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup, }; static int diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 666d46a44a..2e42354f70 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -89,6 +89,17 @@ struct p2p_queue_chunks_info { struct cpfl_vport { struct idpf_vport base; struct p2p_queue_chunks_info *p2p_q_chunks_info; + + struct rte_mempool *p2p_mp; + + uint16_t nb_data_rxq; + uint16_t nb_data_txq; + uint16_t nb_p2p_rxq; + uint16_t nb_p2p_txq; + + struct idpf_rx_queue *p2p_rx_bufq; + struct idpf_tx_queue *p2p_tx_complq; + bool p2p_manual_bind; }; struct cpfl_adapter_ext { diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 04a51b8d15..90b408d1f4 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -10,6 +10,67 @@ #include "cpfl_rxtx.h" #include "cpfl_rxtx_vec_common.h" +static inline void +cpfl_tx_hairpin_descq_reset(struct idpf_tx_queue *txq) +{ + uint32_t i, size; + + if (!txq) { + PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL"); + return; + } + + size = txq->nb_tx_desc * CPFL_P2P_DESC_LEN; + for (i = 0; i < size; i++) + ((volatile char *)txq->desc_ring)[i] = 0; +} + +static inline void +cpfl_tx_hairpin_complq_reset(struct idpf_tx_queue *cq) +{ + uint32_t i, size; + + if (!cq) { + PMD_DRV_LOG(DEBUG, "Pointer to complq is NULL"); + return; + } + + size = cq->nb_tx_desc * 
CPFL_P2P_DESC_LEN; + for (i = 0; i < size; i++) + ((volatile char *)cq->compl_ring)[i] = 0; +} + +static inline void +cpfl_rx_hairpin_descq_reset(struct idpf_rx_queue *rxq) +{ + uint16_t len; + uint32_t i; + + if (!rxq) + return; + + len = rxq->nb_rx_desc; + for (i = 0; i < len * CPFL_P2P_DESC_LEN; i++) + ((volatile char *)rxq->rx_ring)[i] = 0; +} + +static inline void +cpfl_rx_hairpin_bufq_reset(struct idpf_rx_queue *rxbq) +{ + uint16_t len; + uint32_t i; + + if (!rxbq) + return; + + len = rxbq->nb_rx_desc; + for (i = 0; i < len * CPFL_P2P_DESC_LEN; i++) + ((volatile char *)rxbq->rx_ring)[i] = 0; + + rxbq->bufq1 = NULL; + rxbq->bufq2 = NULL; +} + static uint64_t cpfl_rx_offload_convert(uint64_t offload) { @@ -234,7 +295,10 @@ cpfl_rx_queue_release(void *rxq) /* Split queue */ if (!q->adapter->is_rx_singleq) { - if (q->bufq2) + /* the mz is shared between Tx/Rx hairpin, let Rx_release + * free the buf, q->bufq1->mz and q->mz. + */ + if (!cpfl_rxq->hairpin_info.hairpin_q && q->bufq2) cpfl_rx_split_bufq_release(q->bufq2); if (q->bufq1) @@ -385,6 +449,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, } } + cpfl_vport->nb_data_rxq++; rxq->q_set = true; dev->data->rx_queues[queue_idx] = cpfl_rxq; @@ -548,6 +613,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start + queue_idx * vport->chunks_info.tx_qtail_spacing); txq->ops = &def_txq_ops; + cpfl_vport->nb_data_txq++; txq->q_set = true; dev->data->tx_queues[queue_idx] = cpfl_txq; @@ -562,6 +628,300 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, return ret; } +static int +cpfl_rx_hairpin_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq, + uint16_t logic_qid, uint16_t nb_desc) +{ + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter = vport->adapter; + struct rte_mempool *mp; + char pool_name[RTE_MEMPOOL_NAMESIZE]; + + mp = cpfl_vport->p2p_mp; + if (!mp) { + snprintf(pool_name, RTE_MEMPOOL_NAMESIZE, "p2p_mb_pool_%u", + dev->data->port_id); + mp = rte_pktmbuf_pool_create(pool_name, CPFL_P2P_NB_MBUF * CPFL_MAX_P2P_NB_QUEUES, + CPFL_P2P_CACHE_SIZE, 0, CPFL_P2P_MBUF_SIZE, + dev->device->numa_node); + if (!mp) { + PMD_INIT_LOG(ERR, "Failed to allocate mbuf pool for p2p"); + return -ENOMEM; + } + cpfl_vport->p2p_mp = mp; + } + + bufq->mp = mp; + bufq->nb_rx_desc = nb_desc; + bufq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_buf_start_qid, + logic_qid); + bufq->port_id = dev->data->port_id; + bufq->adapter = adapter; + bufq->rx_buf_len = CPFL_P2P_MBUF_SIZE - RTE_PKTMBUF_HEADROOM; + + bufq->q_set = true; + bufq->ops = &def_rxq_ops; + + return 0; +} + +int +cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf) +{ + struct cpfl_vport *cpfl_vport = (struct cpfl_vport *)dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter_base = vport->adapter; + uint16_t logic_qid = cpfl_vport->nb_p2p_rxq; + struct cpfl_rxq_hairpin_info *hairpin_info; + struct cpfl_rx_queue *cpfl_rxq; + struct idpf_rx_queue *bufq1 = NULL; + struct idpf_rx_queue *rxq; + uint16_t peer_port, peer_q; + uint16_t qid; + int ret; + + if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { + PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin queue."); + return -EINVAL; + } + + if (conf->peer_count != 1) { + 
PMD_INIT_LOG(ERR, "Can't support Rx hairpin queue peer count %d", conf->peer_count); + return -EINVAL; + } + + peer_port = conf->peers[0].port; + peer_q = conf->peers[0].queue; + + if (nb_desc % CPFL_ALIGN_RING_DESC != 0 || + nb_desc > CPFL_MAX_RING_DESC || + nb_desc < CPFL_MIN_RING_DESC) { + PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is invalid", nb_desc); + return -EINVAL; + } + + /* Free memory if needed */ + if (dev->data->rx_queues[queue_idx]) { + cpfl_rx_queue_release(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; + } + + /* Setup Rx description queue */ + cpfl_rxq = rte_zmalloc_socket("cpfl hairpin rxq", + sizeof(struct cpfl_rx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!cpfl_rxq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure"); + return -ENOMEM; + } + + rxq = &cpfl_rxq->base; + hairpin_info = &cpfl_rxq->hairpin_info; + rxq->nb_rx_desc = nb_desc * 2; + rxq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_start_qid, logic_qid); + rxq->port_id = dev->data->port_id; + rxq->adapter = adapter_base; + rxq->rx_buf_len = CPFL_P2P_MBUF_SIZE - RTE_PKTMBUF_HEADROOM; + hairpin_info->hairpin_q = true; + hairpin_info->peer_txp = peer_port; + hairpin_info->peer_txq_id = peer_q; + + if (conf->manual_bind != 0) + cpfl_vport->p2p_manual_bind = true; + else + cpfl_vport->p2p_manual_bind = false; + + if (cpfl_vport->p2p_rx_bufq == NULL) { + bufq1 = rte_zmalloc_socket("hairpin rx bufq1", + sizeof(struct idpf_rx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!bufq1) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for hairpin Rx buffer queue 1."); + ret = -ENOMEM; + goto err_alloc_bufq1; + } + qid = 2 * logic_qid; + ret = cpfl_rx_hairpin_bufq_setup(dev, bufq1, qid, nb_desc); + if (ret) { + PMD_INIT_LOG(ERR, "Failed to setup hairpin Rx buffer queue 1"); + ret = -EINVAL; + goto err_setup_bufq1; + } + cpfl_vport->p2p_rx_bufq = bufq1; + } + + rxq->bufq1 = cpfl_vport->p2p_rx_bufq; + rxq->bufq2 = NULL; + + cpfl_vport->nb_p2p_rxq++; + rxq->q_set = true; + dev->data->rx_queues[queue_idx] = cpfl_rxq; + + return 0; + +err_setup_bufq1: + rte_mempool_free(cpfl_vport->p2p_mp); + rte_free(bufq1); +err_alloc_bufq1: + rte_free(cpfl_rxq); + + return ret; +} + +int +cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf) +{ + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; + + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter_base = vport->adapter; + uint16_t logic_qid = cpfl_vport->nb_p2p_txq; + struct cpfl_txq_hairpin_info *hairpin_info; + struct idpf_hw *hw = &adapter_base->hw; + struct cpfl_tx_queue *cpfl_txq; + struct idpf_tx_queue *txq, *cq; + const struct rte_memzone *mz; + uint32_t ring_size; + uint16_t peer_port, peer_q; + int ret; + + if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { + PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin queue."); + return -EINVAL; + } + + if (conf->peer_count != 1) { + PMD_INIT_LOG(ERR, "Can't support Tx hairpin queue peer count %d", conf->peer_count); + return -EINVAL; + } + + peer_port = conf->peers[0].port; + peer_q = conf->peers[0].queue; + + if (nb_desc % CPFL_ALIGN_RING_DESC != 0 || + nb_desc > CPFL_MAX_RING_DESC || + nb_desc < CPFL_MIN_RING_DESC) { + PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is invalid", + nb_desc); + return -EINVAL; + } + + /* Free memory if needed. 
*/ + if (dev->data->tx_queues[queue_idx]) { + cpfl_tx_queue_release(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; + } + + /* Allocate the TX queue data structure. */ + cpfl_txq = rte_zmalloc_socket("cpfl hairpin txq", + sizeof(struct cpfl_tx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!cpfl_txq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); + return -ENOMEM; + } + + txq = &cpfl_txq->base; + hairpin_info = &cpfl_txq->hairpin_info; + /* Txq ring length should be 2 times of Tx completion queue size. */ + txq->nb_tx_desc = nb_desc * 2; + txq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->tx_start_qid, logic_qid); + txq->port_id = dev->data->port_id; + hairpin_info->hairpin_q = true; + hairpin_info->peer_rxp = peer_port; + hairpin_info->peer_rxq_id = peer_q; + + if (conf->manual_bind != 0) + cpfl_vport->p2p_manual_bind = true; + else + cpfl_vport->p2p_manual_bind = false; + + /* Always Tx hairpin queue allocates Tx HW ring */ + ring_size = RTE_ALIGN(txq->nb_tx_desc * CPFL_P2P_DESC_LEN, + CPFL_DMA_MEM_ALIGN); + mz = rte_eth_dma_zone_reserve(dev, "hairpin_tx_ring", logic_qid, + ring_size + CPFL_P2P_RING_BUF, + CPFL_RING_BASE_ALIGN, + dev->device->numa_node); + if (!mz) { + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX"); + ret = -ENOMEM; + goto err_txq_mz_rsv; + } + + txq->tx_ring_phys_addr = mz->iova; + txq->desc_ring = mz->addr; + txq->mz = mz; + + cpfl_tx_hairpin_descq_reset(txq); + txq->qtx_tail = hw->hw_addr + + cpfl_hw_qtail_get(cpfl_vport->p2p_q_chunks_info->tx_qtail_start, + logic_qid, cpfl_vport->p2p_q_chunks_info->tx_qtail_spacing); + txq->ops = &def_txq_ops; + + if (cpfl_vport->p2p_tx_complq == NULL) { + cq = rte_zmalloc_socket("cpfl hairpin cq", + sizeof(struct idpf_tx_queue), + RTE_CACHE_LINE_SIZE, + dev->device->numa_node); + if (!cq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); + ret = -ENOMEM; + goto err_cq_alloc; + } + + cq->nb_tx_desc = nb_desc; + cq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->tx_compl_start_qid, + 0); + cq->port_id = dev->data->port_id; + + /* Tx completion queue always allocates the HW ring */ + ring_size = RTE_ALIGN(cq->nb_tx_desc * CPFL_P2P_DESC_LEN, + CPFL_DMA_MEM_ALIGN); + mz = rte_eth_dma_zone_reserve(dev, "hairpin_tx_compl_ring", logic_qid, + ring_size + CPFL_P2P_RING_BUF, + CPFL_RING_BASE_ALIGN, + dev->device->numa_node); + if (!mz) { + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX completion queue"); + ret = -ENOMEM; + goto err_cq_mz_rsv; + } + cq->tx_ring_phys_addr = mz->iova; + cq->compl_ring = mz->addr; + cq->mz = mz; + + cpfl_tx_hairpin_complq_reset(cq); + cpfl_vport->p2p_tx_complq = cq; + } + + txq->complq = cpfl_vport->p2p_tx_complq; + + cpfl_vport->nb_p2p_txq++; + txq->q_set = true; + dev->data->tx_queues[queue_idx] = cpfl_txq; + + return 0; + +err_cq_mz_rsv: + rte_free(cq); +err_cq_alloc: + cpfl_dma_zone_release(mz); +err_txq_mz_rsv: + rte_free(cpfl_txq); + return ret; +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { @@ -865,6 +1225,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q) + continue; (void)idpf_qc_splitq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index a4a164d462..06198d4aad 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ 
b/drivers/net/cpfl/cpfl_rxtx.h @@ -22,6 +22,11 @@ #define CPFL_P2P_NB_TX_COMPLQ 1 #define CPFL_P2P_NB_QUEUE_GRPS 1 #define CPFL_P2P_QUEUE_GRP_ID 1 +#define CPFL_P2P_DESC_LEN 16 +#define CPFL_P2P_NB_MBUF 4096 +#define CPFL_P2P_CACHE_SIZE 250 +#define CPFL_P2P_MBUF_SIZE 2048 +#define CPFL_P2P_RING_BUF 128 /* Base address of the HW descriptor ring should be 128B aligned. */ #define CPFL_RING_BASE_ALIGN 128 @@ -33,14 +38,40 @@ #define CPFL_SUPPORT_CHAIN_NUM 5 +struct cpfl_rxq_hairpin_info { + bool hairpin_q; /* if rx queue is a hairpin queue */ + uint16_t peer_txp; + uint16_t peer_txq_id; +}; + struct cpfl_rx_queue { struct idpf_rx_queue base; + struct cpfl_rxq_hairpin_info hairpin_info; +}; + +struct cpfl_txq_hairpin_info { + bool hairpin_q; /* if tx queue is a hairpin queue */ + uint16_t peer_rxp; + uint16_t peer_rxq_id; }; struct cpfl_tx_queue { struct idpf_tx_queue base; + struct cpfl_txq_hairpin_info hairpin_info; }; +static inline uint16_t +cpfl_hw_qid_get(uint16_t start_qid, uint16_t offset) +{ + return start_qid + offset; +} + +static inline uint64_t +cpfl_hw_qtail_get(uint64_t tail_start, uint16_t offset, uint64_t tail_spacing) +{ + return tail_start + offset * tail_spacing; +} + int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); @@ -59,4 +90,9 @@ void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid); void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid); void cpfl_set_rx_function(struct rte_eth_dev *dev); void cpfl_set_tx_function(struct rte_eth_dev *dev); +int cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, const struct rte_eth_hairpin_conf *conf); +int cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf); #endif /* _CPFL_RXTX_H_ */ diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h index 5690b17911..d8e9191196 100644 --- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h +++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h @@ -85,6 +85,8 @@ cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev) cpfl_rxq = dev->data->rx_queues[i]; default_ret = cpfl_rx_vec_queue_default(&cpfl_rxq->base); if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { + if (cpfl_rxq->hairpin_info.hairpin_q) + continue; splitq_ret = cpfl_rx_splitq_vec_default(&cpfl_rxq->base); ret = splitq_ret && default_ret; } else { @@ -106,6 +108,8 @@ cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev) for (i = 0; i < dev->data->nb_tx_queues; i++) { cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq->hairpin_info.hairpin_q) + continue; ret = cpfl_tx_vec_queue_default(&cpfl_txq->base); if (ret == CPFL_SCALAR_PATH) return CPFL_SCALAR_PATH; From patchwork Mon Jun 5 09:06:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128108 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6EEBB42C34; Mon, 5 Jun 2023 11:32:44 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0339042D52; Mon, 5 Jun 2023 11:32:10 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org 
(Postfix) with ESMTP id 12A6742D31; Mon, 5 Jun 2023 11:32:05 +0200 (CEST)
From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing
Subject: [PATCH v9 06/14] common/idpf: add queue config API Date: Mon, 5 Jun 2023 09:06:33 +0000
Message-Id: <20230605090641.36525-7-beilei.xing@intel.com> In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com> References: <20230605061724.88130-1-beilei.xing@intel.com> <20230605090641.36525-1-beilei.xing@intel.com>
From: Beilei Xing This patch supports Rx/Tx queue configuration APIs.
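The new helpers take an array of virtchnl2_rxq_info/virtchnl2_txq_info that the caller has already filled, so a consumer PMD can program queues the common init path does not know about (for example hairpin queues). Below is a minimal usage sketch only, not part of the patch: the wrapper name and include set are illustrative, and the field choices follow how the later cpfl patches in this series fill the info structure.

#include <string.h>
#include <idpf_common_device.h>
#include <idpf_common_virtchnl.h>

/* Sketch: push one caller-built Rx queue context to the control plane.
 * Pass num_qs > 1 with an array of infos to batch several queues in one
 * virtchnl message.
 */
static int
example_rxq_config_by_info(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
{
	struct virtchnl2_rxq_info rxq_info;

	memset(&rxq_info, 0, sizeof(rxq_info));
	rxq_info.type = VIRTCHNL2_QUEUE_TYPE_RX;
	rxq_info.queue_id = rxq->queue_id;
	rxq_info.ring_len = rxq->nb_rx_desc;
	rxq_info.dma_ring_addr = rxq->rx_ring_phys_addr;
	rxq_info.data_buffer_size = rxq->rx_buf_len;
	rxq_info.model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
	rxq_info.desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;

	return idpf_vc_rxq_config_by_info(vport, &rxq_info, 1);
}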
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 70 ++++++++++++++++++++++ drivers/common/idpf/idpf_common_virtchnl.h | 6 ++ drivers/common/idpf/version.map | 2 + 3 files changed, 78 insertions(+) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index a3fe55c897..211b44a88e 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -1050,6 +1050,41 @@ idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq) return err; } +int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info, + uint16_t num_qs) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_config_rx_queues *vc_rxqs = NULL; + struct idpf_cmd_info args; + int size, err, i; + + size = sizeof(*vc_rxqs) + (num_qs - 1) * + sizeof(struct virtchnl2_rxq_info); + vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0); + if (vc_rxqs == NULL) { + DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues"); + err = -ENOMEM; + return err; + } + vc_rxqs->vport_id = vport->vport_id; + vc_rxqs->num_qinfo = num_qs; + memcpy(vc_rxqs->qinfo, rxq_info, num_qs * sizeof(struct virtchnl2_rxq_info)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES; + args.in_args = (uint8_t *)vc_rxqs; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + rte_free(vc_rxqs); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES"); + + return err; +} + int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq) { @@ -1121,6 +1156,41 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq) return err; } +int +idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, + uint16_t num_qs) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_config_tx_queues *vc_txqs = NULL; + struct idpf_cmd_info args; + int size, err; + + size = sizeof(*vc_txqs) + (num_qs - 1) * sizeof(struct virtchnl2_txq_info); + vc_txqs = rte_zmalloc("cfg_txqs", size, 0); + if (vc_txqs == NULL) { + DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues"); + err = -ENOMEM; + return err; + } + vc_txqs->vport_id = vport->vport_id; + vc_txqs->num_qinfo = num_qs; + memcpy(vc_txqs->qinfo, txq_info, num_qs * sizeof(struct virtchnl2_txq_info)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES; + args.in_args = (uint8_t *)vc_txqs; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + rte_free(vc_txqs); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES"); + + return err; +} + int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, struct idpf_ctlq_msg *q_msg) diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index 58b16e1c5d..db83761a5e 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -65,6 +65,12 @@ __rte_internal int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 *buff_count, struct idpf_dma_mem **buffs); __rte_internal +int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info, + uint16_t 
num_qs); +__rte_internal +int idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, + uint16_t num_qs); +__rte_internal int idpf_vc_queue_grps_del(struct idpf_vport *vport, uint16_t num_q_grps, struct virtchnl2_queue_group_id *qg_ids); diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 01d18f3f3f..17e77884ce 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -54,8 +54,10 @@ INTERNAL { idpf_vc_rss_lut_get; idpf_vc_rss_lut_set; idpf_vc_rxq_config; + idpf_vc_rxq_config_by_info; idpf_vc_stats_query; idpf_vc_txq_config; + idpf_vc_txq_config_by_info; idpf_vc_vectors_alloc; idpf_vc_vectors_dealloc; idpf_vc_vport_create; From patchwork Mon Jun 5 09:06:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128109 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E0DF342C34; Mon, 5 Jun 2023 11:32:50 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2958042D56; Mon, 5 Jun 2023 11:32:11 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 1D9F742D40 for ; Mon, 5 Jun 2023 11:32:07 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685957528; x=1717493528; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=BYDtVvYo4GpzCWqOajJTQ3H9j4vZkCIM2mEJlsQf0R0=; b=BXlTo+cPn+qk0SqpXObiSiolwdmdSlkxQz2fzUhq0iBbAjTx1I9N46p+ jKsoVoGTPYVPGwoMLN/6nc9nDgTkTgdwaovimeF4ZUyUd1Auerh79sUuf xzh0PTdtavzPRnaxC86bCcU1UwIGH2M5XRyE3o/9w2XrKfzEVkatEP5ze l6CrVbPxbISxkIHfFGFhXNVpLtC2LVcWa4wogpKHw3UIDZ3IdEFVi6U4b YD0CtHM0ghetvdeV0CC+Xcv53bxtUAHD212MqTIMCBsdG3M3oOXvsKT+z 1NTQ6l03VUzU3FRjO3//V75wVEhxKz9YJUqDanuLvfvGJsDKZbDOrIs2T Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="355181081" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="355181081" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Jun 2023 02:32:07 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="741652093" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="741652093" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga001.jf.intel.com with ESMTP; 05 Jun 2023 02:32:05 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v9 07/14] net/cpfl: support hairpin queue configuration Date: Mon, 5 Jun 2023 09:06:34 +0000 Message-Id: <20230605090641.36525-8-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com> References: <20230605061724.88130-1-beilei.xing@intel.com> <20230605090641.36525-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports Rx/Tx hairpin queue configuration. 
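From the application side, these hairpin queues are created through the standard ethdev hairpin API before dev_start; the configuration added in this patch then runs when the port starts. The following is only a sketch with illustrative port/queue numbers and descriptor count, using automatic (non-manual) binding.

#include <string.h>
#include <rte_ethdev.h>

/* Sketch: pair Rx hairpin queue rx_q on rx_port with Tx hairpin queue
 * tx_q on tx_port. cpfl supports one peer per hairpin queue.
 */
static int
example_hairpin_queue_pair_setup(uint16_t rx_port, uint16_t rx_q,
				 uint16_t tx_port, uint16_t tx_q,
				 uint16_t nb_desc)
{
	struct rte_eth_hairpin_conf conf;
	int ret;

	memset(&conf, 0, sizeof(conf));
	conf.peer_count = 1;
	conf.manual_bind = 0;	/* binding happens inside dev_start */

	conf.peers[0].port = tx_port;
	conf.peers[0].queue = tx_q;
	ret = rte_eth_rx_hairpin_queue_setup(rx_port, rx_q, nb_desc, &conf);
	if (ret != 0)
		return ret;

	conf.peers[0].port = rx_port;
	conf.peers[0].queue = rx_q;
	return rte_eth_tx_hairpin_queue_setup(tx_port, tx_q, nb_desc, &conf);
}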
Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 136 +++++++++++++++++++++++++++++++-- drivers/net/cpfl/cpfl_rxtx.c | 88 +++++++++++++++++++++ drivers/net/cpfl/cpfl_rxtx.h | 7 ++ 3 files changed, 225 insertions(+), 6 deletions(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index d64b506038..0696c6bc68 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -742,33 +742,157 @@ cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) return idpf_vport_irq_map_config(vport, nb_rx_queues); } +/* Update hairpin_info for dev's tx hairpin queue */ +static int +cpfl_txq_hairpin_info_update(struct rte_eth_dev *dev, uint16_t rx_port) +{ + struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private; + struct rte_eth_dev *peer_dev = &rte_eth_devices[rx_port]; + struct cpfl_vport *cpfl_rx_vport = peer_dev->data->dev_private; + struct cpfl_txq_hairpin_info *hairpin_info; + struct cpfl_tx_queue *cpfl_txq; + int i; + + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + hairpin_info = &cpfl_txq->hairpin_info; + if (hairpin_info->peer_rxp != rx_port) { + PMD_DRV_LOG(ERR, "port %d is not the peer port", rx_port); + return -EINVAL; + } + hairpin_info->peer_rxq_id = + cpfl_hw_qid_get(cpfl_rx_vport->p2p_q_chunks_info->rx_start_qid, + hairpin_info->peer_rxq_id - cpfl_rx_vport->nb_data_rxq); + } + + return 0; +} + +/* Bind Rx hairpin queue's memory zone to peer Tx hairpin queue's memory zone */ +static void +cpfl_rxq_hairpin_mz_bind(struct rte_eth_dev *dev) +{ + struct cpfl_vport *cpfl_rx_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_rx_vport->base; + struct idpf_adapter *adapter = vport->adapter; + struct idpf_hw *hw = &adapter->hw; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; + struct rte_eth_dev *peer_dev; + const struct rte_memzone *mz; + uint16_t peer_tx_port; + uint16_t peer_tx_qid; + int i; + + for (i = cpfl_rx_vport->nb_data_rxq; i < dev->data->nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + peer_tx_port = cpfl_rxq->hairpin_info.peer_txp; + peer_tx_qid = cpfl_rxq->hairpin_info.peer_txq_id; + peer_dev = &rte_eth_devices[peer_tx_port]; + cpfl_txq = peer_dev->data->tx_queues[peer_tx_qid]; + + /* bind rx queue */ + mz = cpfl_txq->base.mz; + cpfl_rxq->base.rx_ring_phys_addr = mz->iova; + cpfl_rxq->base.rx_ring = mz->addr; + cpfl_rxq->base.mz = mz; + + /* bind rx buffer queue */ + mz = cpfl_txq->base.complq->mz; + cpfl_rxq->base.bufq1->rx_ring_phys_addr = mz->iova; + cpfl_rxq->base.bufq1->rx_ring = mz->addr; + cpfl_rxq->base.bufq1->mz = mz; + cpfl_rxq->base.bufq1->qrx_tail = hw->hw_addr + + cpfl_hw_qtail_get(cpfl_rx_vport->p2p_q_chunks_info->rx_buf_qtail_start, + 0, cpfl_rx_vport->p2p_q_chunks_info->rx_buf_qtail_spacing); + } +} + static int cpfl_start_queues(struct rte_eth_dev *dev) { + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_rx_queue *cpfl_rxq; struct cpfl_tx_queue *cpfl_txq; + int update_flag = 0; int err = 0; int i; + /* For normal data queues, configure, init and enale Txq. + * For non-manual bind hairpin queues, configure Txq. 
+ */ for (i = 0; i < dev->data->nb_tx_queues; i++) { cpfl_txq = dev->data->tx_queues[i]; if (cpfl_txq == NULL || cpfl_txq->base.tx_deferred_start) continue; - err = cpfl_tx_queue_start(dev, i); + if (!cpfl_txq->hairpin_info.hairpin_q) { + err = cpfl_tx_queue_start(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i); + return err; + } + } else if (!cpfl_vport->p2p_manual_bind) { + if (update_flag == 0) { + err = cpfl_txq_hairpin_info_update(dev, + cpfl_txq->hairpin_info.peer_rxp); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to update Tx hairpin queue info"); + return err; + } + update_flag = 1; + } + err = cpfl_hairpin_txq_config(vport, cpfl_txq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u", i); + return err; + } + } + } + + /* For non-manual bind hairpin queues, configure Tx completion queue first.*/ + if (!cpfl_vport->p2p_manual_bind && cpfl_vport->p2p_tx_complq != NULL) { + err = cpfl_hairpin_tx_complq_config(cpfl_vport); if (err != 0) { - PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i); + PMD_DRV_LOG(ERR, "Fail to config Tx completion queue"); return err; } } + /* For non-manual bind hairpin queues, configure Rx buffer queue.*/ + if (!cpfl_vport->p2p_manual_bind && cpfl_vport->p2p_rx_bufq != NULL) { + cpfl_rxq_hairpin_mz_bind(dev); + err = cpfl_hairpin_rx_bufq_config(cpfl_vport); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to config Rx buffer queue"); + return err; + } + } + + /* For normal data queues, configure, init and enale Rxq. + * For non-manual bind hairpin queues, configure Rxq, and then init Rxq. + */ for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; if (cpfl_rxq == NULL || cpfl_rxq->base.rx_deferred_start) continue; - err = cpfl_rx_queue_start(dev, i); - if (err != 0) { - PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i); - return err; + if (!cpfl_rxq->hairpin_info.hairpin_q) { + err = cpfl_rx_queue_start(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i); + return err; + } + } else if (!cpfl_vport->p2p_manual_bind) { + err = cpfl_hairpin_rxq_config(vport, cpfl_rxq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Rx queue %u", i); + return err; + } + err = cpfl_rx_queue_init(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to init hairpin Rx queue %u", i); + return err; + } } } diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 90b408d1f4..fd24d544a1 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -922,6 +922,94 @@ cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, return ret; } +int +cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport) +{ + struct idpf_rx_queue *rx_bufq = cpfl_vport->p2p_rx_bufq; + struct virtchnl2_rxq_info rxq_info; + + memset(&rxq_info, 0, sizeof(rxq_info)); + + rxq_info.type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; + rxq_info.queue_id = rx_bufq->queue_id; + rxq_info.ring_len = rx_bufq->nb_rx_desc; + rxq_info.dma_ring_addr = rx_bufq->rx_ring_phys_addr; + rxq_info.desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M; + rxq_info.rx_buffer_low_watermark = CPFL_RXBUF_LOW_WATERMARK; + rxq_info.model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + rxq_info.data_buffer_size = rx_bufq->rx_buf_len; + rxq_info.buffer_notif_stride = CPFL_RX_BUF_STRIDE; + + return idpf_vc_rxq_config_by_info(&cpfl_vport->base, &rxq_info, 1); +} + +int +cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq) +{ + struct virtchnl2_rxq_info rxq_info; + struct idpf_rx_queue 
*rxq = &cpfl_rxq->base; + + memset(&rxq_info, 0, sizeof(rxq_info)); + + rxq_info.type = VIRTCHNL2_QUEUE_TYPE_RX; + rxq_info.queue_id = rxq->queue_id; + rxq_info.ring_len = rxq->nb_rx_desc; + rxq_info.dma_ring_addr = rxq->rx_ring_phys_addr; + rxq_info.rx_bufq1_id = rxq->bufq1->queue_id; + rxq_info.max_pkt_size = vport->max_pkt_len; + rxq_info.desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M; + rxq_info.qflags |= VIRTCHNL2_RX_DESC_SIZE_16BYTE; + + rxq_info.data_buffer_size = rxq->rx_buf_len; + rxq_info.model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + rxq_info.rx_buffer_low_watermark = CPFL_RXBUF_LOW_WATERMARK; + + PMD_DRV_LOG(NOTICE, "hairpin: vport %u, Rxq id 0x%x", + vport->vport_id, rxq_info.queue_id); + + return idpf_vc_rxq_config_by_info(vport, &rxq_info, 1); +} + +int +cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport) +{ + struct idpf_tx_queue *tx_complq = cpfl_vport->p2p_tx_complq; + struct virtchnl2_txq_info txq_info; + + memset(&txq_info, 0, sizeof(txq_info)); + + txq_info.dma_ring_addr = tx_complq->tx_ring_phys_addr; + txq_info.type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + txq_info.queue_id = tx_complq->queue_id; + txq_info.ring_len = tx_complq->nb_tx_desc; + txq_info.peer_rx_queue_id = cpfl_vport->p2p_rx_bufq->queue_id; + txq_info.model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + txq_info.sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW; + + return idpf_vc_txq_config_by_info(&cpfl_vport->base, &txq_info, 1); +} + +int +cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq) +{ + struct idpf_tx_queue *txq = &cpfl_txq->base; + struct virtchnl2_txq_info txq_info; + + memset(&txq_info, 0, sizeof(txq_info)); + + txq_info.dma_ring_addr = txq->tx_ring_phys_addr; + txq_info.type = VIRTCHNL2_QUEUE_TYPE_TX; + txq_info.queue_id = txq->queue_id; + txq_info.ring_len = txq->nb_tx_desc; + txq_info.tx_compl_queue_id = txq->complq->queue_id; + txq_info.relative_queue_id = txq->queue_id; + txq_info.peer_rx_queue_id = cpfl_txq->hairpin_info.peer_rxq_id; + txq_info.model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + txq_info.sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW; + + return idpf_vc_txq_config_by_info(vport, &txq_info, 1); +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index 06198d4aad..872ebc1bfd 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -32,12 +32,15 @@ #define CPFL_RING_BASE_ALIGN 128 #define CPFL_DEFAULT_RX_FREE_THRESH 32 +#define CPFL_RXBUF_LOW_WATERMARK 64 #define CPFL_DEFAULT_TX_RS_THRESH 32 #define CPFL_DEFAULT_TX_FREE_THRESH 32 #define CPFL_SUPPORT_CHAIN_NUM 5 +#define CPFL_RX_BUF_STRIDE 64 + struct cpfl_rxq_hairpin_info { bool hairpin_q; /* if rx queue is a hairpin queue */ uint16_t peer_txp; @@ -95,4 +98,8 @@ int cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, int cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, const struct rte_eth_hairpin_conf *conf); +int cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport); +int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq); +int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport); +int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq); #endif /* _CPFL_RXTX_H_ */ From patchwork Mon Jun 5 09:06:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128110 X-Patchwork-Delegate: 
qi.z.zhang@intel.com
From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing
Subject: [PATCH v9 08/14] common/idpf: add switch queue API Date: Mon, 5 Jun 2023 09:06:35 +0000
Message-Id: <20230605090641.36525-9-beilei.xing@intel.com> In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com> References: <20230605061724.88130-1-beilei.xing@intel.com> <20230605090641.36525-1-beilei.xing@intel.com>
From: Beilei Xing This patch adds idpf_vc_ena_dis_one_queue API.
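Exporting idpf_vc_ena_dis_one_queue lets a consumer driver enable or disable a single queue by absolute queue id and queue type, which the hairpin path needs because those queues are not covered by the bulk idpf_vc_queue_switch flow. A rough usage sketch only; the wrapper name and include set are illustrative, and the hardware queue id is assumed to come from the p2p queue chunk info as in the later cpfl patches.

#include <stdbool.h>
#include <idpf_common_device.h>
#include <idpf_common_virtchnl.h>

/* Sketch: turn one hairpin Rx queue on or off by absolute HW queue id. */
static int
example_switch_one_hairpin_rxq(struct idpf_vport *vport, uint16_t hw_qid, bool on)
{
	return idpf_vc_ena_dis_one_queue(vport, hw_qid,
					 VIRTCHNL2_QUEUE_TYPE_RX, on);
}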
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 2 +- drivers/common/idpf/idpf_common_virtchnl.h | 3 +++ drivers/common/idpf/version.map | 1 + 3 files changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index 211b44a88e..6455f640da 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -733,7 +733,7 @@ idpf_vc_vectors_dealloc(struct idpf_vport *vport) return err; } -static int +int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid, uint32_t type, bool on) { diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index db83761a5e..9ff5c38c26 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -71,6 +71,9 @@ __rte_internal int idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, uint16_t num_qs); __rte_internal +int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid, + uint32_t type, bool on); +__rte_internal int idpf_vc_queue_grps_del(struct idpf_vport *vport, uint16_t num_q_grps, struct virtchnl2_queue_group_id *qg_ids); diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 17e77884ce..25624732b0 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -40,6 +40,7 @@ INTERNAL { idpf_vc_cmd_execute; idpf_vc_ctlq_post_rx_buffs; idpf_vc_ctlq_recv; + idpf_vc_ena_dis_one_queue; idpf_vc_irq_map_unmap_config; idpf_vc_one_msg_read; idpf_vc_ptype_info_query; From patchwork Mon Jun 5 09:06:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128111 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 284D042C34; Mon, 5 Jun 2023 11:33:02 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6B11142D63; Mon, 5 Jun 2023 11:32:13 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 816DE42D5C for ; Mon, 5 Jun 2023 11:32:11 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685957531; x=1717493531; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=UKzI+dBQ3OJeLiK2B3DmxPgfeeKTkA7g3pJGxbi3Ljg=; b=mdV5KiSApV9WKBw4bXWBfENmJgWnYR8WPDTV0cNE8ciYRMKxWJEw6WVJ LF8mGCcw83wXOGk3gZytcKmdZi5XSo/yDNUiVli2ysZK/dnHZ08Ipxxir 2Sw230jQfYTbwrn2FkmJLK9QgBm3nEdMMAT4nAJDITKgitL3BnWQoTkaB vLugVN9ExuAr5mi1hwNsUzhfAOKMqTYE83yMT0+ZfnFesZ9xuUJ4v5bYP Hes7DxRHzhUAv+0VOAZTgLvrpQk+afyQiZ0Ua5EyAf/fZDSY8tQnenSUg j3SZrl/K6yUrWyBbtT53F+dI7QFz4PCFmvFdESd3E1XGLE8ZFSfvZIUvz g==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="355181103" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="355181103" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Jun 2023 02:32:11 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="741652135" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="741652135" 
Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga001.jf.intel.com with ESMTP; 05 Jun 2023 02:32:09 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v9 09/14] net/cpfl: support hairpin queue start/stop Date: Mon, 5 Jun 2023 09:06:36 +0000 Message-Id: <20230605090641.36525-10-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com> References: <20230605061724.88130-1-beilei.xing@intel.com> <20230605090641.36525-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports Rx/Tx hairpin queue start/stop. Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 46 +++++++++ drivers/net/cpfl/cpfl_rxtx.c | 164 +++++++++++++++++++++++++++++---- drivers/net/cpfl/cpfl_rxtx.h | 15 +++ 3 files changed, 207 insertions(+), 18 deletions(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 0696c6bc68..48e956f151 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -896,6 +896,52 @@ cpfl_start_queues(struct rte_eth_dev *dev) } } + /* For non-manual bind hairpin queues, enable Tx queue and Rx queue, + * then enable Tx completion queue and Rx buffer queue. + */ + for (i = cpfl_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq->hairpin_info.hairpin_q && !cpfl_vport->p2p_manual_bind) { + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + i - cpfl_vport->nb_data_txq, + false, true); + if (err) + PMD_DRV_LOG(ERR, "Failed to switch hairpin TX queue %u on", + i); + else + cpfl_txq->base.q_started = true; + } + } + + for (i = cpfl_vport->nb_data_rxq; i < dev->data->nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q && !cpfl_vport->p2p_manual_bind) { + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + i - cpfl_vport->nb_data_rxq, + true, true); + if (err) + PMD_DRV_LOG(ERR, "Failed to switch hairpin RX queue %u on", + i); + else + cpfl_rxq->base.q_started = true; + } + } + + if (!cpfl_vport->p2p_manual_bind && + cpfl_vport->p2p_tx_complq != NULL && + cpfl_vport->p2p_rx_bufq != NULL) { + err = cpfl_switch_hairpin_complq(cpfl_vport, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin Tx complq"); + return err; + } + err = cpfl_switch_hairpin_bufq(cpfl_vport, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin Rx bufq"); + return err; + } + } + return err; } diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index fd24d544a1..9d278dca54 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -1010,6 +1010,89 @@ cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq return idpf_vc_txq_config_by_info(vport, &txq_info, 1); } +int +cpfl_switch_hairpin_complq(struct cpfl_vport *cpfl_vport, bool on) +{ + struct idpf_vport *vport = &cpfl_vport->base; + uint32_t type; + int err, queue_id; + + type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + queue_id = cpfl_vport->p2p_tx_complq->queue_id; + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + + return err; +} + +int 
+cpfl_switch_hairpin_bufq(struct cpfl_vport *cpfl_vport, bool on) +{ + struct idpf_vport *vport = &cpfl_vport->base; + uint32_t type; + int err, queue_id; + + type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; + queue_id = cpfl_vport->p2p_rx_bufq->queue_id; + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + + return err; +} + +int +cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t logic_qid, + bool rx, bool on) +{ + struct idpf_vport *vport = &cpfl_vport->base; + uint32_t type; + int err, queue_id; + + type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX; + + if (type == VIRTCHNL2_QUEUE_TYPE_RX) + queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_start_qid, logic_qid); + else + queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->tx_start_qid, logic_qid); + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + if (err) + return err; + + return err; +} + +static int +cpfl_alloc_split_p2p_rxq_mbufs(struct idpf_rx_queue *rxq) +{ + volatile struct virtchnl2_p2p_rx_buf_desc *rxd; + struct rte_mbuf *mbuf = NULL; + uint64_t dma_addr; + uint16_t i; + + for (i = 0; i < rxq->nb_rx_desc; i++) { + mbuf = rte_mbuf_raw_alloc(rxq->mp); + if (unlikely(!mbuf)) { + PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX"); + return -ENOMEM; + } + + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + + rxd = &((volatile struct virtchnl2_p2p_rx_buf_desc *)(rxq->rx_ring))[i]; + rxd->reserve0 = 0; + rxd->pkt_addr = dma_addr; + } + + rxq->nb_rx_hold = 0; + /* The value written in the RX buffer queue tail register, must be a multiple of 8.*/ + rxq->rx_tail = rxq->nb_rx_desc - CPFL_HAIRPIN_Q_TAIL_AUX_VALUE; + + return 0; +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { @@ -1063,22 +1146,31 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); } else { /* Split queue */ - err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1); - if (err != 0) { - PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); - return err; - } - err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2); - if (err != 0) { - PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); - return err; + if (cpfl_rxq->hairpin_info.hairpin_q) { + err = cpfl_alloc_split_p2p_rxq_mbufs(rxq->bufq1); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate p2p RX buffer queue mbuf"); + return err; + } + } else { + err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); + return err; + } + err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); + return err; + } } rte_wmb(); /* Init the RX tail register. 
*/ IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail); - IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail); + if (rxq->bufq2) + IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail); } return err; @@ -1185,7 +1277,12 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) return -EINVAL; cpfl_rxq = dev->data->rx_queues[rx_queue_id]; - err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); + if (cpfl_rxq->hairpin_info.hairpin_q) + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + rx_queue_id - cpfl_vport->nb_data_txq, + true, false); + else + err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", rx_queue_id); @@ -1199,10 +1296,17 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) idpf_qc_single_rx_queue_reset(rxq); } else { rxq->bufq1->ops->release_mbufs(rxq->bufq1); - rxq->bufq2->ops->release_mbufs(rxq->bufq2); - idpf_qc_split_rx_queue_reset(rxq); + if (rxq->bufq2) + rxq->bufq2->ops->release_mbufs(rxq->bufq2); + if (cpfl_rxq->hairpin_info.hairpin_q) { + cpfl_rx_hairpin_descq_reset(rxq); + cpfl_rx_hairpin_bufq_reset(rxq->bufq1); + } else { + idpf_qc_split_rx_queue_reset(rxq); + } } - dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + if (!cpfl_rxq->hairpin_info.hairpin_q) + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; } @@ -1221,7 +1325,12 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) cpfl_txq = dev->data->tx_queues[tx_queue_id]; - err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); + if (cpfl_txq->hairpin_info.hairpin_q) + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + tx_queue_id - cpfl_vport->nb_data_txq, + false, false); + else + err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", tx_queue_id); @@ -1234,10 +1343,17 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { idpf_qc_single_tx_queue_reset(txq); } else { - idpf_qc_split_tx_descq_reset(txq); - idpf_qc_split_tx_complq_reset(txq->complq); + if (cpfl_txq->hairpin_info.hairpin_q) { + cpfl_tx_hairpin_descq_reset(txq); + cpfl_tx_hairpin_complq_reset(txq->complq); + } else { + idpf_qc_split_tx_descq_reset(txq); + idpf_qc_split_tx_complq_reset(txq->complq); + } } - dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + + if (!cpfl_txq->hairpin_info.hairpin_q) + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; } @@ -1257,10 +1373,22 @@ cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid) void cpfl_stop_queues(struct rte_eth_dev *dev) { + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; struct cpfl_rx_queue *cpfl_rxq; struct cpfl_tx_queue *cpfl_txq; int i; + if (cpfl_vport->p2p_tx_complq != NULL) { + if (cpfl_switch_hairpin_complq(cpfl_vport, false) != 0) + PMD_DRV_LOG(ERR, "Failed to stop hairpin Tx complq"); + } + + if (cpfl_vport->p2p_rx_bufq != NULL) { + if (cpfl_switch_hairpin_bufq(cpfl_vport, false) != 0) + PMD_DRV_LOG(ERR, "Failed to stop hairpin Rx bufq"); + } + for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; if (cpfl_rxq == NULL) diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index 872ebc1bfd..aacd087b56 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -41,6 +41,17 @@ 
#define CPFL_RX_BUF_STRIDE 64 +/* The value written in the RX buffer queue tail register, + * and in WritePTR field in the TX completion queue context, + * must be a multiple of 8. + */ +#define CPFL_HAIRPIN_Q_TAIL_AUX_VALUE 8 + +struct virtchnl2_p2p_rx_buf_desc { + __le64 reserve0; + __le64 pkt_addr; /* Packet buffer address */ +}; + struct cpfl_rxq_hairpin_info { bool hairpin_q; /* if rx queue is a hairpin queue */ uint16_t peer_txp; @@ -102,4 +113,8 @@ int cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport); int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq); int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport); int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq); +int cpfl_switch_hairpin_complq(struct cpfl_vport *cpfl_vport, bool on); +int cpfl_switch_hairpin_bufq(struct cpfl_vport *cpfl_vport, bool on); +int cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t qid, + bool rx, bool on); #endif /* _CPFL_RXTX_H_ */ From patchwork Mon Jun 5 09:06:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128112 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C67E642C34; Mon, 5 Jun 2023 11:33:09 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E7B5442D6C; Mon, 5 Jun 2023 11:32:15 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 45B3D42D35 for ; Mon, 5 Jun 2023 11:32:13 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685957533; x=1717493533; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=AgdOhh+iIeT2R0RNmGL0vg8twvdrqp6FQBooT3cjfX8=; b=bw7kVz9sUta1tPz9OB6a7m3XESIUc8Qgl0+k5EFp98RpAs9t8+TqdR6P Wl59s4jPzoiuvrXO5WDaQI1SdDPW3SYAAxcb1FGvp6Y12ryNDQVX227fi oFfDZj0EljcBntb39M/x8zmTJYQQ4FxQ0Mf1M3Tn49N+Y1emzYWpasQLZ jeOaL3AAfujpWSlCRLYoE220GEHOJ7KADy8Jngq+o0PLtcfHKvHF890W3 AmgWPDW9twc0WXP0tyGfn9++kc9qZygZH/LD1q04V8LEyr+htNd0vu+zE 8uDVT7kYcdk61bdLE2SG2S2s8lEEC/xqJlxdCbdcb97flJT0PFwc2hXdG g==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="355181112" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="355181112" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Jun 2023 02:32:12 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="741652159" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="741652159" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga001.jf.intel.com with ESMTP; 05 Jun 2023 02:32:11 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v9 10/14] common/idpf: add irq map config API Date: Mon, 5 Jun 2023 09:06:37 +0000 Message-Id: <20230605090641.36525-11-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com> References: <20230605061724.88130-1-beilei.xing@intel.com> <20230605090641.36525-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 
Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports idpf_vport_irq_map_config_by_qids API. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_device.c | 75 ++++++++++++++++++++++++ drivers/common/idpf/idpf_common_device.h | 4 ++ drivers/common/idpf/version.map | 1 + 3 files changed, 80 insertions(+) diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c index dc47551b17..cc4207a46e 100644 --- a/drivers/common/idpf/idpf_common_device.c +++ b/drivers/common/idpf/idpf_common_device.c @@ -667,6 +667,81 @@ idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues) return ret; } +int +idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, uint32_t *qids, uint16_t nb_rx_queues) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_queue_vector *qv_map; + struct idpf_hw *hw = &adapter->hw; + uint32_t dynctl_val, itrn_val; + uint32_t dynctl_reg_start; + uint32_t itrn_reg_start; + uint16_t i; + int ret; + + qv_map = rte_zmalloc("qv_map", + nb_rx_queues * + sizeof(struct virtchnl2_queue_vector), 0); + if (qv_map == NULL) { + DRV_LOG(ERR, "Failed to allocate %d queue-vector map", + nb_rx_queues); + ret = -ENOMEM; + goto qv_map_alloc_err; + } + + /* Rx interrupt disabled, Map interrupt only for writeback */ + + /* The capability flags adapter->caps.other_caps should be + * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if + * condition should be updated when the FW can return the + * correct flag bits. + */ + dynctl_reg_start = + vport->recv_vectors->vchunks.vchunks->dynctl_reg_start; + itrn_reg_start = + vport->recv_vectors->vchunks.vchunks->itrn_reg_start; + dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start); + DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x", dynctl_val); + itrn_val = IDPF_READ_REG(hw, itrn_reg_start); + DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val); + /* Force write-backs by setting WB_ON_ITR bit in DYN_CTL + * register. WB_ON_ITR and INTENA are mutually exclusive + * bits. Setting WB_ON_ITR bits means TX and RX Descs + * are written back based on ITR expiration irrespective + * of INTENA setting. + */ + /* TBD: need to tune INTERVAL value for better performance. */ + itrn_val = (itrn_val == 0) ? 
IDPF_DFLT_INTERVAL : itrn_val; + dynctl_val = VIRTCHNL2_ITR_IDX_0 << + PF_GLINT_DYN_CTL_ITR_INDX_S | + PF_GLINT_DYN_CTL_WB_ON_ITR_M | + itrn_val << PF_GLINT_DYN_CTL_INTERVAL_S; + IDPF_WRITE_REG(hw, dynctl_reg_start, dynctl_val); + + for (i = 0; i < nb_rx_queues; i++) { + /* map all queues to the same vector */ + qv_map[i].queue_id = qids[i]; + qv_map[i].vector_id = + vport->recv_vectors->vchunks.vchunks->start_vector_id; + } + vport->qv_map = qv_map; + + ret = idpf_vc_irq_map_unmap_config(vport, nb_rx_queues, true); + if (ret != 0) { + DRV_LOG(ERR, "config interrupt mapping failed"); + goto config_irq_map_err; + } + + return 0; + +config_irq_map_err: + rte_free(vport->qv_map); + vport->qv_map = NULL; + +qv_map_alloc_err: + return ret; +} + int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues) { diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h index 112367dae8..f767ea7cec 100644 --- a/drivers/common/idpf/idpf_common_device.h +++ b/drivers/common/idpf/idpf_common_device.h @@ -200,5 +200,9 @@ int idpf_vport_info_init(struct idpf_vport *vport, struct virtchnl2_create_vport *vport_info); __rte_internal void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes); +__rte_internal +int idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, + uint32_t *qids, + uint16_t nb_rx_queues); #endif /* _IDPF_COMMON_DEVICE_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 25624732b0..0729f6b912 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -69,6 +69,7 @@ INTERNAL { idpf_vport_info_init; idpf_vport_init; idpf_vport_irq_map_config; + idpf_vport_irq_map_config_by_qids; idpf_vport_irq_unmap_config; idpf_vport_rss_config; idpf_vport_stats_update; From patchwork Mon Jun 5 09:06:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128113 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7915E42C34; Mon, 5 Jun 2023 11:33:16 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0FE5542D8B; Mon, 5 Jun 2023 11:32:17 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 83D0542D6C for ; Mon, 5 Jun 2023 11:32:14 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685957534; x=1717493534; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=YMkNKy87h8Koory9oHyRpDGsAQ9Qlxkb6wVSIREM+nA=; b=S2FBmYB7BD98lA0tfpdkHm3Iq3erGDmglVmm1CJtxvPVL4fg2uBK7hup maAWYNoQNcqOUFbIbMSmvbL5quuVz1NRkGS72+x+/pSpv/HEyeMRbuvVl s9JYiZjkVxwCgWZW3o9u2Ggf9nPI0PJDnIGPHXtyjATp5LUup+A+7U5IG Cs/pmmt7vECtjFuCXVDx0K+yuzz/Dv8ljot27YwOZdTEP0nyWYUmuRVN7 +nZVMCFYjHFcloZdJtWvZxyry6tn3KCthuTrg5qyt8uQ3VtfLrI/Dcnyv wiffBuB0TrPDm65v1sV4LZC7EGlONuX1nPj9AwDla0a2eHZdHuUPfItWt A==; X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="355181120" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="355181120" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Jun 2023 02:32:14 
-0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10731"; a="741652175" X-IronPort-AV: E=Sophos;i="6.00,217,1681196400"; d="scan'208";a="741652175" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga001.jf.intel.com with ESMTP; 05 Jun 2023 02:32:12 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v9 11/14] net/cpfl: enable write back based on ITR expire Date: Mon, 5 Jun 2023 09:06:38 +0000 Message-Id: <20230605090641.36525-12-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com> References: <20230605061724.88130-1-beilei.xing@intel.com> <20230605090641.36525-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch enables write back based on ITR expire (WR_ON_ITR) for hairpin queues. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 48e956f151..4502f04130 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -735,11 +735,22 @@ cpfl_dev_configure(struct rte_eth_dev *dev) static int cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) { + uint32_t qids[CPFL_MAX_P2P_NB_QUEUES + IDPF_DEFAULT_RXQ_NUM] = {0}; struct cpfl_vport *cpfl_vport = dev->data->dev_private; struct idpf_vport *vport = &cpfl_vport->base; uint16_t nb_rx_queues = dev->data->nb_rx_queues; + struct cpfl_rx_queue *cpfl_rxq; + int i; - return idpf_vport_irq_map_config(vport, nb_rx_queues); + for (i = 0; i < nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q) + qids[i] = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_start_qid, + (i - cpfl_vport->nb_data_rxq)); + else + qids[i] = cpfl_hw_qid_get(vport->chunks_info.rx_start_qid, i); + } + return idpf_vport_irq_map_config_by_qids(vport, qids, nb_rx_queues); } /* Update hairpin_info for dev's tx hairpin queue */ From patchwork Mon Jun 5 09:06:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 128114 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 76FBF42C34; Mon, 5 Jun 2023 11:33:23 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3836442D73; Mon, 5 Jun 2023 11:32:18 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by mails.dpdk.org (Postfix) with ESMTP id 2596342D6D for ; Mon, 5 Jun 2023 11:32:15 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685957536; x=1717493536; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=rg41lREyObey5R/ykI08Cp5jqHlheTrV+L5+HBG0mo8=; b=jEKkE1RueWouwNJgufnE7F5mKaYQPRRYV0XlSu4DokTJ2e6Sa87rOnby sBCXODvh3fP9943Ug7LhVhf8/Tgdbqqhco/PHSHyjJA9hpWc1xY0mA7Q9 kOv6nTnjAEWrmHZzRT84rBsYoglPVVDPxrtTtrxEMIH11UEyIHPyh+nF8 
From patchwork Mon Jun 5 09:06:39 2023
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 128114
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing, Xiao Wang
Subject: [PATCH v9 12/14] net/cpfl: support peer ports get
Date: Mon, 5 Jun 2023 09:06:39 +0000
Message-Id: <20230605090641.36525-13-beilei.xing@intel.com>
In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com>
References: <20230605061724.88130-1-beilei.xing@intel.com>
 <20230605090641.36525-1-beilei.xing@intel.com>

From: Beilei Xing

This patch supports getting hairpin peer ports.

Signed-off-by: Xiao Wang
Signed-off-by: Beilei Xing
---
 drivers/net/cpfl/cpfl_ethdev.c | 41 ++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 4502f04130..49d1b8b58b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -1080,6 +1080,46 @@ cpfl_dev_close(struct rte_eth_dev *dev)
         return 0;
 }
 
+static int
+cpfl_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
+                            size_t len, uint32_t tx)
+{
+        struct cpfl_vport *cpfl_vport =
+                (struct cpfl_vport *)dev->data->dev_private;
+        struct idpf_tx_queue *txq;
+        struct idpf_rx_queue *rxq;
+        struct cpfl_tx_queue *cpfl_txq;
+        struct cpfl_rx_queue *cpfl_rxq;
+        int i;
+        int j = 0;
+
+        if (len <= 0)
+                return -EINVAL;
+
+        if (cpfl_vport->p2p_q_chunks_info == NULL)
+                return -ENOTSUP;
+
+        if (tx > 0) {
+                for (i = cpfl_vport->nb_data_txq, j = 0; i < dev->data->nb_tx_queues; i++, j++) {
+                        txq = dev->data->tx_queues[i];
+                        if (txq == NULL)
+                                return -EINVAL;
+                        cpfl_txq = (struct cpfl_tx_queue *)txq;
+                        peer_ports[j] = cpfl_txq->hairpin_info.peer_rxp;
+                }
+        } else if (tx == 0) {
+                for (i = cpfl_vport->nb_data_rxq, j = 0; i < dev->data->nb_rx_queues; i++, j++) {
+                        rxq = dev->data->rx_queues[i];
+                        if (rxq == NULL)
+                                return -EINVAL;
+                        cpfl_rxq = (struct cpfl_rx_queue *)rxq;
+                        peer_ports[j] = cpfl_rxq->hairpin_info.peer_txp;
+                }
+        }
+
+        return j;
+}
+
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
         .dev_configure = cpfl_dev_configure,
         .dev_close = cpfl_dev_close,
@@ -1109,6 +1149,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
         .hairpin_cap_get = cpfl_hairpin_cap_get,
         .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup,
         .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup,
+        .hairpin_get_peer_ports = cpfl_hairpin_get_peer_ports,
 };
 
 static int
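Applications reach this op through the generic rte_eth_hairpin_get_peer_ports() ethdev call. A minimal usage sketch, assuming the port already has its hairpin queues set up; the function name and error handling here are illustrative only:

#include <stdio.h>
#include <rte_ethdev.h>

/* List the peer ports of this port's Tx hairpin queues. */
static void
show_tx_hairpin_peers(uint16_t port_id)
{
        uint16_t peers[RTE_MAX_ETHPORTS];
        int n, i;

        /* Non-zero direction: treat port_id as the Tx side, return its Rx peers. */
        n = rte_eth_hairpin_get_peer_ports(port_id, peers, RTE_MAX_ETHPORTS, 1);
        if (n < 0) {
                printf("port %u: hairpin peer query failed (%d)\n", port_id, n);
                return;
        }
        for (i = 0; i < n; i++)
                printf("port %u Tx hairpin peer[%d] = port %u\n", port_id, i, peers[i]);
}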
From patchwork Mon Jun 5 09:06:40 2023
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 128115
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing, Xiao Wang
Subject: [PATCH v9 13/14] net/cpfl: support hairpin bind/unbind
Date: Mon, 5 Jun 2023 09:06:40 +0000
Message-Id: <20230605090641.36525-14-beilei.xing@intel.com>
In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com>
References: <20230605061724.88130-1-beilei.xing@intel.com>
 <20230605090641.36525-1-beilei.xing@intel.com>

From: Beilei Xing

This patch supports hairpin_bind/unbind ops.
Signed-off-by: Xiao Wang
Signed-off-by: Beilei Xing
---
 drivers/net/cpfl/cpfl_ethdev.c | 137 +++++++++++++++++++++++++++++++++
 1 file changed, 137 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 49d1b8b58b..ac97622a15 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -1120,6 +1120,141 @@ cpfl_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
         return j;
 }
 
+static int
+cpfl_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+        struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private;
+        struct idpf_vport *tx_vport = &cpfl_tx_vport->base;
+        struct cpfl_vport *cpfl_rx_vport;
+        struct cpfl_tx_queue *cpfl_txq;
+        struct cpfl_rx_queue *cpfl_rxq;
+        struct rte_eth_dev *peer_dev;
+        struct idpf_vport *rx_vport;
+        int err = 0;
+        int i;
+
+        err = cpfl_txq_hairpin_info_update(dev, rx_port);
+        if (err != 0) {
+                PMD_DRV_LOG(ERR, "Fail to update Tx hairpin queue info.");
+                return err;
+        }
+
+        /* configure hairpin queues */
+        for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) {
+                cpfl_txq = dev->data->tx_queues[i];
+                err = cpfl_hairpin_txq_config(tx_vport, cpfl_txq);
+                if (err != 0) {
+                        PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u", i);
+                        return err;
+                }
+        }
+
+        err = cpfl_hairpin_tx_complq_config(cpfl_tx_vport);
+        if (err != 0) {
+                PMD_DRV_LOG(ERR, "Fail to config Tx completion queue");
+                return err;
+        }
+
+        peer_dev = &rte_eth_devices[rx_port];
+        cpfl_rx_vport = (struct cpfl_vport *)peer_dev->data->dev_private;
+        rx_vport = &cpfl_rx_vport->base;
+        cpfl_rxq_hairpin_mz_bind(peer_dev);
+
+        err = cpfl_hairpin_rx_bufq_config(cpfl_rx_vport);
+        if (err != 0) {
+                PMD_DRV_LOG(ERR, "Fail to config Rx buffer queue");
+                return err;
+        }
+
+        for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) {
+                cpfl_rxq = peer_dev->data->rx_queues[i];
+                err = cpfl_hairpin_rxq_config(rx_vport, cpfl_rxq);
+                if (err != 0) {
+                        PMD_DRV_LOG(ERR, "Fail to configure hairpin Rx queue %u", i);
+                        return err;
+                }
+                err = cpfl_rx_queue_init(peer_dev, i);
+                if (err != 0) {
+                        PMD_DRV_LOG(ERR, "Fail to init hairpin Rx queue %u", i);
+                        return err;
+                }
+        }
+
+        /* enable hairpin queues */
+        for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) {
+                cpfl_txq = dev->data->tx_queues[i];
+                err = cpfl_switch_hairpin_rxtx_queue(cpfl_tx_vport,
+                                                     i - cpfl_tx_vport->nb_data_txq,
+                                                     false, true);
+                if (err != 0) {
+                        PMD_DRV_LOG(ERR, "Failed to switch hairpin TX queue %u on",
+                                    i);
+                        return err;
+                }
+                cpfl_txq->base.q_started = true;
+        }
+
+        err = cpfl_switch_hairpin_complq(cpfl_tx_vport, true);
+        if (err != 0) {
+                PMD_DRV_LOG(ERR, "Failed to switch hairpin Tx complq");
+                return err;
+        }
+
+        for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) {
+                cpfl_rxq = peer_dev->data->rx_queues[i];
+                err = cpfl_switch_hairpin_rxtx_queue(cpfl_rx_vport,
+                                                     i - cpfl_rx_vport->nb_data_rxq,
+                                                     true, true);
+                if (err != 0) {
+                        PMD_DRV_LOG(ERR, "Failed to switch hairpin RX queue %u on",
+                                    i);
+                }
+                cpfl_rxq->base.q_started = true;
+        }
+
+        err = cpfl_switch_hairpin_bufq(cpfl_rx_vport, true);
+        if (err != 0) {
+                PMD_DRV_LOG(ERR, "Failed to switch hairpin Rx buffer queue");
+                return err;
+        }
+
+        return 0;
+}
+
+static int
+cpfl_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+        struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private;
+        struct rte_eth_dev *peer_dev = &rte_eth_devices[rx_port];
+        struct cpfl_vport *cpfl_rx_vport = peer_dev->data->dev_private;
+        struct cpfl_tx_queue *cpfl_txq;
+        struct cpfl_rx_queue *cpfl_rxq;
+        int i;
+
+        /* disable hairpin queues */
+        for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) {
+                cpfl_txq = dev->data->tx_queues[i];
+                cpfl_switch_hairpin_rxtx_queue(cpfl_tx_vport,
+                                               i - cpfl_tx_vport->nb_data_txq,
+                                               false, false);
+                cpfl_txq->base.q_started = false;
+        }
+
+        cpfl_switch_hairpin_complq(cpfl_tx_vport, false);
+
+        for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) {
+                cpfl_rxq = peer_dev->data->rx_queues[i];
+                cpfl_switch_hairpin_rxtx_queue(cpfl_rx_vport,
+                                               i - cpfl_rx_vport->nb_data_rxq,
+                                               true, false);
+                cpfl_rxq->base.q_started = false;
+        }
+
+        cpfl_switch_hairpin_bufq(cpfl_rx_vport, false);
+
+        return 0;
+}
+
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
         .dev_configure = cpfl_dev_configure,
         .dev_close = cpfl_dev_close,
@@ -1150,6 +1285,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
         .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup,
         .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup,
         .hairpin_get_peer_ports = cpfl_hairpin_get_peer_ports,
+        .hairpin_bind = cpfl_hairpin_bind,
+        .hairpin_unbind = cpfl_hairpin_unbind,
 };
 
 static int
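These two ops back the generic rte_eth_hairpin_bind()/rte_eth_hairpin_unbind() ethdev API, used when hairpin queues are set up with manual binding. A minimal sketch of the call order from the application side; with this PMD only single-port hairpin is supported, so the Tx and Rx port are the same port:

#include <rte_ethdev.h>

/* Bind the hairpin path from tx_port towards rx_port and later undo it.
 * For cpfl single-port hairpin, tx_port == rx_port.
 */
static int
hairpin_bind_example(uint16_t tx_port, uint16_t rx_port)
{
        int ret;

        ret = rte_eth_hairpin_bind(tx_port, rx_port);
        if (ret != 0)
                return ret;

        /* ... pass traffic through the hairpin queues ... */

        return rte_eth_hairpin_unbind(tx_port, rx_port);
}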
From patchwork Mon Jun 5 09:06:41 2023
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 128116
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing
Subject: [PATCH v9 14/14] doc: update the doc of CPFL PMD
Date: Mon, 5 Jun 2023 09:06:41 +0000
Message-Id: <20230605090641.36525-15-beilei.xing@intel.com>
In-Reply-To: <20230605090641.36525-1-beilei.xing@intel.com>
References: <20230605061724.88130-1-beilei.xing@intel.com>
 <20230605090641.36525-1-beilei.xing@intel.com>

From: Beilei Xing

Update cpfl.rst to clarify hairpin support.

Signed-off-by: Beilei Xing
---
 doc/guides/nics/cpfl.rst | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index d25db088eb..8d5c3082e4 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -106,3 +106,10 @@ The paths are chosen based on 2 conditions:
 A value "P" means the offload feature is not supported by vector path.
 If any not supported features are used, cpfl vector PMD is disabled
 and the scalar paths are chosen.
+
+Hairpin queue
+~~~~~~~~~~~~~
+
+E2100 Series can loopback packets from RX port to TX port, this feature is
+called port-to-port or hairpin.
+Currently, the PMD only supports single port hairpin.
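For completeness, a minimal application-side sketch of the single-port hairpin setup the new doc section describes: one Rx and one Tx hairpin queue peering with each other on the same port. Queue indexes and descriptor counts are illustrative, and the regular queue and device configuration is assumed to happen elsewhere:

#include <rte_ethdev.h>

/* Set up one Rx/Tx hairpin queue pair on the same port (single-port hairpin).
 * hp_rxq/hp_txq are queue indexes reserved after the regular data queues.
 */
static int
setup_single_port_hairpin(uint16_t port_id, uint16_t hp_rxq, uint16_t hp_txq)
{
        struct rte_eth_hairpin_conf conf = { .peer_count = 1 };
        int ret;

        /* The Rx hairpin queue peers with the Tx hairpin queue of the same port. */
        conf.peers[0].port = port_id;
        conf.peers[0].queue = hp_txq;
        ret = rte_eth_rx_hairpin_queue_setup(port_id, hp_rxq, 512, &conf);
        if (ret != 0)
                return ret;

        /* And the Tx hairpin queue peers back with the Rx hairpin queue. */
        conf.peers[0].port = port_id;
        conf.peers[0].queue = hp_rxq;
        return rte_eth_tx_hairpin_queue_setup(port_id, hp_txq, 512, &conf);
}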