From patchwork Fri Apr 21 06:50:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 126345 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6B8F2429A9; Fri, 21 Apr 2023 09:13:59 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9BDCB41156; Fri, 21 Apr 2023 09:13:55 +0200 (CEST) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 39F57410DD for ; Fri, 21 Apr 2023 09:13:52 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1682061233; x=1713597233; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=I0+3lO/1KDJs2KT/65ipDy0lXWiaUqFH8Cu73UgH4fI=; b=FQbUGxf0ABApYXlrSri9l6UU7nsaA4MsIvLkGkL89lqmHh3/Z2rpoOGs qRCHwiEvKIABm9B/sZ+w6pki6lX/8sBw+/EqLiI9F4fqMoQMWYLp+x3Ai DaNcFscY62oW2JRpUksIaNVJbm68WrcF0/bzhvZHAwblW6cvw4fjMwcsm 7mfrDRFbIAs/FpeWKn/6McwG8uDlzN63Oi8uswXKXhghfn5RPqMqE1WQi EspE6K0NTiT59zuZZCJr2d2stn5BSIs4VwNETsRHKZZaJaySBmoH+e13K MHBsss3PRmpDmrKHIIXYfk0bMXJQuO149FPmjV9W5obcNkLZ9VulkS0sd Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="326260036" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="326260036" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2023 00:13:52 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="722669099" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="722669099" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga008.jf.intel.com with ESMTP; 21 Apr 2023 00:13:50 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH 01/10] net/cpfl: refine structures Date: Fri, 21 Apr 2023 06:50:39 +0000 Message-Id: <20230421065048.106899-2-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230421065048.106899-1-beilei.xing@intel.com> References: <20230421065048.106899-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch refines some structures to support hairpin queue, cpfl_rx_queue/cpfl_tx_queue/cpfl_vport. 
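The refinement follows a containment pattern: each cpfl structure embeds the corresponding idpf structure as its first member, so the ethdev private data can be viewed as either type and cpfl-specific state (hairpin bookkeeping in the later patches of this series) can be added without touching the common idpf code. A minimal sketch of the pattern as used throughout this patch (the handler name below is illustrative; the structure layout and the idpf_vport_irq_map_config() call are taken from the diff):

struct cpfl_vport {
        struct idpf_vport base; /* shared idpf state, kept as the first member */
        /* cpfl-only fields (e.g. hairpin bookkeeping) are added by later patches */
};

/* illustrative ethdev op: unwrap the cpfl container, hand the base to idpf */
static int
cpfl_example_irq_config(struct rte_eth_dev *dev)
{
        struct cpfl_vport *cpfl_vport = dev->data->dev_private;
        struct idpf_vport *vport = &cpfl_vport->base;

        return idpf_vport_irq_map_config(vport, dev->data->nb_rx_queues);
}
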
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 85 +++++++----- drivers/net/cpfl/cpfl_ethdev.h | 6 +- drivers/net/cpfl/cpfl_rxtx.c | 175 +++++++++++++++++------- drivers/net/cpfl/cpfl_rxtx.h | 8 ++ drivers/net/cpfl/cpfl_rxtx_vec_common.h | 17 +-- 5 files changed, 196 insertions(+), 95 deletions(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 306b8ad769..4a507f05d5 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -124,7 +124,8 @@ static int cpfl_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct rte_eth_link new_link; unsigned int i; @@ -156,7 +157,8 @@ cpfl_dev_link_update(struct rte_eth_dev *dev, static int cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; dev_info->max_rx_queues = base->caps.max_rx_q; @@ -216,7 +218,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; /* mtu setting is forbidden if port is start */ if (dev->data->dev_started) { @@ -256,12 +259,12 @@ static uint64_t cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) { uint64_t mbuf_alloc_failed = 0; - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i = 0; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed, + cpfl_rxq = dev->data->rx_queues[i]; + mbuf_alloc_failed += __atomic_load_n(&cpfl_rxq->base.rx_stats.mbuf_alloc_failed, __ATOMIC_RELAXED); } @@ -271,8 +274,8 @@ cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) static int cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; int ret; @@ -305,20 +308,20 @@ cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) static void cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - __atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED); + cpfl_rxq = dev->data->rx_queues[i]; + __atomic_store_n(&cpfl_rxq->base.rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED); } } static int cpfl_dev_stats_reset(struct rte_eth_dev *dev) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; int ret; @@ -343,8 +346,8 @@ static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev) static int cpfl_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int 
n) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; unsigned int i; int ret; @@ -459,7 +462,8 @@ cpfl_rss_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t idx, shift; int ret = 0; @@ -498,7 +502,8 @@ cpfl_rss_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t idx, shift; int ret = 0; @@ -536,7 +541,8 @@ static int cpfl_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; int ret = 0; @@ -601,7 +607,8 @@ static int cpfl_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; int ret = 0; @@ -638,7 +645,8 @@ cpfl_rss_hash_conf_get(struct rte_eth_dev *dev, static int cpfl_dev_configure(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct rte_eth_conf *conf = &dev->data->dev_conf; struct idpf_adapter *base = vport->adapter; int ret; @@ -710,7 +718,8 @@ cpfl_dev_configure(struct rte_eth_dev *dev) static int cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; uint16_t nb_rx_queues = dev->data->nb_rx_queues; return idpf_vport_irq_map_config(vport, nb_rx_queues); @@ -719,14 +728,14 @@ cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) static int cpfl_start_queues(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; - struct idpf_tx_queue *txq; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; int err = 0; int i; for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL || txq->tx_deferred_start) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL || cpfl_txq->base.tx_deferred_start) continue; err = cpfl_tx_queue_start(dev, i); if (err != 0) { @@ -736,8 +745,8 @@ cpfl_start_queues(struct rte_eth_dev *dev) } for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - if (rxq == NULL || rxq->rx_deferred_start) + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq == NULL || cpfl_rxq->base.rx_deferred_start) continue; err = cpfl_rx_queue_start(dev, i); if (err != 0) { @@ -752,7 +761,8 @@ cpfl_start_queues(struct rte_eth_dev *dev) static int cpfl_dev_start(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = 
dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base); uint16_t num_allocated_vectors = base->caps.num_allocated_vectors; @@ -815,7 +825,8 @@ cpfl_dev_start(struct rte_eth_dev *dev) static int cpfl_dev_stop(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; if (vport->stopped == 1) return 0; @@ -836,7 +847,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev) static int cpfl_dev_close(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); @@ -846,7 +858,7 @@ cpfl_dev_close(struct rte_eth_dev *dev) adapter->cur_vport_nb--; dev->data->dev_private = NULL; adapter->vports[vport->sw_idx] = NULL; - rte_free(vport); + rte_free(cpfl_vport); return 0; } @@ -1051,7 +1063,7 @@ cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id) int i; for (i = 0; i < adapter->cur_vport_nb; i++) { - vport = adapter->vports[i]; + vport = &adapter->vports[i]->base; if (vport->vport_id != vport_id) continue; else @@ -1328,7 +1340,8 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter) static int cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_vport_param *param = init_params; struct cpfl_adapter_ext *adapter = param->adapter; /* for sending create vport virtchnl msg prepare */ @@ -1354,7 +1367,7 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) goto err; } - adapter->vports[param->idx] = vport; + adapter->vports[param->idx] = cpfl_vport; adapter->cur_vports |= RTE_BIT32(param->devarg_id); adapter->cur_vport_nb++; @@ -1470,7 +1483,7 @@ cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, snprintf(name, sizeof(name), "cpfl_%s_vport_0", pci_dev->device.name); retval = rte_eth_dev_create(&pci_dev->device, name, - sizeof(struct idpf_vport), + sizeof(struct cpfl_vport), NULL, NULL, cpfl_dev_vport_init, &vport_param); if (retval != 0) @@ -1488,7 +1501,7 @@ cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, pci_dev->device.name, devargs.req_vports[i]); retval = rte_eth_dev_create(&pci_dev->device, name, - sizeof(struct idpf_vport), + sizeof(struct cpfl_vport), NULL, NULL, cpfl_dev_vport_init, &vport_param); if (retval != 0) diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 200dfcac02..81fe9ac4c3 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -69,13 +69,17 @@ struct cpfl_devargs { uint16_t req_vport_nb; }; +struct cpfl_vport { + struct idpf_vport base; +}; + struct cpfl_adapter_ext { TAILQ_ENTRY(cpfl_adapter_ext) next; struct idpf_adapter base; char name[CPFL_ADAPTER_NAME_LEN]; - struct idpf_vport **vports; + struct cpfl_vport **vports; uint16_t max_vport_nb; uint16_t cur_vports; /* bit mask of created vport */ diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index de59b31b3d..a441e2ffbe 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -128,7 +128,8 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, 
struct idpf_rx_queue *rxq, uint16_t nb_desc, unsigned int socket_id, struct rte_mempool *mp, uint8_t bufq_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct idpf_hw *hw = &base->hw; const struct rte_memzone *mz; @@ -219,15 +220,69 @@ cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq) rte_free(bufq); } +static void +cpfl_rx_queue_release(void *rxq) +{ + struct cpfl_rx_queue *cpfl_rxq = rxq; + struct idpf_rx_queue *q = NULL; + + if (cpfl_rxq == NULL) + return; + + q = &cpfl_rxq->base; + + /* Split queue */ + if (!q->adapter->is_rx_singleq) { + if (q->bufq2) + cpfl_rx_split_bufq_release(q->bufq2); + + if (q->bufq1) + cpfl_rx_split_bufq_release(q->bufq1); + + rte_free(cpfl_rxq); + return; + } + + /* Single queue */ + q->ops->release_mbufs(q); + rte_free(q->sw_ring); + rte_memzone_free(q->mz); + rte_free(cpfl_rxq); +} + +static void +cpfl_tx_queue_release(void *txq) +{ + struct cpfl_tx_queue *cpfl_txq = txq; + struct idpf_tx_queue *q = NULL; + + if (cpfl_txq == NULL) + return; + + q = &cpfl_txq->base; + + if (q->complq) { + rte_memzone_free(q->complq->mz); + rte_free(q->complq); + } + + q->ops->release_mbufs(q); + rte_free(q->sw_ring); + rte_memzone_free(q->mz); + rte_free(cpfl_txq); +} + int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct idpf_hw *hw = &base->hw; + struct cpfl_rx_queue *cpfl_rxq; const struct rte_memzone *mz; struct idpf_rx_queue *rxq; uint16_t rx_free_thresh; @@ -247,21 +302,23 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, /* Free memory if needed */ if (dev->data->rx_queues[queue_idx] != NULL) { - idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]); + cpfl_rx_queue_release(dev->data->rx_queues[queue_idx]); dev->data->rx_queues[queue_idx] = NULL; } /* Setup Rx queue */ - rxq = rte_zmalloc_socket("cpfl rxq", - sizeof(struct idpf_rx_queue), + cpfl_rxq = rte_zmalloc_socket("cpfl rxq", + sizeof(struct cpfl_rx_queue), RTE_CACHE_LINE_SIZE, socket_id); - if (rxq == NULL) { + if (cpfl_rxq == NULL) { PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure"); ret = -ENOMEM; goto err_rxq_alloc; } + rxq = &cpfl_rxq->base; + is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT); rxq->mp = mp; @@ -328,7 +385,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, } rxq->q_set = true; - dev->data->rx_queues[queue_idx] = rxq; + dev->data->rx_queues[queue_idx] = cpfl_rxq; return 0; @@ -348,7 +405,8 @@ cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; const struct rte_memzone *mz; struct idpf_tx_queue *cq; int ret; @@ -396,9 +454,11 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + 
struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t tx_rs_thresh, tx_free_thresh; + struct cpfl_tx_queue *cpfl_txq; struct idpf_hw *hw = &base->hw; const struct rte_memzone *mz; struct idpf_tx_queue *txq; @@ -418,21 +478,23 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, /* Free memory if needed. */ if (dev->data->tx_queues[queue_idx] != NULL) { - idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]); + cpfl_tx_queue_release(dev->data->tx_queues[queue_idx]); dev->data->tx_queues[queue_idx] = NULL; } /* Allocate the TX queue data structure. */ - txq = rte_zmalloc_socket("cpfl txq", - sizeof(struct idpf_tx_queue), + cpfl_txq = rte_zmalloc_socket("cpfl txq", + sizeof(struct cpfl_tx_queue), RTE_CACHE_LINE_SIZE, socket_id); - if (txq == NULL) { + if (cpfl_txq == NULL) { PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); ret = -ENOMEM; goto err_txq_alloc; } + txq = &cpfl_txq->base; + is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT); txq->nb_tx_desc = nb_desc; @@ -486,7 +548,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, queue_idx * vport->chunks_info.tx_qtail_spacing); txq->ops = &def_txq_ops; txq->q_set = true; - dev->data->tx_queues[queue_idx] = txq; + dev->data->tx_queues[queue_idx] = cpfl_txq; return 0; @@ -502,6 +564,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { + struct cpfl_rx_queue *cpfl_rxq; struct idpf_rx_queue *rxq; uint16_t max_pkt_len; uint32_t frame_size; @@ -510,7 +573,8 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) if (rx_queue_id >= dev->data->nb_rx_queues) return -EINVAL; - rxq = dev->data->rx_queues[rx_queue_id]; + cpfl_rxq = dev->data->rx_queues[rx_queue_id]; + rxq = &cpfl_rxq->base; if (rxq == NULL || !rxq->q_set) { PMD_DRV_LOG(ERR, "RX queue %u not available or setup", @@ -574,9 +638,10 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_rx_queue *rxq = - dev->data->rx_queues[rx_queue_id]; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq = dev->data->rx_queues[rx_queue_id]; + struct idpf_rx_queue *rxq = &cpfl_rxq->base; int err = 0; err = idpf_vc_rxq_config(vport, rxq); @@ -609,15 +674,15 @@ cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; if (tx_queue_id >= dev->data->nb_tx_queues) return -EINVAL; - txq = dev->data->tx_queues[tx_queue_id]; + cpfl_txq = dev->data->tx_queues[tx_queue_id]; /* Init the RX tail register. 
*/ - IDPF_PCI_REG_WRITE(txq->qtx_tail, 0); + IDPF_PCI_REG_WRITE(cpfl_txq->base.qtx_tail, 0); return 0; } @@ -625,12 +690,13 @@ cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id) int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_tx_queue *txq = + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_tx_queue *cpfl_txq = dev->data->tx_queues[tx_queue_id]; int err = 0; - err = idpf_vc_txq_config(vport, txq); + err = idpf_vc_txq_config(vport, &cpfl_txq->base); if (err != 0) { PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id); return err; @@ -649,7 +715,7 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on", tx_queue_id); } else { - txq->q_started = true; + cpfl_txq->base.q_started = true; dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; } @@ -660,13 +726,16 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq; struct idpf_rx_queue *rxq; int err; if (rx_queue_id >= dev->data->nb_rx_queues) return -EINVAL; + cpfl_rxq = dev->data->rx_queues[rx_queue_id]; err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", @@ -674,7 +743,7 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) return err; } - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = &cpfl_rxq->base; rxq->q_started = false; if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { rxq->ops->release_mbufs(rxq); @@ -692,13 +761,17 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_tx_queue *cpfl_txq; struct idpf_tx_queue *txq; int err; if (tx_queue_id >= dev->data->nb_tx_queues) return -EINVAL; + cpfl_txq = dev->data->tx_queues[tx_queue_id]; + err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", @@ -706,7 +779,7 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return err; } - txq = dev->data->tx_queues[tx_queue_id]; + txq = &cpfl_txq->base; txq->q_started = false; txq->ops->release_mbufs(txq); if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { @@ -723,25 +796,25 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid) { - idpf_qc_rx_queue_release(dev->data->rx_queues[qid]); + cpfl_rx_queue_release(dev->data->rx_queues[qid]); } void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid) { - idpf_qc_tx_queue_release(dev->data->tx_queues[qid]); + cpfl_tx_queue_release(dev->data->tx_queues[qid]); } void cpfl_stop_queues(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; - struct idpf_tx_queue *txq; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; int i; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - if (rxq 
== NULL) + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq == NULL) continue; if (cpfl_rx_queue_stop(dev, i) != 0) @@ -749,8 +822,8 @@ cpfl_stop_queues(struct rte_eth_dev *dev) } for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL) continue; if (cpfl_tx_queue_stop(dev, i) != 0) @@ -761,9 +834,10 @@ cpfl_stop_queues(struct rte_eth_dev *dev) void cpfl_set_rx_function(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; #ifdef RTE_ARCH_X86 - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i; if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH && @@ -789,8 +863,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - (void)idpf_qc_splitq_rx_vec_setup(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + (void)idpf_qc_splitq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT if (vport->rx_use_avx512) { @@ -809,8 +883,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) } else { if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - (void)idpf_qc_singleq_rx_vec_setup(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + (void)idpf_qc_singleq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT if (vport->rx_use_avx512) { @@ -859,10 +933,11 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) void cpfl_set_tx_function(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; #ifdef RTE_ARCH_X86 #ifdef CC_AVX512_SUPPORT - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; int i; #endif /* CC_AVX512_SUPPORT */ @@ -877,8 +952,8 @@ cpfl_set_tx_function(struct rte_eth_dev *dev) vport->tx_use_avx512 = true; if (vport->tx_use_avx512) { for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - idpf_qc_tx_vec_avx512_setup(txq); + cpfl_txq = dev->data->tx_queues[i]; + idpf_qc_tx_vec_avx512_setup(&cpfl_txq->base); } } } @@ -915,10 +990,10 @@ cpfl_set_tx_function(struct rte_eth_dev *dev) #ifdef CC_AVX512_SUPPORT if (vport->tx_use_avx512) { for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL) continue; - idpf_qc_tx_vec_avx512_setup(txq); + idpf_qc_tx_vec_avx512_setup(&cpfl_txq->base); } PMD_DRV_LOG(NOTICE, "Using Single AVX512 Vector Tx (port %d).", diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index fb267d38c8..bfb9ad97bd 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -23,6 +23,14 @@ #define CPFL_SUPPORT_CHAIN_NUM 5 +struct cpfl_rx_queue { + struct idpf_rx_queue base; +}; + +struct cpfl_tx_queue { + struct idpf_tx_queue base; +}; + int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h index 665418d27d..5690b17911 100644 --- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h +++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h @@ -76,15 +76,16 
@@ cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq) static inline int cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_rx_queue *rxq; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq; int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - default_ret = cpfl_rx_vec_queue_default(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + default_ret = cpfl_rx_vec_queue_default(&cpfl_rxq->base); if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { - splitq_ret = cpfl_rx_splitq_vec_default(rxq); + splitq_ret = cpfl_rx_splitq_vec_default(&cpfl_rxq->base); ret = splitq_ret && default_ret; } else { ret = default_ret; @@ -100,12 +101,12 @@ static inline int cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev) { int i; - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; int ret = 0; for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - ret = cpfl_tx_vec_queue_default(txq); + cpfl_txq = dev->data->tx_queues[i]; + ret = cpfl_tx_vec_queue_default(&cpfl_txq->base); if (ret == CPFL_SCALAR_PATH) return CPFL_SCALAR_PATH; } From patchwork Fri Apr 21 06:50:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 126346 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 26A51429A9; Fri, 21 Apr 2023 09:14:10 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1970C42D13; Fri, 21 Apr 2023 09:13:57 +0200 (CEST) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id BC5E84114B for ; Fri, 21 Apr 2023 09:13:54 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1682061234; x=1713597234; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=XqX9PiSVIznW82mGRQY6lvnnDyr1NPJN+Yktkgx+4jw=; b=IHzRiViSJPTy3DJh/9vmQPVGdg6KmIjeCaWG3wVEmmGXFLm9HdO41mXs lbuXzJUqA7h6ec7UECtbb/ymj6IGq/4ehVBv/aHt3zMhW1UPNzPY2nFJE xFa363KNp/9xaEulItpLpKlcQQhEWsG6L2Qt6qHL6dkqxQ3rJ4TwPSWGo 6OO5spXJ8gjzylqs9E3orbUzosE0AXXmmkw5OUXaE4rUqx8g/zrOvGBbK 4kLSz6izGvZd4QtepaQT+Rn5slWmON/sM5qASFYKzxmbrzMcB5DUSWW4T 13jnwOvvFpnaUIvSWc6bs4faaULaxuWvbFtYWt3vv0VPfK4vPFoOaCeo6 w==; X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="326260045" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="326260045" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2023 00:13:54 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="722669104" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="722669104" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga008.jf.intel.com with ESMTP; 21 Apr 2023 00:13:52 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH 02/10] net/cpfl: support hairpin queue capbility get Date: Fri, 21 Apr 2023 06:50:40 +0000 Message-Id: 
<20230421065048.106899-3-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230421065048.106899-1-beilei.xing@intel.com> References: <20230421065048.106899-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch adds hairpin_cap_get ops support. Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 13 +++++++++++++ drivers/net/cpfl/cpfl_rxtx.h | 4 ++++ 2 files changed, 17 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 4a507f05d5..114fc18f5f 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -154,6 +154,18 @@ cpfl_dev_link_update(struct rte_eth_dev *dev, return rte_eth_linkstatus_set(dev, &new_link); } +static int +cpfl_hairpin_cap_get(__rte_unused struct rte_eth_dev *dev, + struct rte_eth_hairpin_cap *cap) +{ + cap->max_nb_queues = CPFL_MAX_P2P_NB_QUEUES; + cap->max_rx_2_tx = CPFL_MAX_HAIRPINQ_RX_2_TX; + cap->max_tx_2_rx = CPFL_MAX_HAIRPINQ_TX_2_RX; + cap->max_nb_desc = CPFL_MAX_HAIRPINQ_NB_DESC; + + return 0; +} + static int cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { @@ -889,6 +901,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .xstats_get = cpfl_dev_xstats_get, .xstats_get_names = cpfl_dev_xstats_get_names, .xstats_reset = cpfl_dev_xstats_reset, + .hairpin_cap_get = cpfl_hairpin_cap_get, }; static int diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index bfb9ad97bd..b2b3537d10 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -13,6 +13,10 @@ #define CPFL_MIN_RING_DESC 32 #define CPFL_MAX_RING_DESC 4096 #define CPFL_DMA_MEM_ALIGN 4096 +#define CPFL_MAX_HAIRPINQ_RX_2_TX 1 +#define CPFL_MAX_HAIRPINQ_TX_2_RX 1 +#define CPFL_MAX_HAIRPINQ_NB_DESC 1024 +#define CPFL_MAX_P2P_NB_QUEUES 16 /* Base address of the HW descriptor ring should be 128B aligned. 
*/ #define CPFL_RING_BASE_ALIGN 128 From patchwork Fri Apr 21 06:50:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 126347 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 43F37429A9; Fri, 21 Apr 2023 09:14:16 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 26FFC42D1D; Fri, 21 Apr 2023 09:13:58 +0200 (CEST) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 7669142BD9 for ; Fri, 21 Apr 2023 09:13:56 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1682061236; x=1713597236; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=NJqnHZrgqtZ2oo+pHdX6rMFCdljJPxx+NW0SPYHoII4=; b=FQMg15JTnMqbBB9010tZzVl08qlMjU97nBK2rVQFl1plYoagBhrhBFlv x4UrXPMJ0dTM+K6zt0U7LUsoLlfd+H8cjQy8w1hE2KfrrmV6jdgAzfzjt FVN1iHO9uspeEW2FsVW0FDyKWFaqJAwk20kj9tATngSegSoCMfNPXH+10 odCwAPAWjyfY0Wj3PDMXpCgnIK5+vayBzRE6CIZ7EgpWt4Y4BWprV/F/e C/o6+abbI7jYFc4WkkQ36gkkCKV6vEvlYFaxTamArAcLpcQP5/HZwz+h9 4DUFTNKutT+crvq4W9j8fNXS4okO0CMGeBMK4aa4ObX96VMHtXD8vYW7C w==; X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="326260049" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="326260049" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2023 00:13:56 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="722669107" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="722669107" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga008.jf.intel.com with ESMTP; 21 Apr 2023 00:13:54 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH 03/10] common/idpf: support queue groups add/delete Date: Fri, 21 Apr 2023 06:50:41 +0000 Message-Id: <20230421065048.106899-4-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230421065048.106899-1-beilei.xing@intel.com> References: <20230421065048.106899-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch adds queue group add/delete virtual channel support. 
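Both messages are variable-length: virtchnl2_add_queue_groups and virtchnl2_delete_queue_groups carry one group element inline, so the helpers size the mailbox payload as sizeof(header) + (num_groups - 1) * sizeof(element) before executing VIRTCHNL2_OP_ADD_QUEUE_GROUPS / VIRTCHNL2_OP_DEL_QUEUE_GROUPS. A hedged sketch of the caller side, mirroring how the cpfl driver deletes its single P2P group later in this series (the group id value 1 and the function name here are illustrative, taken from those later patches):

static int
example_del_p2p_queue_group(struct idpf_vport *vport)
{
        struct virtchnl2_queue_group_id qg_ids[1] = {0};

        /* identify the group by the same id/type pair used when it was added */
        qg_ids[0].queue_group_id = 1; /* CPFL_P2P_QUEUE_GRP_ID in the cpfl patches */
        qg_ids[0].queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;

        return idpf_vc_queue_grps_del(vport, 1, qg_ids);
}
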
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 66 ++++++++++++++++++++++ drivers/common/idpf/idpf_common_virtchnl.h | 9 +++ drivers/common/idpf/version.map | 2 + 3 files changed, 77 insertions(+) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index a4e129062e..76a658bb26 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -359,6 +359,72 @@ idpf_vc_vport_destroy(struct idpf_vport *vport) return err; } +int +idpf_vc_queue_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *ptp_queue_grps_info, + uint8_t *ptp_queue_grps_out) +{ + struct idpf_adapter *adapter = vport->adapter; + struct idpf_cmd_info args; + int size, qg_info_size; + int err = -1; + + size = sizeof(*ptp_queue_grps_info) + + (ptp_queue_grps_info->qg_info.num_queue_groups - 1) * + sizeof(struct virtchnl2_queue_group_info); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_ADD_QUEUE_GROUPS; + args.in_args = (uint8_t *)ptp_queue_grps_info; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + if (err != 0) { + DRV_LOG(ERR, + "Failed to execute command of VIRTCHNL2_OP_ADD_QUEUE_GROUPS"); + return err; + } + + rte_memcpy(ptp_queue_grps_out, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE); + return 0; +} + +int idpf_vc_queue_grps_del(struct idpf_vport *vport, + uint16_t num_q_grps, + struct virtchnl2_queue_group_id *qg_ids) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_delete_queue_groups *vc_del_q_grps; + struct idpf_cmd_info args; + int size; + int err; + + size = sizeof(*vc_del_q_grps) + + (num_q_grps - 1) * sizeof(struct virtchnl2_queue_group_id); + vc_del_q_grps = rte_zmalloc("vc_del_q_grps", size, 0); + + vc_del_q_grps->vport_id = vport->vport_id; + vc_del_q_grps->num_queue_groups = num_q_grps; + memcpy(vc_del_q_grps->qg_ids, qg_ids, + num_q_grps * sizeof(struct virtchnl2_queue_group_id)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_DEL_QUEUE_GROUPS; + args.in_args = (uint8_t *)vc_del_q_grps; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DEL_QUEUE_GROUPS"); + + rte_free(vc_del_q_grps); + return err; +} + int idpf_vc_rss_key_set(struct idpf_vport *vport) { diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index d479d93c8e..bf1d014c8d 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -64,4 +64,13 @@ int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, __rte_internal int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 *buff_count, struct idpf_dma_mem **buffs); +__rte_internal +int idpf_vc_queue_grps_del(struct idpf_vport *vport, + uint16_t num_q_grps, + struct virtchnl2_queue_group_id *qg_ids); +__rte_internal +int +idpf_vc_queue_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *ptp_queue_grps_info, + uint8_t *ptp_queue_grps_out); #endif /* _IDPF_COMMON_VIRTCHNL_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 7076759024..aa67f7ee27 100644 --- a/drivers/common/idpf/version.map +++ 
b/drivers/common/idpf/version.map @@ -48,6 +48,8 @@ INTERNAL { idpf_vc_irq_map_unmap_config; idpf_vc_one_msg_read; idpf_vc_ptype_info_query; + idpf_vc_queue_grps_add; + idpf_vc_queue_grps_del; idpf_vc_queue_switch; idpf_vc_queues_ena_dis; idpf_vc_rss_hash_get;

From patchwork Fri Apr 21 06:50:42 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 126348
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing
Subject: [PATCH 04/10] net/cpfl: add hairpin queue group during vport init
Date: Fri, 21 Apr 2023 06:50:42 +0000
Message-Id: <20230421065048.106899-5-beilei.xing@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20230421065048.106899-1-beilei.xing@intel.com>
References: <20230421065048.106899-1-beilei.xing@intel.com>
List-Id: DPDK patches and discussions

From: Beilei Xing

This patch adds a hairpin queue group during vport init.
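The add-queue-groups response carries register chunks per queue type; cpfl_p2p_queue_info_init() below records, for each of the four P2P queue types (Tx, Rx, Tx completion, Rx buffer), the starting queue id plus the tail register base and spacing in p2p_q_chunks_info. Later patches in the series turn these into absolute queue ids and doorbell addresses. A short sketch of that consumption, assuming hw is the vport's idpf_hw handle (cpfl_hw_qid_get()/cpfl_hw_qtail_get(), which this mirrors, are added by the hairpin queue setup patch):

static volatile uint8_t *
example_p2p_tx_doorbell(struct cpfl_vport *cpfl_vport, struct idpf_hw *hw,
                        uint16_t logic_qid)
{
        struct p2p_queue_chunks_info *ci = &cpfl_vport->p2p_q_chunks_info;

        /* absolute queue id handed to the control plane for this logical queue */
        uint16_t hw_qid = ci->tx_start_qid + logic_qid;
        RTE_SET_USED(hw_qid);

        /* doorbell (tail) register of that queue inside the BAR */
        return (volatile uint8_t *)hw->hw_addr +
                ci->tx_qtail_start + (uint64_t)logic_qid * ci->tx_qtail_spacing;
}
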
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 125 +++++++++++++++++++++++++++++++++ drivers/net/cpfl/cpfl_ethdev.h | 17 +++++ drivers/net/cpfl/cpfl_rxtx.h | 4 ++ 3 files changed, 146 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 114fc18f5f..ad5ddebd3a 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -856,6 +856,20 @@ cpfl_dev_stop(struct rte_eth_dev *dev) return 0; } +static int +cpfl_p2p_queue_grps_del(struct idpf_vport *vport) +{ + struct virtchnl2_queue_group_id qg_ids[CPFL_P2P_NB_QUEUE_GRPS] = {0}; + int ret = 0; + + qg_ids[0].queue_group_id = CPFL_P2P_QUEUE_GRP_ID; + qg_ids[0].queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P; + ret = idpf_vc_queue_grps_del(vport, CPFL_P2P_NB_QUEUE_GRPS, qg_ids); + if (ret) + PMD_DRV_LOG(ERR, "Failed to delete p2p queue groups"); + return ret; +} + static int cpfl_dev_close(struct rte_eth_dev *dev) { @@ -864,6 +878,9 @@ cpfl_dev_close(struct rte_eth_dev *dev) struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); + + cpfl_p2p_queue_grps_del(vport); + idpf_vport_deinit(vport); adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id); @@ -1350,6 +1367,96 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter) return vport_idx; } +static int +cpfl_p2p_q_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *p2p_queue_grps_info, + uint8_t *p2p_q_vc_out_info) +{ + int ret; + + p2p_queue_grps_info->vport_id = vport->vport_id; + p2p_queue_grps_info->qg_info.num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS; + p2p_queue_grps_info->qg_info.groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES; + p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ; + p2p_queue_grps_info->qg_info.groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES; + p2p_queue_grps_info->qg_info.groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ; + p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID; + p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P; + p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight = 0; + + ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info, p2p_q_vc_out_info); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to add p2p queue groups."); + return ret; + } + + return ret; +} + +static int +cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport, + struct virtchnl2_add_queue_groups *p2p_q_vc_out_info) +{ + struct p2p_queue_chunks_info *p2p_q_chunks_info = &cpfl_vport->p2p_q_chunks_info; + struct virtchnl2_queue_reg_chunks *vc_chunks_out; + int i, type; + + if (p2p_q_vc_out_info->qg_info.groups[0].qg_id.queue_group_type != + VIRTCHNL2_QUEUE_GROUP_P2P) { + PMD_DRV_LOG(ERR, "Add queue group response mismatch."); + return -EINVAL; + } + + vc_chunks_out = &p2p_q_vc_out_info->qg_info.groups[0].chunks; + + for (i = 0; i < vc_chunks_out->num_chunks; i++) { + type = vc_chunks_out->chunks[i].type; + switch (type) { + case VIRTCHNL2_QUEUE_TYPE_TX: + p2p_q_chunks_info->tx_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->tx_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->tx_qtail_spacing = + 
vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_RX: + p2p_q_chunks_info->rx_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->rx_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->rx_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + p2p_q_chunks_info->tx_compl_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->tx_compl_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->tx_compl_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + p2p_q_chunks_info->rx_buf_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->rx_buf_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->rx_buf_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + default: + PMD_DRV_LOG(ERR, "Unsupported queue type"); + break; + } + } + + return 0; +} + static int cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) { @@ -1359,6 +1466,8 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) struct cpfl_adapter_ext *adapter = param->adapter; /* for sending create vport virtchnl msg prepare */ struct virtchnl2_create_vport create_vport_info; + struct virtchnl2_add_queue_groups p2p_queue_grps_info; + uint8_t p2p_q_vc_out_info[IDPF_DFLT_MBX_BUF_SIZE] = {0}; int ret = 0; dev->dev_ops = &cpfl_eth_dev_ops; @@ -1380,6 +1489,19 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) goto err; } + memset(&p2p_queue_grps_info, 0, sizeof(p2p_queue_grps_info)); + ret = cpfl_p2p_q_grps_add(vport, &p2p_queue_grps_info, p2p_q_vc_out_info); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to add p2p queue group."); + goto err_q_grps_add; + } + ret = cpfl_p2p_queue_info_init(cpfl_vport, + (struct virtchnl2_add_queue_groups *)p2p_q_vc_out_info); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to init p2p queue info."); + goto err_p2p_qinfo_init; + } + adapter->vports[param->idx] = cpfl_vport; adapter->cur_vports |= RTE_BIT32(param->devarg_id); adapter->cur_vport_nb++; @@ -1397,6 +1519,9 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) return 0; err_mac_addrs: +err_p2p_qinfo_init: + cpfl_p2p_queue_grps_del(vport); +err_q_grps_add: adapter->vports[param->idx] = NULL; /* reset */ idpf_vport_deinit(vport); adapter->cur_vports &= ~RTE_BIT32(param->devarg_id); diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 81fe9ac4c3..5e2e7a1bfb 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -69,8 +69,25 @@ struct cpfl_devargs { uint16_t req_vport_nb; }; +struct p2p_queue_chunks_info { + uint32_t tx_start_qid; + uint32_t rx_start_qid; + uint32_t tx_compl_start_qid; + uint32_t rx_buf_start_qid; + + uint64_t tx_qtail_start; + uint32_t tx_qtail_spacing; + uint64_t rx_qtail_start; + uint32_t rx_qtail_spacing; + uint64_t tx_compl_qtail_start; + uint32_t tx_compl_qtail_spacing; + uint64_t rx_buf_qtail_start; + uint32_t rx_buf_qtail_spacing; +}; + struct cpfl_vport { struct idpf_vport base; + struct p2p_queue_chunks_info p2p_q_chunks_info; }; struct cpfl_adapter_ext { diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index b2b3537d10..3a87a1f4b3 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -17,6 +17,10 @@ #define CPFL_MAX_HAIRPINQ_TX_2_RX 1 #define CPFL_MAX_HAIRPINQ_NB_DESC 
1024 #define CPFL_MAX_P2P_NB_QUEUES 16 +#define CPFL_P2P_NB_RX_BUFQ 1 +#define CPFL_P2P_NB_TX_COMPLQ 1 +#define CPFL_P2P_NB_QUEUE_GRPS 1 +#define CPFL_P2P_QUEUE_GRP_ID 1 /* Base address of the HW descriptor ring should be 128B aligned. */ #define CPFL_RING_BASE_ALIGN 128 From patchwork Fri Apr 21 06:50:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 126349 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 764BD429A9; Fri, 21 Apr 2023 09:14:28 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 424DB42D2F; Fri, 21 Apr 2023 09:14:02 +0200 (CEST) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id A6B3542D2F for ; Fri, 21 Apr 2023 09:14:00 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1682061240; x=1713597240; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=qajTJ8Odv/ZNSadz28uM+4imjGOqqx6eA53lPbWo4Pw=; b=cYKVER/CKyWvHzzZXXmvoi9sWF3kuBT6pGL2FsY8AC/5Y8oA17dGaS0/ PsTIqVLG4xaMifBlV0CI2vr+WAkxW1gx5Q4OenOtkxciQtbWvinsKAx7D /c61+/3gmhbn6R5FnCvIGEj0x1xIC1S59YqHYlmJ68pWyt91paI4MKI18 3yiyu6h+q8diRZDV7RxOSKEWGmXUI0R972feL1HRrbk8xcnsPIJfU8rAn oYCw9FxvLWIE0E5LtrohfwZA60N/6VKlWotRv4eONRoQHbCcl45kzPCpa rKSzpBETzSxMRG201RMXl+0EVxw4D/qGNIRrVfVP9vnqMh8hhhrhPNeWQ A==; X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="326260067" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="326260067" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2023 00:13:59 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="722669115" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="722669115" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga008.jf.intel.com with ESMTP; 21 Apr 2023 00:13:57 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH 05/10] net/cpfl: support hairpin queue setup and release Date: Fri, 21 Apr 2023 06:50:43 +0000 Message-Id: <20230421065048.106899-6-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230421065048.106899-1-beilei.xing@intel.com> References: <20230421065048.106899-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing Support hairpin Rx/Tx queue setup and release. 
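From the application side the two new ops are reached through the generic ethdev hairpin API; this driver requires exactly one peer per queue and, per the earlier capability patch, at most 1024 descriptors. A hedged usage sketch binding one Rx/Tx hairpin pair on a single port (queue indices and the helper name are illustrative; hairpin queues occupy the indices after the regular data queues, and setup must happen before rte_eth_dev_start()):

#include <rte_ethdev.h>

/* port configured elsewhere with n_data + 1 queues per direction (data + hairpin),
 * not yet started
 */
static int
example_setup_hairpin_pair(uint16_t port_id, uint16_t n_data)
{
        struct rte_eth_hairpin_conf conf = {
                .peer_count = 1,   /* the PMD accepts exactly one peer */
                .manual_bind = 0,  /* bind automatically when the port starts */
        };
        int ret;

        /* Rx hairpin queue at index n_data, peered with Tx hairpin queue n_data */
        conf.peers[0].port = port_id;
        conf.peers[0].queue = n_data;
        ret = rte_eth_rx_hairpin_queue_setup(port_id, n_data, 1024, &conf);
        if (ret != 0)
                return ret;

        /* Tx hairpin queue at the same index, peered back with the Rx queue */
        return rte_eth_tx_hairpin_queue_setup(port_id, n_data, 1024, &conf);
}
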
Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 6 + drivers/net/cpfl/cpfl_ethdev.h | 10 + drivers/net/cpfl/cpfl_rxtx.c | 373 +++++++++++++++++++++++- drivers/net/cpfl/cpfl_rxtx.h | 28 ++ drivers/net/cpfl/cpfl_rxtx_vec_common.h | 4 + 5 files changed, 420 insertions(+), 1 deletion(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index ad5ddebd3a..d3300f17cc 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -878,6 +878,10 @@ cpfl_dev_close(struct rte_eth_dev *dev) struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); + if (cpfl_vport->p2p_mp) { + rte_mempool_free(cpfl_vport->p2p_mp); + cpfl_vport->p2p_mp = NULL; + } cpfl_p2p_queue_grps_del(vport); @@ -919,6 +923,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .xstats_get_names = cpfl_dev_xstats_get_names, .xstats_reset = cpfl_dev_xstats_reset, .hairpin_cap_get = cpfl_hairpin_cap_get, + .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup, + .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup, }; static int diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 5e2e7a1bfb..2cc8790da0 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -88,6 +88,16 @@ struct p2p_queue_chunks_info { struct cpfl_vport { struct idpf_vport base; struct p2p_queue_chunks_info p2p_q_chunks_info; + + struct rte_mempool *p2p_mp; + + uint16_t nb_data_rxq; + uint16_t nb_data_txq; + uint16_t nb_p2p_rxq; + uint16_t nb_p2p_txq; + + struct idpf_rx_queue *p2p_rx_bufq; + struct idpf_tx_queue *p2p_tx_complq; }; struct cpfl_adapter_ext { diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index a441e2ffbe..64ed331a6d 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -10,6 +10,79 @@ #include "cpfl_rxtx.h" #include "cpfl_rxtx_vec_common.h" +uint16_t +cpfl_hw_qid_get(uint16_t start_qid, uint16_t offset) +{ + return start_qid + offset; +} + +uint64_t +cpfl_hw_qtail_get(uint64_t tail_start, uint16_t offset, uint64_t tail_spacing) +{ + return tail_start + offset * tail_spacing; +} + +static inline void +cpfl_tx_hairpin_descq_reset(struct idpf_tx_queue *txq) +{ + uint32_t i, size; + + if (!txq) { + PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL"); + return; + } + + size = txq->nb_tx_desc * CPFL_P2P_DESC_LEN; + for (i = 0; i < size; i++) + ((volatile char *)txq->desc_ring)[i] = 0; +} + +static inline void +cpfl_tx_hairpin_complq_reset(struct idpf_tx_queue *cq) +{ + uint32_t i, size; + + if (!cq) { + PMD_DRV_LOG(DEBUG, "Pointer to complq is NULL"); + return; + } + + size = cq->nb_tx_desc * CPFL_P2P_DESC_LEN; + for (i = 0; i < size; i++) + ((volatile char *)cq->compl_ring)[i] = 0; +} + +static inline void +cpfl_rx_hairpin_descq_reset(struct idpf_rx_queue *rxq) +{ + uint16_t len; + uint32_t i; + + if (!rxq) + return; + + len = rxq->nb_rx_desc; + for (i = 0; i < len * CPFL_P2P_DESC_LEN; i++) + ((volatile char *)rxq->rx_ring)[i] = 0; +} + +static inline void +cpfl_rx_hairpin_bufq_reset(struct idpf_rx_queue *rxbq) +{ + uint16_t len; + uint32_t i; + + if (!rxbq) + return; + + len = rxbq->nb_rx_desc; + for (i = 0; i < len * CPFL_P2P_DESC_LEN; i++) + ((volatile char *)rxbq->rx_ring)[i] = 0; + + rxbq->bufq1 = NULL; + rxbq->bufq2 = NULL; +} + static uint64_t cpfl_rx_offload_convert(uint64_t offload) { @@ -233,7 +306,10 @@ cpfl_rx_queue_release(void *rxq) /* Split queue */ if (!q->adapter->is_rx_singleq) { - if 
(q->bufq2) + /* the mz is shared between Tx/Rx hairpin, let Rx_release + * free the buf, q->bufq1->mz and q->mz. + */ + if (!cpfl_rxq->hairpin_info.hairpin_q && q->bufq2) cpfl_rx_split_bufq_release(q->bufq2); if (q->bufq1) @@ -384,6 +460,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, } } + cpfl_vport->nb_data_rxq++; rxq->q_set = true; dev->data->rx_queues[queue_idx] = cpfl_rxq; @@ -547,6 +624,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start + queue_idx * vport->chunks_info.tx_qtail_spacing); txq->ops = &def_txq_ops; + cpfl_vport->nb_data_txq++; txq->q_set = true; dev->data->tx_queues[queue_idx] = cpfl_txq; @@ -561,6 +639,297 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, return ret; } +static int +cpfl_rx_hairpin_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq, + uint16_t logic_qid, uint16_t nb_desc) +{ + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter = vport->adapter; + struct rte_mempool *mp; + char pool_name[RTE_MEMPOOL_NAMESIZE]; + + mp = cpfl_vport->p2p_mp; + if (!mp) { + snprintf(pool_name, RTE_MEMPOOL_NAMESIZE, "p2p_mb_pool_%u", + dev->data->port_id); + mp = rte_pktmbuf_pool_create(pool_name, CPFL_P2P_NB_MBUF, CPFL_P2P_CACHE_SIZE, + 0, CPFL_P2P_MBUF_SIZE, dev->device->numa_node); + if (!mp) { + PMD_INIT_LOG(ERR, "Failed to allocate mbuf pool for p2p"); + return -ENOMEM; + } + cpfl_vport->p2p_mp = mp; + } + + bufq->mp = mp; + bufq->nb_rx_desc = nb_desc; + bufq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info.rx_buf_start_qid, logic_qid); + bufq->port_id = dev->data->port_id; + bufq->adapter = adapter; + bufq->rx_buf_len = CPFL_P2P_MBUF_SIZE - RTE_PKTMBUF_HEADROOM; + + bufq->sw_ring = rte_zmalloc("sw ring", + sizeof(struct rte_mbuf *) * nb_desc, + RTE_CACHE_LINE_SIZE); + if (!bufq->sw_ring) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring"); + return -ENOMEM; + } + + bufq->q_set = true; + bufq->ops = &def_rxq_ops; + + return 0; +} + +int +cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf) +{ + struct cpfl_vport *cpfl_vport = (struct cpfl_vport *)dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter_base = vport->adapter; + uint16_t logic_qid = cpfl_vport->nb_p2p_rxq; + struct cpfl_rxq_hairpin_info *hairpin_info; + struct cpfl_rx_queue *cpfl_rxq; + struct idpf_rx_queue *bufq1 = NULL; + struct idpf_rx_queue *rxq; + uint16_t peer_port, peer_q; + uint16_t qid; + int ret; + + if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { + PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin queue."); + return -EINVAL; + } + + if (conf->peer_count != 1) { + PMD_INIT_LOG(ERR, "Can't support Rx hairpin queue peer count %d", conf->peer_count); + return -EINVAL; + } + + peer_port = conf->peers[0].port; + peer_q = conf->peers[0].queue; + + if (nb_desc % CPFL_ALIGN_RING_DESC != 0 || + nb_desc > CPFL_MAX_RING_DESC || + nb_desc < CPFL_MIN_RING_DESC) { + PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is invalid", nb_desc); + return -EINVAL; + } + + /* Free memory if needed */ + if (dev->data->rx_queues[queue_idx]) { + cpfl_rx_queue_release(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; + } + + /* Setup Rx description queue */ + cpfl_rxq = 
rte_zmalloc_socket("cpfl hairpin rxq", + sizeof(struct cpfl_rx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!cpfl_rxq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure"); + return -ENOMEM; + } + + rxq = &cpfl_rxq->base; + hairpin_info = &cpfl_rxq->hairpin_info; + rxq->nb_rx_desc = nb_desc * 2; + rxq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info.rx_start_qid, logic_qid); + rxq->port_id = dev->data->port_id; + rxq->adapter = adapter_base; + rxq->rx_buf_len = CPFL_P2P_MBUF_SIZE - RTE_PKTMBUF_HEADROOM; + hairpin_info->hairpin_q = true; + hairpin_info->peer_txp = peer_port; + hairpin_info->peer_txq_id = peer_q; + + if (conf->manual_bind != 0) + hairpin_info->manual_bind = true; + else + hairpin_info->manual_bind = false; + + /* setup 1 Rx buffer queue for the 1st hairpin rxq */ + if (logic_qid == 0) { + bufq1 = rte_zmalloc_socket("hairpin rx bufq1", + sizeof(struct idpf_rx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!bufq1) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for hairpin Rx buffer queue 1."); + ret = -ENOMEM; + goto err_alloc_bufq1; + } + qid = 2 * logic_qid; + ret = cpfl_rx_hairpin_bufq_setup(dev, bufq1, qid, nb_desc); + if (ret) { + PMD_INIT_LOG(ERR, "Failed to setup hairpin Rx buffer queue 1"); + ret = -EINVAL; + goto err_setup_bufq1; + } + cpfl_vport->p2p_rx_bufq = bufq1; + } + + rxq->bufq1 = cpfl_vport->p2p_rx_bufq; + rxq->bufq2 = NULL; + + cpfl_vport->nb_p2p_rxq++; + rxq->q_set = true; + dev->data->rx_queues[queue_idx] = cpfl_rxq; + + return 0; + +err_setup_bufq1: + rte_free(bufq1); +err_alloc_bufq1: + rte_free(rxq); + + return ret; +} + +int +cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf) +{ + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; + + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter_base = vport->adapter; + uint16_t logic_qid = cpfl_vport->nb_p2p_txq; + struct cpfl_txq_hairpin_info *hairpin_info; + struct idpf_hw *hw = &adapter_base->hw; + struct cpfl_tx_queue *cpfl_txq; + struct idpf_tx_queue *txq, *cq; + const struct rte_memzone *mz; + uint32_t ring_size; + uint16_t peer_port, peer_q; + + if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { + PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin queue."); + return -EINVAL; + } + + if (conf->peer_count != 1) { + PMD_INIT_LOG(ERR, "Can't support Tx hairpin queue peer count %d", conf->peer_count); + return -EINVAL; + } + + peer_port = conf->peers[0].port; + peer_q = conf->peers[0].queue; + + if (nb_desc % CPFL_ALIGN_RING_DESC != 0 || + nb_desc > CPFL_MAX_RING_DESC || + nb_desc < CPFL_MIN_RING_DESC) { + PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is invalid", + nb_desc); + return -EINVAL; + } + + /* Free memory if needed. */ + if (dev->data->tx_queues[queue_idx]) { + cpfl_tx_queue_release(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; + } + + /* Allocate the TX queue data structure. */ + cpfl_txq = rte_zmalloc_socket("cpfl hairpin txq", + sizeof(struct cpfl_tx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!cpfl_txq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); + return -ENOMEM; + } + + txq = &cpfl_txq->base; + hairpin_info = &cpfl_txq->hairpin_info; + /* Txq ring length should be 2 times of Tx completion queue size. 
*/ + txq->nb_tx_desc = nb_desc * 2; + txq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info.tx_start_qid, logic_qid); + txq->port_id = dev->data->port_id; + hairpin_info->hairpin_q = true; + hairpin_info->peer_rxp = peer_port; + hairpin_info->peer_rxq_id = peer_q; + + if (conf->manual_bind != 0) + hairpin_info->manual_bind = true; + else + hairpin_info->manual_bind = false; + + /* Always Tx hairpin queue allocates Tx HW ring */ + ring_size = RTE_ALIGN(txq->nb_tx_desc * CPFL_P2P_DESC_LEN, + CPFL_DMA_MEM_ALIGN); + mz = rte_eth_dma_zone_reserve(dev, "hairpin_tx_ring", logic_qid, + ring_size + CPFL_P2P_RING_BUF, + CPFL_RING_BASE_ALIGN, + dev->device->numa_node); + if (!mz) { + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX"); + rte_free(txq->sw_ring); + rte_free(txq); + return -ENOMEM; + } + + txq->tx_ring_phys_addr = mz->iova; + txq->desc_ring = mz->addr; + txq->mz = mz; + + cpfl_tx_hairpin_descq_reset(txq); + txq->qtx_tail = hw->hw_addr + + cpfl_hw_qtail_get(cpfl_vport->p2p_q_chunks_info.tx_qtail_start, + logic_qid, cpfl_vport->p2p_q_chunks_info.tx_qtail_spacing); + txq->ops = &def_txq_ops; + + if (cpfl_vport->p2p_tx_complq == NULL) { + cq = rte_zmalloc_socket("cpfl hairpin cq", + sizeof(struct idpf_tx_queue), + RTE_CACHE_LINE_SIZE, + dev->device->numa_node); + if (!cq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); + return -ENOMEM; + } + + cq->nb_tx_desc = nb_desc; + cq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info.tx_compl_start_qid, 0); + cq->port_id = dev->data->port_id; + + /* Tx completion queue always allocates the HW ring */ + ring_size = RTE_ALIGN(cq->nb_tx_desc * CPFL_P2P_DESC_LEN, + CPFL_DMA_MEM_ALIGN); + mz = rte_eth_dma_zone_reserve(dev, "hairpin_tx_compl_ring", logic_qid, + ring_size + CPFL_P2P_RING_BUF, + CPFL_RING_BASE_ALIGN, + dev->device->numa_node); + if (!mz) { + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX completion queue"); + rte_free(txq->sw_ring); + rte_free(txq); + return -ENOMEM; + } + cq->tx_ring_phys_addr = mz->iova; + cq->compl_ring = mz->addr; + cq->mz = mz; + + cpfl_tx_hairpin_complq_reset(cq); + cpfl_vport->p2p_tx_complq = cq; + } + + txq->complq = cpfl_vport->p2p_tx_complq; + + cpfl_vport->nb_p2p_txq++; + txq->q_set = true; + dev->data->tx_queues[queue_idx] = cpfl_txq; + + return 0; +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { @@ -864,6 +1233,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q) + continue; (void)idpf_qc_splitq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index 3a87a1f4b3..d844c9f057 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -13,6 +13,7 @@ #define CPFL_MIN_RING_DESC 32 #define CPFL_MAX_RING_DESC 4096 #define CPFL_DMA_MEM_ALIGN 4096 +#define CPFL_P2P_DESC_LEN 16 #define CPFL_MAX_HAIRPINQ_RX_2_TX 1 #define CPFL_MAX_HAIRPINQ_TX_2_RX 1 #define CPFL_MAX_HAIRPINQ_NB_DESC 1024 @@ -21,6 +22,10 @@ #define CPFL_P2P_NB_TX_COMPLQ 1 #define CPFL_P2P_NB_QUEUE_GRPS 1 #define CPFL_P2P_QUEUE_GRP_ID 1 +#define CPFL_P2P_NB_MBUF 4096 +#define CPFL_P2P_CACHE_SIZE 250 +#define CPFL_P2P_MBUF_SIZE 2048 +#define CPFL_P2P_RING_BUF 128 /* Base address of the HW descriptor ring should be 128B aligned. 
*/ #define CPFL_RING_BASE_ALIGN 128 @@ -31,12 +36,28 @@ #define CPFL_SUPPORT_CHAIN_NUM 5 +struct cpfl_rxq_hairpin_info { + bool hairpin_q; /* if rx queue is a hairpin queue */ + bool manual_bind; /* for cross vport */ + uint16_t peer_txp; + uint16_t peer_txq_id; +}; + struct cpfl_rx_queue { struct idpf_rx_queue base; + struct cpfl_rxq_hairpin_info hairpin_info; +}; + +struct cpfl_txq_hairpin_info { + bool hairpin_q; /* if tx queue is a hairpin queue */ + bool manual_bind; /* for cross vport */ + uint16_t peer_rxp; + uint16_t peer_rxq_id; }; struct cpfl_tx_queue { struct idpf_tx_queue base; + struct cpfl_txq_hairpin_info hairpin_info; }; int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, @@ -57,4 +78,11 @@ void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid); void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid); void cpfl_set_rx_function(struct rte_eth_dev *dev); void cpfl_set_tx_function(struct rte_eth_dev *dev); +uint16_t cpfl_hw_qid_get(uint16_t start_qid, uint16_t offset); +uint64_t cpfl_hw_qtail_get(uint64_t tail_start, uint16_t offset, uint64_t tail_spacing); +int cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, const struct rte_eth_hairpin_conf *conf); +int cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf); #endif /* _CPFL_RXTX_H_ */ diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h index 5690b17911..d8e9191196 100644 --- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h +++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h @@ -85,6 +85,8 @@ cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev) cpfl_rxq = dev->data->rx_queues[i]; default_ret = cpfl_rx_vec_queue_default(&cpfl_rxq->base); if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { + if (cpfl_rxq->hairpin_info.hairpin_q) + continue; splitq_ret = cpfl_rx_splitq_vec_default(&cpfl_rxq->base); ret = splitq_ret && default_ret; } else { @@ -106,6 +108,8 @@ cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev) for (i = 0; i < dev->data->nb_tx_queues; i++) { cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq->hairpin_info.hairpin_q) + continue; ret = cpfl_tx_vec_queue_default(&cpfl_txq->base); if (ret == CPFL_SCALAR_PATH) return CPFL_SCALAR_PATH; From patchwork Fri Apr 21 06:50:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 126350 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 04AE0429A9; Fri, 21 Apr 2023 09:14:37 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B067E42D3E; Fri, 21 Apr 2023 09:14:04 +0200 (CEST) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 9558D42D33 for ; Fri, 21 Apr 2023 09:14:02 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1682061242; x=1713597242; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=TBkypkiixz/Jp8CLLonJzeaACMIs0bWCdHBJl2igADA=; b=j7pK/l7ppBWknz3i3OT/TkdCsnZ3lY6wGrfch3LSpTt4aJb62a5mrQ+K MB7NW93B3MAqsmUHiYoxld009g29sTy1xfEix8pdYCYOKpnXWzGtyWfHp 
8fyFbjy66vIOKbKHyDLoNBn74bkCBRdfwbwLEcyZjTiaVMudsYdOET+f/ 7nx1u7Pop8RpOKm4tyYO6A0NFhDEQ2LrfO8PLAarXFUJ0/3y0mKT8sYAd 5Dhp1aF6r5FbYXjfJV8Kr1hCgy+yNOJYDrgC+BawgdjxlZalXSBweFfXO 5F3YyWJeOCA/TZNc6AOi600OKWMvwIfaa7R/F78J1BaR1o9xCmTsAlri4 Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="326260077" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="326260077" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2023 00:14:02 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="722669119" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="722669119" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga008.jf.intel.com with ESMTP; 21 Apr 2023 00:14:00 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH 06/10] net/cpfl: support hairpin queue configuration Date: Fri, 21 Apr 2023 06:50:44 +0000 Message-Id: <20230421065048.106899-7-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230421065048.106899-1-beilei.xing@intel.com> References: <20230421065048.106899-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports Rx/Tx hairpin queue configuration. Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 70 +++++++++++ drivers/common/idpf/idpf_common_virtchnl.h | 6 + drivers/common/idpf/version.map | 2 + drivers/net/cpfl/cpfl_ethdev.c | 136 ++++++++++++++++++++- drivers/net/cpfl/cpfl_rxtx.c | 80 ++++++++++++ drivers/net/cpfl/cpfl_rxtx.h | 7 ++ 6 files changed, 297 insertions(+), 4 deletions(-) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index 76a658bb26..50cd43a8dd 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -1050,6 +1050,41 @@ idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq) return err; } +int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info, + uint16_t num_qs) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_config_rx_queues *vc_rxqs = NULL; + struct idpf_cmd_info args; + int size, err, i; + + size = sizeof(*vc_rxqs) + (num_qs - 1) * + sizeof(struct virtchnl2_rxq_info); + vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0); + if (vc_rxqs == NULL) { + DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues"); + err = -ENOMEM; + return err; + } + vc_rxqs->vport_id = vport->vport_id; + vc_rxqs->num_qinfo = num_qs; + memcpy(vc_rxqs->qinfo, rxq_info, num_qs * sizeof(struct virtchnl2_rxq_info)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES; + args.in_args = (uint8_t *)vc_rxqs; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + rte_free(vc_rxqs); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES"); + + return err; +} + int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq) { @@ -1121,6 +1156,41 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue 
*txq) return err; } +int +idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, + uint16_t num_qs) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_config_tx_queues *vc_txqs = NULL; + struct idpf_cmd_info args; + int size, err; + + size = sizeof(*vc_txqs) + (num_qs - 1) * sizeof(struct virtchnl2_txq_info); + vc_txqs = rte_zmalloc("cfg_txqs", size, 0); + if (vc_txqs == NULL) { + DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues"); + err = -ENOMEM; + return err; + } + vc_txqs->vport_id = vport->vport_id; + vc_txqs->num_qinfo = num_qs; + memcpy(vc_txqs->qinfo, txq_info, num_qs * sizeof(struct virtchnl2_txq_info)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES; + args.in_args = (uint8_t *)vc_txqs; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + rte_free(vc_txqs); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES"); + + return err; +} + int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, struct idpf_ctlq_msg *q_msg) diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index bf1d014c8d..277235ba7d 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -65,6 +65,12 @@ __rte_internal int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 *buff_count, struct idpf_dma_mem **buffs); __rte_internal +int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info, + uint16_t num_qs); +__rte_internal +int idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, + uint16_t num_qs); +__rte_internal int idpf_vc_queue_grps_del(struct idpf_vport *vport, uint16_t num_q_grps, struct virtchnl2_queue_group_id *qg_ids); diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index aa67f7ee27..a339a4bf8e 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -59,8 +59,10 @@ INTERNAL { idpf_vc_rss_lut_get; idpf_vc_rss_lut_set; idpf_vc_rxq_config; + idpf_vc_rxq_config_by_info; idpf_vc_stats_query; idpf_vc_txq_config; + idpf_vc_txq_config_by_info; idpf_vc_vectors_alloc; idpf_vc_vectors_dealloc; idpf_vc_vport_create; diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index d3300f17cc..13edf2e706 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -737,32 +737,160 @@ cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) return idpf_vport_irq_map_config(vport, nb_rx_queues); } +/* Update hairpin_info for dev's tx hairpin queue */ +static int +cpfl_txq_hairpin_info_update(struct rte_eth_dev *dev, uint16_t rx_port) +{ + struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private; + struct rte_eth_dev *peer_dev = &rte_eth_devices[rx_port]; + struct cpfl_vport *cpfl_rx_vport = peer_dev->data->dev_private; + struct cpfl_txq_hairpin_info *hairpin_info; + struct cpfl_tx_queue *cpfl_txq; + int i; + + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + hairpin_info = &cpfl_txq->hairpin_info; + if (hairpin_info->peer_rxp != rx_port) { + PMD_DRV_LOG(ERR, "port %d is not the peer port", rx_port); + return -EINVAL; + } + hairpin_info->peer_rxq_id = + cpfl_hw_qid_get(cpfl_rx_vport->p2p_q_chunks_info.rx_start_qid, + 
hairpin_info->peer_rxq_id - cpfl_rx_vport->nb_data_rxq); + } + + return 0; +} + +/* Bind Rx hairpin queue's memory zone to peer Tx hairpin queue's memory zone */ +static void +cpfl_rxq_hairpin_mz_bind(struct rte_eth_dev *dev) +{ + struct cpfl_vport *cpfl_rx_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_rx_vport->base; + struct idpf_adapter *adapter = vport->adapter; + struct idpf_hw *hw = &adapter->hw; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; + struct rte_eth_dev *peer_dev; + const struct rte_memzone *mz; + uint16_t peer_tx_port; + uint16_t peer_tx_qid; + int i; + + for (i = cpfl_rx_vport->nb_data_rxq; i < dev->data->nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + peer_tx_port = cpfl_rxq->hairpin_info.peer_txp; + peer_tx_qid = cpfl_rxq->hairpin_info.peer_txq_id; + peer_dev = &rte_eth_devices[peer_tx_port]; + cpfl_txq = peer_dev->data->tx_queues[peer_tx_qid]; + + /* bind rx queue */ + mz = cpfl_txq->base.mz; + cpfl_rxq->base.rx_ring_phys_addr = mz->iova; + cpfl_rxq->base.rx_ring = mz->addr; + cpfl_rxq->base.mz = mz; + + /* bind rx buffer queue */ + mz = cpfl_txq->base.complq->mz; + cpfl_rxq->base.bufq1->rx_ring_phys_addr = mz->iova; + cpfl_rxq->base.bufq1->rx_ring = mz->addr; + cpfl_rxq->base.bufq1->mz = mz; + cpfl_rxq->base.bufq1->qrx_tail = hw->hw_addr + + cpfl_hw_qtail_get(cpfl_rx_vport->p2p_q_chunks_info.rx_buf_qtail_start, + 0, cpfl_rx_vport->p2p_q_chunks_info.rx_buf_qtail_spacing); + } +} + static int cpfl_start_queues(struct rte_eth_dev *dev) { + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_rx_queue *cpfl_rxq; struct cpfl_tx_queue *cpfl_txq; + int tx_cmplq_flag = 0; + int rx_bufq_flag = 0; + int flag = 0; int err = 0; int i; + /* For normal data queues, configure, init and enale Txq. + * For non-cross vport hairpin queues, configure Txq. + */ for (i = 0; i < dev->data->nb_tx_queues; i++) { cpfl_txq = dev->data->tx_queues[i]; if (cpfl_txq == NULL || cpfl_txq->base.tx_deferred_start) continue; - err = cpfl_tx_queue_start(dev, i); + if (!cpfl_txq->hairpin_info.hairpin_q) { + err = cpfl_tx_queue_start(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i); + return err; + } + } else if (!cpfl_txq->hairpin_info.manual_bind) { + if (flag == 0) { + err = cpfl_txq_hairpin_info_update(dev, + cpfl_txq->hairpin_info.peer_rxp); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to update Tx hairpin queue info"); + return err; + } + flag = 1; + } + err = cpfl_hairpin_txq_config(vport, cpfl_txq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u", i); + return err; + } + tx_cmplq_flag = 1; + } + } + + /* For non-cross vport hairpin queues, configure Tx completion queue first.*/ + if (tx_cmplq_flag == 1 && cpfl_vport->p2p_tx_complq != NULL) { + err = cpfl_hairpin_tx_complq_config(cpfl_vport); if (err != 0) { - PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i); + PMD_DRV_LOG(ERR, "Fail to config Tx completion queue"); return err; } } + /* For normal data queues, configure, init and enale Rxq. + * For non-cross vport hairpin queues, configure Rxq, and then init Rxq. 
+ */ + cpfl_rxq_hairpin_mz_bind(dev); for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; if (cpfl_rxq == NULL || cpfl_rxq->base.rx_deferred_start) continue; - err = cpfl_rx_queue_start(dev, i); + if (!cpfl_rxq->hairpin_info.hairpin_q) { + err = cpfl_rx_queue_start(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i); + return err; + } + } else if (!cpfl_rxq->hairpin_info.manual_bind) { + err = cpfl_hairpin_rxq_config(vport, cpfl_rxq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Rx queue %u", i); + return err; + } + err = cpfl_rx_queue_init(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to init hairpin Rx queue %u", i); + return err; + } + rx_bufq_flag = 1; + } + } + + /* For non-cross vport hairpin queues, configure Rx buffer queue.*/ + if (rx_bufq_flag == 1 && cpfl_vport->p2p_rx_bufq != NULL) { + err = cpfl_hairpin_rx_bufq_config(cpfl_vport); if (err != 0) { - PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i); + PMD_DRV_LOG(ERR, "Fail to config Rx buffer queue"); return err; } } diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 64ed331a6d..040beb5bac 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -930,6 +930,86 @@ cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, return 0; } +int +cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport) +{ + struct idpf_rx_queue *rx_bufq = cpfl_vport->p2p_rx_bufq; + struct virtchnl2_rxq_info rxq_info[1] = {0}; + + rxq_info[0].type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; + rxq_info[0].queue_id = rx_bufq->queue_id; + rxq_info[0].ring_len = rx_bufq->nb_rx_desc; + rxq_info[0].dma_ring_addr = rx_bufq->rx_ring_phys_addr; + rxq_info[0].desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M; + rxq_info[0].rx_buffer_low_watermark = CPFL_RXBUF_LOW_WATERMARK; + rxq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + rxq_info[0].data_buffer_size = rx_bufq->rx_buf_len; + rxq_info[0].buffer_notif_stride = CPFL_RX_BUF_STRIDE; + + return idpf_vc_rxq_config_by_info(&cpfl_vport->base, rxq_info, 1); +} + +int +cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq) +{ + struct virtchnl2_rxq_info rxq_info[1] = {0}; + struct idpf_rx_queue *rxq = &cpfl_rxq->base; + + rxq_info[0].type = VIRTCHNL2_QUEUE_TYPE_RX; + rxq_info[0].queue_id = rxq->queue_id; + rxq_info[0].ring_len = rxq->nb_rx_desc; + rxq_info[0].dma_ring_addr = rxq->rx_ring_phys_addr; + rxq_info[0].rx_bufq1_id = rxq->bufq1->queue_id; + rxq_info[0].max_pkt_size = vport->max_pkt_len; + rxq_info[0].desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M; + rxq_info[0].qflags |= VIRTCHNL2_RX_DESC_SIZE_16BYTE; + + rxq_info[0].data_buffer_size = rxq->rx_buf_len; + rxq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + rxq_info[0].rx_buffer_low_watermark = CPFL_RXBUF_LOW_WATERMARK; + + PMD_DRV_LOG(NOTICE, "hairpin: vport %u, Rxq id 0x%x", + vport->vport_id, rxq_info[0].queue_id); + + return idpf_vc_rxq_config_by_info(vport, rxq_info, 1); +} + +int +cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport) +{ + struct idpf_tx_queue *tx_complq = cpfl_vport->p2p_tx_complq; + struct virtchnl2_txq_info txq_info[1] = {0}; + + txq_info[0].dma_ring_addr = tx_complq->tx_ring_phys_addr; + txq_info[0].type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + txq_info[0].queue_id = tx_complq->queue_id; + txq_info[0].ring_len = tx_complq->nb_tx_desc; + txq_info[0].peer_rx_queue_id = cpfl_vport->p2p_rx_bufq->queue_id; + txq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + txq_info[0].sched_mode = 
VIRTCHNL2_TXQ_SCHED_MODE_FLOW; + + return idpf_vc_txq_config_by_info(&cpfl_vport->base, txq_info, 1); +} + +int +cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq) +{ + struct idpf_tx_queue *txq = &cpfl_txq->base; + struct virtchnl2_txq_info txq_info[1] = {0}; + + txq_info[0].dma_ring_addr = txq->tx_ring_phys_addr; + txq_info[0].type = VIRTCHNL2_QUEUE_TYPE_TX; + txq_info[0].queue_id = txq->queue_id; + txq_info[0].ring_len = txq->nb_tx_desc; + txq_info[0].tx_compl_queue_id = txq->complq->queue_id; + txq_info[0].relative_queue_id = txq->queue_id; + txq_info[0].peer_rx_queue_id = cpfl_txq->hairpin_info.peer_rxq_id; + txq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + txq_info[0].sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW; + + return idpf_vc_txq_config_by_info(vport, txq_info, 1); +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index d844c9f057..b01ce5edf9 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -30,12 +30,15 @@ #define CPFL_RING_BASE_ALIGN 128 #define CPFL_DEFAULT_RX_FREE_THRESH 32 +#define CPFL_RXBUF_LOW_WATERMARK 64 #define CPFL_DEFAULT_TX_RS_THRESH 32 #define CPFL_DEFAULT_TX_FREE_THRESH 32 #define CPFL_SUPPORT_CHAIN_NUM 5 +#define CPFL_RX_BUF_STRIDE 64 + struct cpfl_rxq_hairpin_info { bool hairpin_q; /* if rx queue is a hairpin queue */ bool manual_bind; /* for cross vport */ @@ -85,4 +88,8 @@ int cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, int cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, const struct rte_eth_hairpin_conf *conf); +int cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport); +int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq); +int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport); +int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq); #endif /* _CPFL_RXTX_H_ */ From patchwork Fri Apr 21 06:50:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 126351 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BECE4429A9; Fri, 21 Apr 2023 09:14:43 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C9E0542D40; Fri, 21 Apr 2023 09:14:06 +0200 (CEST) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id C9F3442D40 for ; Fri, 21 Apr 2023 09:14:04 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1682061245; x=1713597245; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=s5ZcdxqIaRzfPIeOWiPPB8ptUyN6uA4V4BSKp2mOKGg=; b=m2fBoGPvJ5tWzkxpaCWQFJjj1cYTuOEXNw5Qhnk1rzBaTqNsUc/G3snx yWNX/Z0IvKtUjDZQ8/FAdBqk4yv5OBoED7JaGCYoB6GEcYsdcoXJ9PvBM oTDMiYnv0XtOTtjH/Uv5Xv10hboUj4HcXLUvO1iJDPos4eAcaEgA6mXnl X5hnVMuKKK2kKozQu4nR3nn9no9lCg5OfzxzSMZc0qBNuNTtBnK4h4D13 22lFMzByXUjMJp93uKKHR12RpYrFb2Ur/kPMHtTMPloevVM9YTuPEkb6S KfX67H6eMpYwIRdDZ+usxSl0eq5kFlHF2/SeAg1hRqFF2pkvbNFzYOHtq A==; X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="326260084" X-IronPort-AV: 
E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="326260084" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2023 00:14:04 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="722669125" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="722669125" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga008.jf.intel.com with ESMTP; 21 Apr 2023 00:14:02 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH 07/10] net/cpfl: support hairpin queue start/stop Date: Fri, 21 Apr 2023 06:50:45 +0000 Message-Id: <20230421065048.106899-8-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230421065048.106899-1-beilei.xing@intel.com> References: <20230421065048.106899-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports Rx/Tx hairpin queue start/stop. Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 2 +- drivers/common/idpf/idpf_common_virtchnl.h | 3 + drivers/common/idpf/version.map | 1 + drivers/net/cpfl/cpfl_ethdev.c | 39 ++++++ drivers/net/cpfl/cpfl_rxtx.c | 153 ++++++++++++++++++--- drivers/net/cpfl/cpfl_rxtx.h | 14 ++ 6 files changed, 193 insertions(+), 19 deletions(-) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index 50cd43a8dd..20a5bc085d 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -733,7 +733,7 @@ idpf_vc_vectors_dealloc(struct idpf_vport *vport) return err; } -static int +int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid, uint32_t type, bool on) { diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index 277235ba7d..18db6cd8c8 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -71,6 +71,9 @@ __rte_internal int idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, uint16_t num_qs); __rte_internal +int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid, + uint32_t type, bool on); +__rte_internal int idpf_vc_queue_grps_del(struct idpf_vport *vport, uint16_t num_q_grps, struct virtchnl2_queue_group_id *qg_ids); diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index a339a4bf8e..0e87dba2ae 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -45,6 +45,7 @@ INTERNAL { idpf_vc_cmd_execute; idpf_vc_ctlq_post_rx_buffs; idpf_vc_ctlq_recv; + idpf_vc_ena_dis_one_queue; idpf_vc_irq_map_unmap_config; idpf_vc_one_msg_read; idpf_vc_ptype_info_query; diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 13edf2e706..f154c83f27 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -895,6 +895,45 @@ cpfl_start_queues(struct rte_eth_dev *dev) } } + /* For non-cross vport hairpin queues, enable Tx queue and Rx queue, + * then enable Tx completion queue and Rx buffer queue. 
+ */ + for (i = 0; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq->hairpin_info.hairpin_q && !cpfl_txq->hairpin_info.manual_bind) { + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + i - cpfl_vport->nb_data_txq, + false, true); + if (err) + PMD_DRV_LOG(ERR, "Failed to switch hairpin TX queue %u on", + i); + else + cpfl_txq->base.q_started = true; + } + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q && !cpfl_rxq->hairpin_info.manual_bind) { + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + i - cpfl_vport->nb_data_rxq, + true, true); + if (err) + PMD_DRV_LOG(ERR, "Failed to switch hairpin RX queue %u on", + i); + else + cpfl_rxq->base.q_started = true; + } + } + + if (tx_cmplq_flag == 1 && rx_bufq_flag == 1) { + err = cpfl_switch_hairpin_bufq_complq(cpfl_vport, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin Tx complq and Rx bufq"); + return err; + } + } + return err; } diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 040beb5bac..ed2d100c35 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -1010,6 +1010,83 @@ cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq return idpf_vc_txq_config_by_info(vport, txq_info, 1); } +int +cpfl_switch_hairpin_bufq_complq(struct cpfl_vport *cpfl_vport, bool on) +{ + struct idpf_vport *vport = &cpfl_vport->base; + uint32_t type; + int err, queue_id; + + type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + queue_id = cpfl_vport->p2p_tx_complq->queue_id; + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + if (err) + return err; + + type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; + queue_id = cpfl_vport->p2p_rx_bufq->queue_id; + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + + return err; +} + +int +cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t logic_qid, + bool rx, bool on) +{ + struct idpf_vport *vport = &cpfl_vport->base; + uint32_t type; + int err, queue_id; + + type = rx ? 
VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX; + + if (type == VIRTCHNL2_QUEUE_TYPE_RX) + queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info.rx_start_qid, logic_qid); + else + queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info.tx_start_qid, logic_qid); + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + if (err) + return err; + + return err; +} + +static int +cpfl_alloc_split_p2p_rxq_mbufs(struct idpf_rx_queue *rxq) +{ + volatile struct virtchnl2_p2p_rx_buf_desc *rxd; + struct rte_mbuf *mbuf = NULL; + uint64_t dma_addr; + uint16_t i; + + for (i = 0; i < rxq->nb_rx_desc; i++) { + mbuf = rte_mbuf_raw_alloc(rxq->mp); + if (unlikely(!mbuf)) { + PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX"); + return -ENOMEM; + } + + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + + rxd = &((volatile struct virtchnl2_p2p_rx_buf_desc *)(rxq->rx_ring))[i]; + rxd->reserve0 = 0; + rxd->pkt_addr = dma_addr; + + rxq->sw_ring[i] = mbuf; + } + + rxq->nb_rx_hold = 0; + /* The value written in the RX buffer queue tail register, must be a multiple of 8.*/ + rxq->rx_tail = rxq->nb_rx_desc - CPFL_HAIRPIN_Q_TAIL_AUX_VALUE; + + return 0; +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { @@ -1063,22 +1140,31 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); } else { /* Split queue */ - err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1); - if (err != 0) { - PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); - return err; - } - err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2); - if (err != 0) { - PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); - return err; + if (cpfl_rxq->hairpin_info.hairpin_q) { + err = cpfl_alloc_split_p2p_rxq_mbufs(rxq->bufq1); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate p2p RX buffer queue mbuf"); + return err; + } + } else { + err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); + return err; + } + err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); + return err; + } } rte_wmb(); /* Init the RX tail register. 
*/ IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail); - IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail); + if (rxq->bufq2) + IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail); } return err; @@ -1185,7 +1271,12 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) return -EINVAL; cpfl_rxq = dev->data->rx_queues[rx_queue_id]; - err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); + if (cpfl_rxq->hairpin_info.hairpin_q) + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + rx_queue_id - cpfl_vport->nb_data_txq, + true, false); + else + err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", rx_queue_id); @@ -1199,10 +1290,17 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) idpf_qc_single_rx_queue_reset(rxq); } else { rxq->bufq1->ops->release_mbufs(rxq->bufq1); - rxq->bufq2->ops->release_mbufs(rxq->bufq2); - idpf_qc_split_rx_queue_reset(rxq); + if (rxq->bufq2) + rxq->bufq2->ops->release_mbufs(rxq->bufq2); + if (cpfl_rxq->hairpin_info.hairpin_q) { + cpfl_rx_hairpin_descq_reset(rxq); + cpfl_rx_hairpin_bufq_reset(rxq->bufq1); + } else { + idpf_qc_split_rx_queue_reset(rxq); + } } - dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + if (!cpfl_rxq->hairpin_info.hairpin_q) + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; } @@ -1221,7 +1319,12 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) cpfl_txq = dev->data->tx_queues[tx_queue_id]; - err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); + if (cpfl_txq->hairpin_info.hairpin_q) + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + tx_queue_id - cpfl_vport->nb_data_txq, + false, false); + else + err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", tx_queue_id); @@ -1234,10 +1337,17 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { idpf_qc_single_tx_queue_reset(txq); } else { - idpf_qc_split_tx_descq_reset(txq); - idpf_qc_split_tx_complq_reset(txq->complq); + if (cpfl_txq->hairpin_info.hairpin_q) { + cpfl_tx_hairpin_descq_reset(txq); + cpfl_tx_hairpin_complq_reset(txq->complq); + } else { + idpf_qc_split_tx_descq_reset(txq); + idpf_qc_split_tx_complq_reset(txq->complq); + } } - dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + + if (!cpfl_txq->hairpin_info.hairpin_q) + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; } @@ -1257,10 +1367,17 @@ cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid) void cpfl_stop_queues(struct rte_eth_dev *dev) { + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; struct cpfl_rx_queue *cpfl_rxq; struct cpfl_tx_queue *cpfl_txq; int i; + if (cpfl_vport->p2p_rx_bufq != NULL) { + if (cpfl_switch_hairpin_bufq_complq(cpfl_vport, false) != 0) + PMD_DRV_LOG(ERR, "Failed to stop hairpin Tx complq and Rx bufq"); + } + for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; if (cpfl_rxq == NULL) diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index b01ce5edf9..87603e161e 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -39,6 +39,17 @@ #define CPFL_RX_BUF_STRIDE 64 +/* The value written in the RX buffer queue tail register, + * and in WritePTR field in the TX completion queue 
context, + * must be a multiple of 8. + */ +#define CPFL_HAIRPIN_Q_TAIL_AUX_VALUE 8 + +struct virtchnl2_p2p_rx_buf_desc { + __le64 reserve0; + __le64 pkt_addr; /* Packet buffer address */ +}; + struct cpfl_rxq_hairpin_info { bool hairpin_q; /* if rx queue is a hairpin queue */ bool manual_bind; /* for cross vport */ @@ -92,4 +103,7 @@ int cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport); int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq); int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport); int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq); +int cpfl_switch_hairpin_bufq_complq(struct cpfl_vport *cpfl_vport, bool on); +int cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t qid, + bool rx, bool on); #endif /* _CPFL_RXTX_H_ */ From patchwork Fri Apr 21 06:50:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 126352 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7C8A5429A9; Fri, 21 Apr 2023 09:14:52 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 64A1742D39; Fri, 21 Apr 2023 09:14:08 +0200 (CEST) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 778DF42D38 for ; Fri, 21 Apr 2023 09:14:06 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1682061246; x=1713597246; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=K/2dco6tliQWVgOQtwrepukr8ct/HeMrMG4yPI7uihQ=; b=lEqbEK3i7VMogu4MGRUUJiUrSDvwcHYHaYOdBb3c2wPIdKFxlw6cvJpR VLlEvj6nvRvHivUug1HTA2fRiT5t0QZdNovJMblDLeTxiDz92ctZQ/qyN 5ZJedDr3fSUM2IMF4QtzobnuXHeR+8U5/XQLKyz+bYTQq2w0zB3Y4pk0u cbHNYeA/g4d4zK9hNXhZ7d8Xm0YIFWISTWVQc2pp5GKKvomgNS7Glys2U D2if4VmPU4yBL54onJgHNVgk8dokRs1+z/c6H1wiMfZe99dRAbafdgCja 7LPEqcUz8hlSw9kVJhW+rII2KzeL/vUkEmWayGWyARRKrWy4cmj0JxPxk Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="326260087" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="326260087" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2023 00:14:06 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="722669137" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="722669137" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga008.jf.intel.com with ESMTP; 21 Apr 2023 00:14:04 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH 08/10] net/cpfl: enable write back based on ITR expire Date: Fri, 21 Apr 2023 06:50:46 +0000 Message-Id: <20230421065048.106899-9-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230421065048.106899-1-beilei.xing@intel.com> References: <20230421065048.106899-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch enabls write back based on 
ITR expire (WR_ON_ITR) for hairpin queue. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_device.c | 75 ++++++++++++++++++++++++ drivers/common/idpf/idpf_common_device.h | 4 ++ drivers/common/idpf/version.map | 1 + drivers/net/cpfl/cpfl_ethdev.c | 13 +++- 4 files changed, 92 insertions(+), 1 deletion(-) diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c index 3b58bdd41e..86a4a54f9b 100644 --- a/drivers/common/idpf/idpf_common_device.c +++ b/drivers/common/idpf/idpf_common_device.c @@ -559,6 +559,81 @@ idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues) return ret; } +int +idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, uint32_t *qids, uint16_t nb_rx_queues) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_queue_vector *qv_map; + struct idpf_hw *hw = &adapter->hw; + uint32_t dynctl_val, itrn_val; + uint32_t dynctl_reg_start; + uint32_t itrn_reg_start; + uint16_t i; + int ret; + + qv_map = rte_zmalloc("qv_map", + nb_rx_queues * + sizeof(struct virtchnl2_queue_vector), 0); + if (qv_map == NULL) { + DRV_LOG(ERR, "Failed to allocate %d queue-vector map", + nb_rx_queues); + ret = -ENOMEM; + goto qv_map_alloc_err; + } + + /* Rx interrupt disabled, Map interrupt only for writeback */ + + /* The capability flags adapter->caps.other_caps should be + * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if + * condition should be updated when the FW can return the + * correct flag bits. + */ + dynctl_reg_start = + vport->recv_vectors->vchunks.vchunks->dynctl_reg_start; + itrn_reg_start = + vport->recv_vectors->vchunks.vchunks->itrn_reg_start; + dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start); + DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x", dynctl_val); + itrn_val = IDPF_READ_REG(hw, itrn_reg_start); + DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val); + /* Force write-backs by setting WB_ON_ITR bit in DYN_CTL + * register. WB_ON_ITR and INTENA are mutually exclusive + * bits. Setting WB_ON_ITR bits means TX and RX Descs + * are written back based on ITR expiration irrespective + * of INTENA setting. + */ + /* TBD: need to tune INTERVAL value for better performance. */ + itrn_val = (itrn_val == 0) ? 
IDPF_DFLT_INTERVAL : itrn_val; + dynctl_val = VIRTCHNL2_ITR_IDX_0 << + PF_GLINT_DYN_CTL_ITR_INDX_S | + PF_GLINT_DYN_CTL_WB_ON_ITR_M | + itrn_val << PF_GLINT_DYN_CTL_INTERVAL_S; + IDPF_WRITE_REG(hw, dynctl_reg_start, dynctl_val); + + for (i = 0; i < nb_rx_queues; i++) { + /* map all queues to the same vector */ + qv_map[i].queue_id = qids[i]; + qv_map[i].vector_id = + vport->recv_vectors->vchunks.vchunks->start_vector_id; + } + vport->qv_map = qv_map; + + ret = idpf_vc_irq_map_unmap_config(vport, nb_rx_queues, true); + if (ret != 0) { + DRV_LOG(ERR, "config interrupt mapping failed"); + goto config_irq_map_err; + } + + return 0; + +config_irq_map_err: + rte_free(vport->qv_map); + vport->qv_map = NULL; + +qv_map_alloc_err: + return ret; +} + int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues) { diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h index 7cf2355bc9..1aa9d9516f 100644 --- a/drivers/common/idpf/idpf_common_device.h +++ b/drivers/common/idpf/idpf_common_device.h @@ -212,5 +212,9 @@ int idpf_vport_info_init(struct idpf_vport *vport, struct virtchnl2_create_vport *vport_info); __rte_internal void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes); +__rte_internal +int idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, + uint32_t *qids, + uint16_t nb_rx_queues); #endif /* _IDPF_COMMON_DEVICE_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 0e87dba2ae..e3a7ef0daa 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -74,6 +74,7 @@ INTERNAL { idpf_vport_info_init; idpf_vport_init; idpf_vport_irq_map_config; + idpf_vport_irq_map_config_by_qids; idpf_vport_irq_unmap_config; idpf_vport_rss_config; idpf_vport_stats_update; diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index f154c83f27..008686bfd4 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -730,11 +730,22 @@ cpfl_dev_configure(struct rte_eth_dev *dev) static int cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) { + uint32_t qids[CPFL_MAX_P2P_NB_QUEUES + IDPF_DEFAULT_RXQ_NUM] = {0}; struct cpfl_vport *cpfl_vport = dev->data->dev_private; struct idpf_vport *vport = &cpfl_vport->base; uint16_t nb_rx_queues = dev->data->nb_rx_queues; + struct cpfl_rx_queue *cpfl_rxq; + int i; - return idpf_vport_irq_map_config(vport, nb_rx_queues); + for (i = 0; i < nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q) + qids[i] = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info.rx_start_qid, + (i - cpfl_vport->nb_data_rxq)); + else + qids[i] = cpfl_hw_qid_get(vport->chunks_info.rx_start_qid, i); + } + return idpf_vport_irq_map_config_by_qids(vport, qids, nb_rx_queues); } /* Update hairpin_info for dev's tx hairpin queue */ From patchwork Fri Apr 21 06:50:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 126353 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8D28C429A9; Fri, 21 Apr 2023 09:14:59 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8485F42D51; Fri, 21 Apr 2023 09:14:09 +0200 (CEST) 
Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 2418642D4D for ; Fri, 21 Apr 2023 09:14:07 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1682061248; x=1713597248; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Wtv1nDJj999aF3PIq0EjNvp1BiXlM2W69QfELoRpXeo=; b=cLcwOxzLMTM5fEbYdfqXng5HWm21GSDfZPkko9xmGlz6NQayH20qCNnS LCowNyx3YnuQNxgpuDXILiRUjD8wqhjnl0rkId9oxPY/XDKCzMPuDk+oJ 58MpKElxGPCYl0xDWjHxQKXiepFHbTouoN+B2RtRko2MYQIbQOCtf2NW7 jntyZP9GlBC448nt2z+pH/h3kwAwqPLXQmy7LnpxPyJwdszoM82Limym4 TJNRxVfU1jIylyIRAoH9D/h59NB4EkbP/nLgUt6RssNutFWF+ihMYQyy7 f1Xr18WdorM/L8dlpxuaxYRGpjWqoFZOw1CMzkjIHcLOgPae8EWTJk+g+ g==; X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="326260092" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="326260092" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2023 00:14:07 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="722669142" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="722669142" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga008.jf.intel.com with ESMTP; 21 Apr 2023 00:14:06 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH 09/10] net/cpfl: support peer ports get Date: Fri, 21 Apr 2023 06:50:47 +0000 Message-Id: <20230421065048.106899-10-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230421065048.106899-1-beilei.xing@intel.com> References: <20230421065048.106899-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports get hairpin peer ports. 
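For illustration only (not part of this patch): the new .hairpin_get_peer_ports op is reached through the generic ethdev call rte_eth_hairpin_get_peer_ports(). A minimal application-side sketch, assuming the standard ethdev signature; the helper name below is made up:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print the peer Rx ports of port_id's Tx hairpin queues.
     * direction = 1 queries the Tx side (peer Rx ports), 0 would
     * query the Rx side (peer Tx ports), matching the tx argument
     * handled in cpfl_hairpin_get_peer_ports().
     */
    static void
    show_tx_hairpin_peers(uint16_t port_id)
    {
            uint16_t peers[RTE_MAX_ETHPORTS];
            int nb, i;

            nb = rte_eth_hairpin_get_peer_ports(port_id, peers,
                                                RTE_MAX_ETHPORTS, 1);
            if (nb < 0) {
                    printf("failed to get hairpin peers: %d\n", nb);
                    return;
            }
            for (i = 0; i < nb; i++)
                    printf("Tx hairpin queue peer Rx port: %u\n", peers[i]);
    }
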
Signed-off-by: Xiao Wang Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 34 ++++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 008686bfd4..52c4ab601f 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -1074,6 +1074,39 @@ cpfl_dev_close(struct rte_eth_dev *dev) return 0; } +static int +cpfl_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, + __rte_unused size_t len, uint32_t tx) +{ + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; + struct idpf_tx_queue *txq; + struct idpf_rx_queue *rxq; + struct cpfl_tx_queue *cpfl_txq; + struct cpfl_rx_queue *cpfl_rxq; + int i, j; + + if (tx > 0) { + for (i = cpfl_vport->nb_data_txq, j = 0; i < dev->data->nb_tx_queues; i++, j++) { + txq = dev->data->tx_queues[i]; + if (txq == NULL) + return -EINVAL; + cpfl_txq = (struct cpfl_tx_queue *)txq; + peer_ports[j] = cpfl_txq->hairpin_info.peer_rxp; + } + } else if (tx == 0) { + for (i = cpfl_vport->nb_data_rxq, j = 0; i < dev->data->nb_rx_queues; i++, j++) { + rxq = dev->data->rx_queues[i]; + if (rxq == NULL) + return -EINVAL; + cpfl_rxq = (struct cpfl_rx_queue *)rxq; + peer_ports[j] = cpfl_rxq->hairpin_info.peer_txp; + } + } + + return j; +} + static const struct eth_dev_ops cpfl_eth_dev_ops = { .dev_configure = cpfl_dev_configure, .dev_close = cpfl_dev_close, @@ -1103,6 +1136,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .hairpin_cap_get = cpfl_hairpin_cap_get, .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup, .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup, + .hairpin_get_peer_ports = cpfl_hairpin_get_peer_ports, }; static int From patchwork Fri Apr 21 06:50:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 126354 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0530A429A9; Fri, 21 Apr 2023 09:15:05 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9081042D53; Fri, 21 Apr 2023 09:14:11 +0200 (CEST) Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by mails.dpdk.org (Postfix) with ESMTP id 1B92A42D5A for ; Fri, 21 Apr 2023 09:14:09 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1682061250; x=1713597250; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=sbESs5t+E3FpWN47vka6v6FKJyjecGD2rZT8rP1aBpI=; b=VAGiiRXjW+XL177tPPjUlDJl1eUP+/AWp2cUXfQpPRTpFDm0VX4gvTGN N6RFqFCjImbeJoAQR4NdxF6Bc77JLB8mlSOD1dfp4Q3znu+MlI5txi8ek SJChjxRStoTVR0N63KKleZI1rnnzbASB9EoGe3c7yP2c5xP6pg4ONhm4/ 3tMCbD0BQ7x/GBG9hrPWMwRuZ1xezeLRl8uq+TuntyIs7E+5SfZO/KIHY AjlXgfGG4ZgVW6tCRiNl1FVeToiF9OlPEl/bZoXhbmp6yIFuasJNAFzE7 SzvlQJ4biB4EzjsCDSX53Uvj8Bx1Su0vnlj3A5japX/GPUNPxH8jrsEll g==; X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="326260096" X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="326260096" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Apr 2023 00:14:09 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10686"; a="722669146" 
X-IronPort-AV: E=Sophos;i="5.99,214,1677571200"; d="scan'208";a="722669146" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga008.jf.intel.com with ESMTP; 21 Apr 2023 00:14:07 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH 10/10] net/cpfl: support hairpin bind/unbind Date: Fri, 21 Apr 2023 06:50:48 +0000 Message-Id: <20230421065048.106899-11-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230421065048.106899-1-beilei.xing@intel.com> References: <20230421065048.106899-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports hairpin_bind/unbind ops. Signed-off-by: Xiao Wang Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 137 +++++++++++++++++++++++++++++++++ drivers/net/cpfl/cpfl_rxtx.c | 28 +++++++ drivers/net/cpfl/cpfl_rxtx.h | 2 + 3 files changed, 167 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 52c4ab601f..ddafc2f9e5 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -1107,6 +1107,141 @@ cpfl_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, return j; } +static int +cpfl_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port) +{ + struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private; + struct idpf_vport *tx_vport = &cpfl_tx_vport->base; + struct cpfl_vport *cpfl_rx_vport; + struct cpfl_tx_queue *cpfl_txq; + struct cpfl_rx_queue *cpfl_rxq; + struct rte_eth_dev *peer_dev; + struct idpf_vport *rx_vport; + int err = 0; + int i; + + err = cpfl_txq_hairpin_info_update(dev, rx_port); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to update Tx hairpin queue info."); + return err; + } + + /* configure hairpin queues */ + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + err = cpfl_hairpin_txq_config(tx_vport, cpfl_txq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u", i); + return err; + } + } + + err = cpfl_hairpin_tx_complq_config(cpfl_tx_vport); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to config Tx completion queue"); + return err; + } + + peer_dev = &rte_eth_devices[rx_port]; + cpfl_rx_vport = (struct cpfl_vport *)peer_dev->data->dev_private; + rx_vport = &cpfl_rx_vport->base; + cpfl_rxq_hairpin_mz_bind(peer_dev); + + for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) { + cpfl_rxq = peer_dev->data->rx_queues[i]; + err = cpfl_hairpin_rxq_config(rx_vport, cpfl_rxq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Rx queue %u", i); + return err; + } + err = cpfl_rx_queue_init(peer_dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to init hairpin Rx queue %u", i); + return err; + } + } + + err = cpfl_hairpin_rx_bufq_config(cpfl_rx_vport); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to config Rx buffer queue"); + return err; + } + + /* enable hairpin queues */ + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + err = cpfl_switch_hairpin_rxtx_queue(cpfl_tx_vport, + i - cpfl_tx_vport->nb_data_txq, + false, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin TX queue %u on", + i); + return err; + } + 
+                cpfl_txq->base.q_started = true;
+        }
+
+        err = cpfl_switch_hairpin_complq(cpfl_tx_vport, true);
+        if (err != 0) {
+                PMD_DRV_LOG(ERR, "Failed to switch hairpin Tx complq");
+                return err;
+        }
+
+        for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) {
+                cpfl_rxq = peer_dev->data->rx_queues[i];
+                err = cpfl_switch_hairpin_rxtx_queue(cpfl_rx_vport,
+                                                     i - cpfl_rx_vport->nb_data_rxq,
+                                                     true, true);
+                if (err != 0) {
+                        PMD_DRV_LOG(ERR, "Failed to switch hairpin RX queue %u on",
+                                    i);
+                }
+                cpfl_rxq->base.q_started = true;
+        }
+
+        err = cpfl_switch_hairpin_bufq(cpfl_rx_vport, true);
+        if (err != 0) {
+                PMD_DRV_LOG(ERR, "Failed to switch hairpin Rx buffer queue");
+                return err;
+        }
+
+        return 0;
+}
+
+static int
+cpfl_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port)
+{
+        struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private;
+        struct rte_eth_dev *peer_dev = &rte_eth_devices[rx_port];
+        struct cpfl_vport *cpfl_rx_vport = peer_dev->data->dev_private;
+        struct cpfl_tx_queue *cpfl_txq;
+        struct cpfl_rx_queue *cpfl_rxq;
+        int i;
+
+        /* disable hairpin queues */
+        for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) {
+                cpfl_txq = dev->data->tx_queues[i];
+                cpfl_switch_hairpin_rxtx_queue(cpfl_tx_vport,
+                                               i - cpfl_tx_vport->nb_data_txq,
+                                               false, false);
+                cpfl_txq->base.q_started = false;
+        }
+
+        cpfl_switch_hairpin_complq(cpfl_tx_vport, false);
+
+        for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) {
+                cpfl_rxq = peer_dev->data->rx_queues[i];
+                cpfl_switch_hairpin_rxtx_queue(cpfl_rx_vport,
+                                               i - cpfl_rx_vport->nb_data_rxq,
+                                               true, false);
+                cpfl_rxq->base.q_started = false;
+        }
+
+        cpfl_switch_hairpin_bufq(cpfl_rx_vport, false);
+
+        return 0;
+}
+
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
         .dev_configure = cpfl_dev_configure,
         .dev_close = cpfl_dev_close,
@@ -1137,6 +1272,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
         .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup,
         .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup,
         .hairpin_get_peer_ports = cpfl_hairpin_get_peer_ports,
+        .hairpin_bind = cpfl_hairpin_bind,
+        .hairpin_unbind = cpfl_hairpin_unbind,
 };
 
 static int
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ed2d100c35..e025bd014f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -1030,6 +1030,34 @@ cpfl_switch_hairpin_bufq_complq(struct cpfl_vport *cpfl_vport, bool on)
         return err;
 }
 
+int
+cpfl_switch_hairpin_complq(struct cpfl_vport *cpfl_vport, bool on)
+{
+        struct idpf_vport *vport = &cpfl_vport->base;
+        uint32_t type;
+        int err, queue_id;
+
+        type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+        queue_id = cpfl_vport->p2p_tx_complq->queue_id;
+        err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+
+        return err;
+}
+
+int
+cpfl_switch_hairpin_bufq(struct cpfl_vport *cpfl_vport, bool on)
+{
+        struct idpf_vport *vport = &cpfl_vport->base;
+        uint32_t type;
+        int err, queue_id;
+
+        type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+        queue_id = cpfl_vport->p2p_rx_bufq->queue_id;
+        err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+
+        return err;
+}
+
 int
 cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t logic_qid,
                                bool rx, bool on)
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 87603e161e..60308e16b2 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -104,6 +104,8 @@ int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl
 int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport);
 int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq);
 int cpfl_switch_hairpin_bufq_complq(struct cpfl_vport *cpfl_vport, bool on);
+int cpfl_switch_hairpin_complq(struct cpfl_vport *cpfl_vport, bool on);
+int cpfl_switch_hairpin_bufq(struct cpfl_vport *cpfl_vport, bool on);
 int cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t qid,
                                    bool rx, bool on);
 #endif /* _CPFL_RXTX_H_ */
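
For context, applications reach the new dev ops through the generic ethdev hairpin API rather than calling the driver directly. Below is a minimal usage sketch, not part of this patch: it assumes two cpfl ports whose hairpin queues were already set up in manual-bind mode and whose ports are started, and the function and port-id variable names (app_hairpin_connect, tx_port_id, rx_port_id) are illustrative only.

    #include <stdint.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Bind the Tx-side hairpin queues of tx_port_id to the Rx-side hairpin
     * queues of rx_port_id. rte_eth_hairpin_bind() dispatches to the driver's
     * hairpin_bind op, i.e. cpfl_hairpin_bind() above for cpfl ports.
     */
    static int
    app_hairpin_connect(uint16_t tx_port_id, uint16_t rx_port_id)
    {
            int ret;

            ret = rte_eth_hairpin_bind(tx_port_id, rx_port_id);
            if (ret != 0) {
                    printf("hairpin bind %u -> %u failed: %d\n",
                           tx_port_id, rx_port_id, ret);
                    return ret;
            }
            return 0;
    }

    /* Tear the binding down before stopping the ports; this reaches
     * cpfl_hairpin_unbind() above via the hairpin_unbind op.
     */
    static void
    app_hairpin_disconnect(uint16_t tx_port_id, uint16_t rx_port_id)
    {
            rte_eth_hairpin_unbind(tx_port_id, rx_port_id);
    }

In this flow the Tx port is always the caller-side port and the Rx port is the peer, matching the dev argument and rx_port parameter of cpfl_hairpin_bind()/cpfl_hairpin_unbind().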