From patchwork Wed May 31 13:04:37 2023
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 127788
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing
Subject: [PATCH v7 01/14] net/cpfl: refine structures
Date: Wed, 31 May 2023 13:04:37 +0000
Message-Id: <20230531130450.26380-2-beilei.xing@intel.com>
In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com>
References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com>

From: Beilei Xing

This patch refines some structures to support hairpin queue, cpfl_rx_queue/cpfl_tx_queue/cpfl_vport.
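For reference, a minimal sketch (not part of this patch) of the wrapping pattern the series introduces: each cpfl structure embeds its idpf counterpart as a "base" member, and the driver callbacks recover that base from dev_private before calling into the common idpf code. The helper name cpfl_vport_base below is hypothetical and only illustrates the access pattern.

struct cpfl_vport {
	struct idpf_vport base;		/* common idpf vport state */
};

struct cpfl_rx_queue {
	struct idpf_rx_queue base;	/* common idpf Rx queue state */
};

/* Hypothetical helper: recover the embedded idpf vport from dev_private. */
static inline struct idpf_vport *
cpfl_vport_base(void *dev_private)
{
	struct cpfl_vport *cpfl_vport = dev_private;

	return &cpfl_vport->base;
}

The bulk of the diff below is the mechanical substitution of this pattern throughout cpfl_ethdev.c, cpfl_rxtx.c and the vector-path helpers.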
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 85 +++++++----- drivers/net/cpfl/cpfl_ethdev.h | 6 +- drivers/net/cpfl/cpfl_rxtx.c | 175 +++++++++++++++++------- drivers/net/cpfl/cpfl_rxtx.h | 8 ++ drivers/net/cpfl/cpfl_rxtx_vec_common.h | 17 +-- 5 files changed, 196 insertions(+), 95 deletions(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 7528a14d05..e587155db6 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -124,7 +124,8 @@ static int cpfl_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct rte_eth_link new_link; unsigned int i; @@ -156,7 +157,8 @@ cpfl_dev_link_update(struct rte_eth_dev *dev, static int cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; dev_info->max_rx_queues = base->caps.max_rx_q; @@ -216,7 +218,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; /* mtu setting is forbidden if port is start */ if (dev->data->dev_started) { @@ -256,12 +259,12 @@ static uint64_t cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) { uint64_t mbuf_alloc_failed = 0; - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i = 0; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed, + cpfl_rxq = dev->data->rx_queues[i]; + mbuf_alloc_failed += __atomic_load_n(&cpfl_rxq->base.rx_stats.mbuf_alloc_failed, __ATOMIC_RELAXED); } @@ -271,8 +274,8 @@ cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) static int cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; int ret; @@ -305,20 +308,20 @@ cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) static void cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - __atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED); + cpfl_rxq = dev->data->rx_queues[i]; + __atomic_store_n(&cpfl_rxq->base.rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED); } } static int cpfl_dev_stats_reset(struct rte_eth_dev *dev) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; int ret; @@ -343,8 +346,8 @@ static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev) static int cpfl_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int 
n) { - struct idpf_vport *vport = - (struct idpf_vport *)dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct virtchnl2_vport_stats *pstats = NULL; unsigned int i; int ret; @@ -459,7 +462,8 @@ cpfl_rss_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t idx, shift; int ret = 0; @@ -498,7 +502,8 @@ cpfl_rss_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t idx, shift; int ret = 0; @@ -536,7 +541,8 @@ static int cpfl_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; int ret = 0; @@ -601,7 +607,8 @@ static int cpfl_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; int ret = 0; @@ -638,7 +645,8 @@ cpfl_rss_hash_conf_get(struct rte_eth_dev *dev, static int cpfl_dev_configure(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct rte_eth_conf *conf = &dev->data->dev_conf; struct idpf_adapter *base = vport->adapter; int ret; @@ -710,7 +718,8 @@ cpfl_dev_configure(struct rte_eth_dev *dev) static int cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; uint16_t nb_rx_queues = dev->data->nb_rx_queues; return idpf_vport_irq_map_config(vport, nb_rx_queues); @@ -719,14 +728,14 @@ cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) static int cpfl_start_queues(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; - struct idpf_tx_queue *txq; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; int err = 0; int i; for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL || txq->tx_deferred_start) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL || cpfl_txq->base.tx_deferred_start) continue; err = cpfl_tx_queue_start(dev, i); if (err != 0) { @@ -736,8 +745,8 @@ cpfl_start_queues(struct rte_eth_dev *dev) } for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - if (rxq == NULL || rxq->rx_deferred_start) + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq == NULL || cpfl_rxq->base.rx_deferred_start) continue; err = cpfl_rx_queue_start(dev, i); if (err != 0) { @@ -752,7 +761,8 @@ cpfl_start_queues(struct rte_eth_dev *dev) static int cpfl_dev_start(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = 
dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base); uint16_t num_allocated_vectors = base->caps.num_allocated_vectors; @@ -813,7 +823,8 @@ cpfl_dev_start(struct rte_eth_dev *dev) static int cpfl_dev_stop(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; if (dev->data->dev_started == 0) return 0; @@ -832,7 +843,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev) static int cpfl_dev_close(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); @@ -842,7 +854,7 @@ cpfl_dev_close(struct rte_eth_dev *dev) adapter->cur_vport_nb--; dev->data->dev_private = NULL; adapter->vports[vport->sw_idx] = NULL; - rte_free(vport); + rte_free(cpfl_vport); return 0; } @@ -1047,7 +1059,7 @@ cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id) int i; for (i = 0; i < adapter->cur_vport_nb; i++) { - vport = adapter->vports[i]; + vport = &adapter->vports[i]->base; if (vport->vport_id != vport_id) continue; else @@ -1275,7 +1287,8 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter) static int cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_vport_param *param = init_params; struct cpfl_adapter_ext *adapter = param->adapter; /* for sending create vport virtchnl msg prepare */ @@ -1300,7 +1313,7 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) goto err; } - adapter->vports[param->idx] = vport; + adapter->vports[param->idx] = cpfl_vport; adapter->cur_vports |= RTE_BIT32(param->devarg_id); adapter->cur_vport_nb++; @@ -1415,7 +1428,7 @@ cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, snprintf(name, sizeof(name), "cpfl_%s_vport_0", pci_dev->device.name); retval = rte_eth_dev_create(&pci_dev->device, name, - sizeof(struct idpf_vport), + sizeof(struct cpfl_vport), NULL, NULL, cpfl_dev_vport_init, &vport_param); if (retval != 0) @@ -1433,7 +1446,7 @@ cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, pci_dev->device.name, devargs.req_vports[i]); retval = rte_eth_dev_create(&pci_dev->device, name, - sizeof(struct idpf_vport), + sizeof(struct cpfl_vport), NULL, NULL, cpfl_dev_vport_init, &vport_param); if (retval != 0) diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 200dfcac02..81fe9ac4c3 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -69,13 +69,17 @@ struct cpfl_devargs { uint16_t req_vport_nb; }; +struct cpfl_vport { + struct idpf_vport base; +}; + struct cpfl_adapter_ext { TAILQ_ENTRY(cpfl_adapter_ext) next; struct idpf_adapter base; char name[CPFL_ADAPTER_NAME_LEN]; - struct idpf_vport **vports; + struct cpfl_vport **vports; uint16_t max_vport_nb; uint16_t cur_vports; /* bit mask of created vport */ diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 75021c3c54..04a51b8d15 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -128,7 +128,8 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev 
*dev, struct idpf_rx_queue *rxq, uint16_t nb_desc, unsigned int socket_id, struct rte_mempool *mp, uint8_t bufq_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct idpf_hw *hw = &base->hw; const struct rte_memzone *mz; @@ -220,15 +221,69 @@ cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq) rte_free(bufq); } +static void +cpfl_rx_queue_release(void *rxq) +{ + struct cpfl_rx_queue *cpfl_rxq = rxq; + struct idpf_rx_queue *q = NULL; + + if (cpfl_rxq == NULL) + return; + + q = &cpfl_rxq->base; + + /* Split queue */ + if (!q->adapter->is_rx_singleq) { + if (q->bufq2) + cpfl_rx_split_bufq_release(q->bufq2); + + if (q->bufq1) + cpfl_rx_split_bufq_release(q->bufq1); + + rte_free(cpfl_rxq); + return; + } + + /* Single queue */ + q->ops->release_mbufs(q); + rte_free(q->sw_ring); + rte_memzone_free(q->mz); + rte_free(cpfl_rxq); +} + +static void +cpfl_tx_queue_release(void *txq) +{ + struct cpfl_tx_queue *cpfl_txq = txq; + struct idpf_tx_queue *q = NULL; + + if (cpfl_txq == NULL) + return; + + q = &cpfl_txq->base; + + if (q->complq) { + rte_memzone_free(q->complq->mz); + rte_free(q->complq); + } + + q->ops->release_mbufs(q); + rte_free(q->sw_ring); + rte_memzone_free(q->mz); + rte_free(cpfl_txq); +} + int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; struct idpf_hw *hw = &base->hw; + struct cpfl_rx_queue *cpfl_rxq; const struct rte_memzone *mz; struct idpf_rx_queue *rxq; uint16_t rx_free_thresh; @@ -248,21 +303,23 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, /* Free memory if needed */ if (dev->data->rx_queues[queue_idx] != NULL) { - idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]); + cpfl_rx_queue_release(dev->data->rx_queues[queue_idx]); dev->data->rx_queues[queue_idx] = NULL; } /* Setup Rx queue */ - rxq = rte_zmalloc_socket("cpfl rxq", - sizeof(struct idpf_rx_queue), + cpfl_rxq = rte_zmalloc_socket("cpfl rxq", + sizeof(struct cpfl_rx_queue), RTE_CACHE_LINE_SIZE, socket_id); - if (rxq == NULL) { + if (cpfl_rxq == NULL) { PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure"); ret = -ENOMEM; goto err_rxq_alloc; } + rxq = &cpfl_rxq->base; + is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT); rxq->mp = mp; @@ -329,7 +386,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, } rxq->q_set = true; - dev->data->rx_queues[queue_idx] = rxq; + dev->data->rx_queues[queue_idx] = cpfl_rxq; return 0; @@ -349,7 +406,8 @@ cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; const struct rte_memzone *mz; struct idpf_tx_queue *cq; int ret; @@ -397,9 +455,11 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = 
dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct idpf_adapter *base = vport->adapter; uint16_t tx_rs_thresh, tx_free_thresh; + struct cpfl_tx_queue *cpfl_txq; struct idpf_hw *hw = &base->hw; const struct rte_memzone *mz; struct idpf_tx_queue *txq; @@ -419,21 +479,23 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, /* Free memory if needed. */ if (dev->data->tx_queues[queue_idx] != NULL) { - idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]); + cpfl_tx_queue_release(dev->data->tx_queues[queue_idx]); dev->data->tx_queues[queue_idx] = NULL; } /* Allocate the TX queue data structure. */ - txq = rte_zmalloc_socket("cpfl txq", - sizeof(struct idpf_tx_queue), + cpfl_txq = rte_zmalloc_socket("cpfl txq", + sizeof(struct cpfl_tx_queue), RTE_CACHE_LINE_SIZE, socket_id); - if (txq == NULL) { + if (cpfl_txq == NULL) { PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); ret = -ENOMEM; goto err_txq_alloc; } + txq = &cpfl_txq->base; + is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT); txq->nb_tx_desc = nb_desc; @@ -487,7 +549,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, queue_idx * vport->chunks_info.tx_qtail_spacing); txq->ops = &def_txq_ops; txq->q_set = true; - dev->data->tx_queues[queue_idx] = txq; + dev->data->tx_queues[queue_idx] = cpfl_txq; return 0; @@ -503,6 +565,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { + struct cpfl_rx_queue *cpfl_rxq; struct idpf_rx_queue *rxq; uint16_t max_pkt_len; uint32_t frame_size; @@ -511,7 +574,8 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) if (rx_queue_id >= dev->data->nb_rx_queues) return -EINVAL; - rxq = dev->data->rx_queues[rx_queue_id]; + cpfl_rxq = dev->data->rx_queues[rx_queue_id]; + rxq = &cpfl_rxq->base; if (rxq == NULL || !rxq->q_set) { PMD_DRV_LOG(ERR, "RX queue %u not available or setup", @@ -575,9 +639,10 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_rx_queue *rxq = - dev->data->rx_queues[rx_queue_id]; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq = dev->data->rx_queues[rx_queue_id]; + struct idpf_rx_queue *rxq = &cpfl_rxq->base; int err = 0; err = idpf_vc_rxq_config(vport, rxq); @@ -610,15 +675,15 @@ cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; if (tx_queue_id >= dev->data->nb_tx_queues) return -EINVAL; - txq = dev->data->tx_queues[tx_queue_id]; + cpfl_txq = dev->data->tx_queues[tx_queue_id]; /* Init the RX tail register. 
*/ - IDPF_PCI_REG_WRITE(txq->qtx_tail, 0); + IDPF_PCI_REG_WRITE(cpfl_txq->base.qtx_tail, 0); return 0; } @@ -626,12 +691,13 @@ cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id) int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_tx_queue *txq = + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_tx_queue *cpfl_txq = dev->data->tx_queues[tx_queue_id]; int err = 0; - err = idpf_vc_txq_config(vport, txq); + err = idpf_vc_txq_config(vport, &cpfl_txq->base); if (err != 0) { PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id); return err; @@ -650,7 +716,7 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on", tx_queue_id); } else { - txq->q_started = true; + cpfl_txq->base.q_started = true; dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; } @@ -661,13 +727,16 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq; struct idpf_rx_queue *rxq; int err; if (rx_queue_id >= dev->data->nb_rx_queues) return -EINVAL; + cpfl_rxq = dev->data->rx_queues[rx_queue_id]; err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", @@ -675,7 +744,7 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) return err; } - rxq = dev->data->rx_queues[rx_queue_id]; + rxq = &cpfl_rxq->base; rxq->q_started = false; if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { rxq->ops->release_mbufs(rxq); @@ -693,13 +762,17 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_tx_queue *cpfl_txq; struct idpf_tx_queue *txq; int err; if (tx_queue_id >= dev->data->nb_tx_queues) return -EINVAL; + cpfl_txq = dev->data->tx_queues[tx_queue_id]; + err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", @@ -707,7 +780,7 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return err; } - txq = dev->data->tx_queues[tx_queue_id]; + txq = &cpfl_txq->base; txq->q_started = false; txq->ops->release_mbufs(txq); if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { @@ -724,25 +797,25 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid) { - idpf_qc_rx_queue_release(dev->data->rx_queues[qid]); + cpfl_rx_queue_release(dev->data->rx_queues[qid]); } void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid) { - idpf_qc_tx_queue_release(dev->data->tx_queues[qid]); + cpfl_tx_queue_release(dev->data->tx_queues[qid]); } void cpfl_stop_queues(struct rte_eth_dev *dev) { - struct idpf_rx_queue *rxq; - struct idpf_tx_queue *txq; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; int i; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - if (rxq 
== NULL) + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq == NULL) continue; if (cpfl_rx_queue_stop(dev, i) != 0) @@ -750,8 +823,8 @@ cpfl_stop_queues(struct rte_eth_dev *dev) } for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL) continue; if (cpfl_tx_queue_stop(dev, i) != 0) @@ -762,9 +835,10 @@ cpfl_stop_queues(struct rte_eth_dev *dev) void cpfl_set_rx_function(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; #ifdef RTE_ARCH_X86 - struct idpf_rx_queue *rxq; + struct cpfl_rx_queue *cpfl_rxq; int i; if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH && @@ -790,8 +864,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - (void)idpf_qc_splitq_rx_vec_setup(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + (void)idpf_qc_splitq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT if (vport->rx_use_avx512) { @@ -810,8 +884,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) } else { if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - (void)idpf_qc_singleq_rx_vec_setup(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + (void)idpf_qc_singleq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT if (vport->rx_use_avx512) { @@ -860,10 +934,11 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) void cpfl_set_tx_function(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; #ifdef RTE_ARCH_X86 #ifdef CC_AVX512_SUPPORT - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; int i; #endif /* CC_AVX512_SUPPORT */ @@ -878,8 +953,8 @@ cpfl_set_tx_function(struct rte_eth_dev *dev) vport->tx_use_avx512 = true; if (vport->tx_use_avx512) { for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - idpf_qc_tx_vec_avx512_setup(txq); + cpfl_txq = dev->data->tx_queues[i]; + idpf_qc_tx_vec_avx512_setup(&cpfl_txq->base); } } } @@ -916,10 +991,10 @@ cpfl_set_tx_function(struct rte_eth_dev *dev) #ifdef CC_AVX512_SUPPORT if (vport->tx_use_avx512) { for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - if (txq == NULL) + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq == NULL) continue; - idpf_qc_tx_vec_avx512_setup(txq); + idpf_qc_tx_vec_avx512_setup(&cpfl_txq->base); } PMD_DRV_LOG(NOTICE, "Using Single AVX512 Vector Tx (port %d).", diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index fb267d38c8..bfb9ad97bd 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -23,6 +23,14 @@ #define CPFL_SUPPORT_CHAIN_NUM 5 +struct cpfl_rx_queue { + struct idpf_rx_queue base; +}; + +struct cpfl_tx_queue { + struct idpf_tx_queue base; +}; + int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h index 665418d27d..5690b17911 100644 --- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h +++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h @@ -76,15 +76,16 
@@ cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq) static inline int cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev) { - struct idpf_vport *vport = dev->data->dev_private; - struct idpf_rx_queue *rxq; + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct cpfl_rx_queue *cpfl_rxq; int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH; for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev->data->rx_queues[i]; - default_ret = cpfl_rx_vec_queue_default(rxq); + cpfl_rxq = dev->data->rx_queues[i]; + default_ret = cpfl_rx_vec_queue_default(&cpfl_rxq->base); if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { - splitq_ret = cpfl_rx_splitq_vec_default(rxq); + splitq_ret = cpfl_rx_splitq_vec_default(&cpfl_rxq->base); ret = splitq_ret && default_ret; } else { ret = default_ret; @@ -100,12 +101,12 @@ static inline int cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev) { int i; - struct idpf_tx_queue *txq; + struct cpfl_tx_queue *cpfl_txq; int ret = 0; for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev->data->tx_queues[i]; - ret = cpfl_tx_vec_queue_default(txq); + cpfl_txq = dev->data->tx_queues[i]; + ret = cpfl_tx_vec_queue_default(&cpfl_txq->base); if (ret == CPFL_SCALAR_PATH) return CPFL_SCALAR_PATH; } From patchwork Wed May 31 13:04:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127789 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4FB6842BF4; Wed, 31 May 2023 15:29:26 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B937A42D32; Wed, 31 May 2023 15:29:19 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id ECAAA410D0 for ; Wed, 31 May 2023 15:29:14 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539755; x=1717075755; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=82N8O95wtbEHqZYor1Qdr1XbDZ5VBYUiACvRUiM8voI=; b=N7VTcpCMunGNyjo+hjxyFfofydQ2YFHgoRyRsbGv++9fwBrtdVdBspVj MaGrk7gFO0dUZ73nXVSd2xvAUkH1c1qt1x1OZGxvik3IPz3Vn2dz2Qnv4 K1Ta7RVODEqvpfiW6YTS8UaOTCpKz+R8Ns9avxLAo/CIdb7za0aQG1q19 KNoheF4TAypBXWFoRRJYGv8dUPsSQhjeMY2GgLW6Np0H35c7xzxJo7UFx FEB1qLMLnJfhpo/mZ0YhWUlKzphYWfbi2rjuURrMyIPGugJbTjv2dVWaf Vq61CzqbVnUN9Doqg4xAZ4YU3XoUHEFzv1uJ58HY1Jp2YPOqF3e+Q3zCY g==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497849" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497849" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:14 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325514" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325514" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:13 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v7 02/14] common/idpf: support queue groups add/delete Date: Wed, 31 May 2023 13:04:38 +0000 Message-Id: 
<20230531130450.26380-3-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch adds queue group add/delete virtual channel support. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 66 ++++++++++++++++++++++ drivers/common/idpf/idpf_common_virtchnl.h | 9 +++ drivers/common/idpf/version.map | 2 + 3 files changed, 77 insertions(+) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index b713678634..a3fe55c897 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -359,6 +359,72 @@ idpf_vc_vport_destroy(struct idpf_vport *vport) return err; } +int +idpf_vc_queue_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *p2p_queue_grps_info, + uint8_t *p2p_queue_grps_out) +{ + struct idpf_adapter *adapter = vport->adapter; + struct idpf_cmd_info args; + int size, qg_info_size; + int err = -1; + + size = sizeof(*p2p_queue_grps_info) + + (p2p_queue_grps_info->qg_info.num_queue_groups - 1) * + sizeof(struct virtchnl2_queue_group_info); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_ADD_QUEUE_GROUPS; + args.in_args = (uint8_t *)p2p_queue_grps_info; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + if (err != 0) { + DRV_LOG(ERR, + "Failed to execute command of VIRTCHNL2_OP_ADD_QUEUE_GROUPS"); + return err; + } + + rte_memcpy(p2p_queue_grps_out, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE); + return 0; +} + +int idpf_vc_queue_grps_del(struct idpf_vport *vport, + uint16_t num_q_grps, + struct virtchnl2_queue_group_id *qg_ids) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_delete_queue_groups *vc_del_q_grps; + struct idpf_cmd_info args; + int size; + int err; + + size = sizeof(*vc_del_q_grps) + + (num_q_grps - 1) * sizeof(struct virtchnl2_queue_group_id); + vc_del_q_grps = rte_zmalloc("vc_del_q_grps", size, 0); + + vc_del_q_grps->vport_id = vport->vport_id; + vc_del_q_grps->num_queue_groups = num_q_grps; + memcpy(vc_del_q_grps->qg_ids, qg_ids, + num_q_grps * sizeof(struct virtchnl2_queue_group_id)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_DEL_QUEUE_GROUPS; + args.in_args = (uint8_t *)vc_del_q_grps; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DEL_QUEUE_GROUPS"); + + rte_free(vc_del_q_grps); + return err; +} + int idpf_vc_rss_key_set(struct idpf_vport *vport) { diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index c45295290e..58b16e1c5d 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -64,4 +64,13 @@ int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, __rte_internal int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 
*buff_count, struct idpf_dma_mem **buffs); +__rte_internal +int idpf_vc_queue_grps_del(struct idpf_vport *vport, + uint16_t num_q_grps, + struct virtchnl2_queue_group_id *qg_ids); +__rte_internal +int +idpf_vc_queue_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *ptp_queue_grps_info, + uint8_t *ptp_queue_grps_out); #endif /* _IDPF_COMMON_VIRTCHNL_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 70334a1b03..01d18f3f3f 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -43,6 +43,8 @@ INTERNAL { idpf_vc_irq_map_unmap_config; idpf_vc_one_msg_read; idpf_vc_ptype_info_query; + idpf_vc_queue_grps_add; + idpf_vc_queue_grps_del; idpf_vc_queue_switch; idpf_vc_queues_ena_dis; idpf_vc_rss_hash_get; From patchwork Wed May 31 13:04:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127790 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A15D342BF4; Wed, 31 May 2023 15:29:32 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DEC1E42D36; Wed, 31 May 2023 15:29:20 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id ABB3042BDA for ; Wed, 31 May 2023 15:29:16 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539756; x=1717075756; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=9l5RakS5ufJgfDHn5377KDzpGV4wppBz3vvs7m4f8Jk=; b=IpWW+CKn+bDWDyXnFeyW4Wcrd0kbEsPtN8P6wig7/1fpnZ0tFqYRoxkT QoNRbSDVSzBITb27uwZP8U2hW1UQVP8cK+wk+sBOhz05xiQwcQmfFTjcS HBOa7B9HbmNo7ixKQ7tl/kGsEJB+eubl87eszWp/Pf+1m+tr1z6z/X3dS EoIJSHdWaGyqP6OpFGXJCKUPDSsiYRSsJhQGjY+uJGAbgnMLd1UccXW1D hhVMkBxoSqgDgdObMzOj8yfySv++FAPlazUS71oZdejz18SjyNe0qKdR2 ASL2RV6f3sPYf+J3C6eremaVAzJUaFSY+EL2nksEdjVBLYXQ2vi1nXtx0 A==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497859" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497859" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:16 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325518" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325518" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:14 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v7 03/14] net/cpfl: add haipin queue group during vport init Date: Wed, 31 May 2023 13:04:39 +0000 Message-Id: <20230531130450.26380-4-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch adds haipin queue group 
during vport init. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 133 +++++++++++++++++++++++++++++++++ drivers/net/cpfl/cpfl_ethdev.h | 18 +++++ drivers/net/cpfl/cpfl_rxtx.h | 7 ++ 3 files changed, 158 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index e587155db6..c1273a7478 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -840,6 +840,20 @@ cpfl_dev_stop(struct rte_eth_dev *dev) return 0; } +static int +cpfl_p2p_queue_grps_del(struct idpf_vport *vport) +{ + struct virtchnl2_queue_group_id qg_ids[CPFL_P2P_NB_QUEUE_GRPS] = {0}; + int ret = 0; + + qg_ids[0].queue_group_id = CPFL_P2P_QUEUE_GRP_ID; + qg_ids[0].queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P; + ret = idpf_vc_queue_grps_del(vport, CPFL_P2P_NB_QUEUE_GRPS, qg_ids); + if (ret) + PMD_DRV_LOG(ERR, "Failed to delete p2p queue groups"); + return ret; +} + static int cpfl_dev_close(struct rte_eth_dev *dev) { @@ -848,7 +862,12 @@ cpfl_dev_close(struct rte_eth_dev *dev) struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); + + if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) + cpfl_p2p_queue_grps_del(vport); + idpf_vport_deinit(vport); + rte_free(cpfl_vport->p2p_q_chunks_info); adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id); adapter->cur_vport_nb--; @@ -1284,6 +1303,96 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter) return vport_idx; } +static int +cpfl_p2p_q_grps_add(struct idpf_vport *vport, + struct virtchnl2_add_queue_groups *p2p_queue_grps_info, + uint8_t *p2p_q_vc_out_info) +{ + int ret; + + p2p_queue_grps_info->vport_id = vport->vport_id; + p2p_queue_grps_info->qg_info.num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS; + p2p_queue_grps_info->qg_info.groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES; + p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ; + p2p_queue_grps_info->qg_info.groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES; + p2p_queue_grps_info->qg_info.groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ; + p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID; + p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P; + p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0; + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight = 0; + + ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info, p2p_q_vc_out_info); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to add p2p queue groups."); + return ret; + } + + return ret; +} + +static int +cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport, + struct virtchnl2_add_queue_groups *p2p_q_vc_out_info) +{ + struct p2p_queue_chunks_info *p2p_q_chunks_info = cpfl_vport->p2p_q_chunks_info; + struct virtchnl2_queue_reg_chunks *vc_chunks_out; + int i, type; + + if (p2p_q_vc_out_info->qg_info.groups[0].qg_id.queue_group_type != + VIRTCHNL2_QUEUE_GROUP_P2P) { + PMD_DRV_LOG(ERR, "Add queue group response mismatch."); + return -EINVAL; + } + + vc_chunks_out = &p2p_q_vc_out_info->qg_info.groups[0].chunks; + + for (i = 0; i < vc_chunks_out->num_chunks; i++) { + type = vc_chunks_out->chunks[i].type; + switch (type) { + case VIRTCHNL2_QUEUE_TYPE_TX: + p2p_q_chunks_info->tx_start_qid = + 
vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->tx_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->tx_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_RX: + p2p_q_chunks_info->rx_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->rx_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->rx_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + p2p_q_chunks_info->tx_compl_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->tx_compl_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->tx_compl_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + p2p_q_chunks_info->rx_buf_start_qid = + vc_chunks_out->chunks[i].start_queue_id; + p2p_q_chunks_info->rx_buf_qtail_start = + vc_chunks_out->chunks[i].qtail_reg_start; + p2p_q_chunks_info->rx_buf_qtail_spacing = + vc_chunks_out->chunks[i].qtail_reg_spacing; + break; + default: + PMD_DRV_LOG(ERR, "Unsupported queue type"); + break; + } + } + + return 0; +} + static int cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) { @@ -1293,6 +1402,8 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) struct cpfl_adapter_ext *adapter = param->adapter; /* for sending create vport virtchnl msg prepare */ struct virtchnl2_create_vport create_vport_info; + struct virtchnl2_add_queue_groups p2p_queue_grps_info; + uint8_t p2p_q_vc_out_info[IDPF_DFLT_MBX_BUF_SIZE] = {0}; int ret = 0; dev->dev_ops = &cpfl_eth_dev_ops; @@ -1327,6 +1438,28 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr, &dev->data->mac_addrs[0]); + if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) { + memset(&p2p_queue_grps_info, 0, sizeof(p2p_queue_grps_info)); + ret = cpfl_p2p_q_grps_add(vport, &p2p_queue_grps_info, p2p_q_vc_out_info); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to add p2p queue group."); + return 0; + } + cpfl_vport->p2p_q_chunks_info = rte_zmalloc(NULL, + sizeof(struct p2p_queue_chunks_info), 0); + if (cpfl_vport->p2p_q_chunks_info == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate p2p queue info."); + cpfl_p2p_queue_grps_del(vport); + return 0; + } + ret = cpfl_p2p_queue_info_init(cpfl_vport, + (struct virtchnl2_add_queue_groups *)p2p_q_vc_out_info); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to init p2p queue info."); + cpfl_p2p_queue_grps_del(vport); + } + } + return 0; err_mac_addrs: diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 81fe9ac4c3..666d46a44a 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -56,6 +56,7 @@ /* Device IDs */ #define IDPF_DEV_ID_CPF 0x1453 +#define VIRTCHNL2_QUEUE_GROUP_P2P 0x100 struct cpfl_vport_param { struct cpfl_adapter_ext *adapter; @@ -69,8 +70,25 @@ struct cpfl_devargs { uint16_t req_vport_nb; }; +struct p2p_queue_chunks_info { + uint32_t tx_start_qid; + uint32_t rx_start_qid; + uint32_t tx_compl_start_qid; + uint32_t rx_buf_start_qid; + + uint64_t tx_qtail_start; + uint32_t tx_qtail_spacing; + uint64_t rx_qtail_start; + uint32_t rx_qtail_spacing; + uint64_t tx_compl_qtail_start; + uint32_t tx_compl_qtail_spacing; + uint64_t rx_buf_qtail_start; + uint32_t rx_buf_qtail_spacing; +}; + struct cpfl_vport { struct idpf_vport base; + struct 
p2p_queue_chunks_info *p2p_q_chunks_info; }; struct cpfl_adapter_ext { diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index bfb9ad97bd..1fe65778f0 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -13,6 +13,13 @@ #define CPFL_MIN_RING_DESC 32 #define CPFL_MAX_RING_DESC 4096 #define CPFL_DMA_MEM_ALIGN 4096 + +#define CPFL_MAX_P2P_NB_QUEUES 16 +#define CPFL_P2P_NB_RX_BUFQ 1 +#define CPFL_P2P_NB_TX_COMPLQ 1 +#define CPFL_P2P_NB_QUEUE_GRPS 1 +#define CPFL_P2P_QUEUE_GRP_ID 1 + /* Base address of the HW descriptor ring should be 128B aligned. */ #define CPFL_RING_BASE_ALIGN 128 From patchwork Wed May 31 13:04:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127791 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AB5F742BF4; Wed, 31 May 2023 15:29:38 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1DCEA42D49; Wed, 31 May 2023 15:29:22 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 4560842D33 for ; Wed, 31 May 2023 15:29:18 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539758; x=1717075758; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0J3kCWXQyej1pfHQj5Q0eZprYBPaCR3/PoNyi/pb6ZU=; b=IiKqKdAJYIIsxOioUazAQgTJmllgwLMlaqnwDsSlLXRWXCEpUhlxz47t UJzdhFokS6WwLK1zwyXiqNABapdrKQ3V0OOzrMP8x5nI88eoBKR0ho9J0 2rjZid3gkw6MuPREDbxFI2qtTs4Bdk1FVd4PQAKk0/iKWbIG/OdOm4uEb FIoB49J3+6Vr9pXtt10B0FOvKXBJmhrKGUNCbPV6c/eTU21TtmBFl7sZH goZ9bXhG2WDEgd/5lLLFokf3k/ykITuuWew7p9P64VTT6C0zOSlbUFhpd ywoGz9ZoQ8EY1+QWgd8H7wnZlO/G4l0WrUdF7M+LTi3vY0qM3d4b6TIhA w==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497864" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497864" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:17 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325523" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325523" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:16 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v7 04/14] net/cpfl: support hairpin queue capbility get Date: Wed, 31 May 2023 13:04:40 +0000 Message-Id: <20230531130450.26380-5-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch adds hairpin_cap_get ops support. 
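As a usage illustration (not part of the patch), an application reaches the new hairpin_cap_get callback through the generic ethdev API; the sketch below only assumes a port bound to this PMD and prints whatever capability the driver reports.

#include <stdio.h>
#include <rte_ethdev.h>

/* Illustrative only: query and print the hairpin capability of a port. */
static int
show_hairpin_cap(uint16_t port_id)
{
	struct rte_eth_hairpin_cap cap;
	int ret;

	ret = rte_eth_dev_hairpin_capability_get(port_id, &cap);
	if (ret != 0)	/* e.g. -ENOTSUP when no P2P queue group was created */
		return ret;

	printf("port %u hairpin cap: queues=%u rx_2_tx=%u tx_2_rx=%u desc=%u\n",
	       port_id, cap.max_nb_queues, cap.max_rx_2_tx,
	       cap.max_tx_2_rx, cap.max_nb_desc);
	return 0;
}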
Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 18 ++++++++++++++++++ drivers/net/cpfl/cpfl_rxtx.h | 3 +++ 2 files changed, 21 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index c1273a7478..40b4515539 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -154,6 +154,23 @@ cpfl_dev_link_update(struct rte_eth_dev *dev, return rte_eth_linkstatus_set(dev, &new_link); } +static int +cpfl_hairpin_cap_get(struct rte_eth_dev *dev, + struct rte_eth_hairpin_cap *cap) +{ + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + + if (cpfl_vport->p2p_q_chunks_info == NULL) + return -ENOTSUP; + + cap->max_nb_queues = CPFL_MAX_P2P_NB_QUEUES; + cap->max_rx_2_tx = CPFL_MAX_HAIRPINQ_RX_2_TX; + cap->max_tx_2_rx = CPFL_MAX_HAIRPINQ_TX_2_RX; + cap->max_nb_desc = CPFL_MAX_HAIRPINQ_NB_DESC; + + return 0; +} + static int cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { @@ -904,6 +921,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .xstats_get = cpfl_dev_xstats_get, .xstats_get_names = cpfl_dev_xstats_get_names, .xstats_reset = cpfl_dev_xstats_reset, + .hairpin_cap_get = cpfl_hairpin_cap_get, }; static int diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index 1fe65778f0..a4a164d462 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -14,6 +14,9 @@ #define CPFL_MAX_RING_DESC 4096 #define CPFL_DMA_MEM_ALIGN 4096 +#define CPFL_MAX_HAIRPINQ_RX_2_TX 1 +#define CPFL_MAX_HAIRPINQ_TX_2_RX 1 +#define CPFL_MAX_HAIRPINQ_NB_DESC 1024 #define CPFL_MAX_P2P_NB_QUEUES 16 #define CPFL_P2P_NB_RX_BUFQ 1 #define CPFL_P2P_NB_TX_COMPLQ 1 From patchwork Wed May 31 13:04:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127792 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 49CE642BF4; Wed, 31 May 2023 15:29:45 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 560F542D4B; Wed, 31 May 2023 15:29:24 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 7689342C76 for ; Wed, 31 May 2023 15:29:20 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539760; x=1717075760; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=iBIY9rZGgZkeDH+ES+KN1u0xwVGl6PWzqaQXnPE9Vhk=; b=nfIJjthqjAfHVFt+chdYAJiXAESt+ta6LygxsBFnX5dfG1JDKPF/md9r y5XsWqlKJ5e0RNmXSSCzE5dHQ2cA12TlOstXicz/Rzt8fCPjxLJkxIyyJ U0jd+n+PQQqv0fIxJwzkcsCUpaSCrwbHGe05siwPDLOIfdbIOFTux4i6J PT9BEKxz1XO0+7dYXiGYkp6OlyT3qp8fMSspifbJvrsT5bZ0XjNFTjA2W xO7By3Tldt/MqAUUMP/bLr+4C7Z3ePcxGqmNkFYIGpZ0h4oHRC/207O41 2DmYVMkuAjFH6raGcj5ZYkKgn8KDOughkfKAPQpES5F2AVszrPgSb10kC g==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497877" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497877" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:20 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325530" 
X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325530" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:17 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v7 05/14] net/cpfl: support hairpin queue setup and release Date: Wed, 31 May 2023 13:04:41 +0000 Message-Id: <20230531130450.26380-6-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing Support hairpin Rx/Tx queue setup and release. Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 6 + drivers/net/cpfl/cpfl_ethdev.h | 11 + drivers/net/cpfl/cpfl_rxtx.c | 364 +++++++++++++++++++++++- drivers/net/cpfl/cpfl_rxtx.h | 36 +++ drivers/net/cpfl/cpfl_rxtx_vec_common.h | 4 + 5 files changed, 420 insertions(+), 1 deletion(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 40b4515539..b17c538ec2 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -879,6 +879,10 @@ cpfl_dev_close(struct rte_eth_dev *dev) struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter); cpfl_dev_stop(dev); + if (cpfl_vport->p2p_mp) { + rte_mempool_free(cpfl_vport->p2p_mp); + cpfl_vport->p2p_mp = NULL; + } if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) cpfl_p2p_queue_grps_del(vport); @@ -922,6 +926,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .xstats_get_names = cpfl_dev_xstats_get_names, .xstats_reset = cpfl_dev_xstats_reset, .hairpin_cap_get = cpfl_hairpin_cap_get, + .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup, + .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup, }; static int diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h index 666d46a44a..2e42354f70 100644 --- a/drivers/net/cpfl/cpfl_ethdev.h +++ b/drivers/net/cpfl/cpfl_ethdev.h @@ -89,6 +89,17 @@ struct p2p_queue_chunks_info { struct cpfl_vport { struct idpf_vport base; struct p2p_queue_chunks_info *p2p_q_chunks_info; + + struct rte_mempool *p2p_mp; + + uint16_t nb_data_rxq; + uint16_t nb_data_txq; + uint16_t nb_p2p_rxq; + uint16_t nb_p2p_txq; + + struct idpf_rx_queue *p2p_rx_bufq; + struct idpf_tx_queue *p2p_tx_complq; + bool p2p_manual_bind; }; struct cpfl_adapter_ext { diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 04a51b8d15..90b408d1f4 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -10,6 +10,67 @@ #include "cpfl_rxtx.h" #include "cpfl_rxtx_vec_common.h" +static inline void +cpfl_tx_hairpin_descq_reset(struct idpf_tx_queue *txq) +{ + uint32_t i, size; + + if (!txq) { + PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL"); + return; + } + + size = txq->nb_tx_desc * CPFL_P2P_DESC_LEN; + for (i = 0; i < size; i++) + ((volatile char *)txq->desc_ring)[i] = 0; +} + +static inline void +cpfl_tx_hairpin_complq_reset(struct idpf_tx_queue *cq) +{ + uint32_t i, size; + + if (!cq) { + PMD_DRV_LOG(DEBUG, "Pointer to complq is NULL"); + return; + } + + size = 
cq->nb_tx_desc * CPFL_P2P_DESC_LEN; + for (i = 0; i < size; i++) + ((volatile char *)cq->compl_ring)[i] = 0; +} + +static inline void +cpfl_rx_hairpin_descq_reset(struct idpf_rx_queue *rxq) +{ + uint16_t len; + uint32_t i; + + if (!rxq) + return; + + len = rxq->nb_rx_desc; + for (i = 0; i < len * CPFL_P2P_DESC_LEN; i++) + ((volatile char *)rxq->rx_ring)[i] = 0; +} + +static inline void +cpfl_rx_hairpin_bufq_reset(struct idpf_rx_queue *rxbq) +{ + uint16_t len; + uint32_t i; + + if (!rxbq) + return; + + len = rxbq->nb_rx_desc; + for (i = 0; i < len * CPFL_P2P_DESC_LEN; i++) + ((volatile char *)rxbq->rx_ring)[i] = 0; + + rxbq->bufq1 = NULL; + rxbq->bufq2 = NULL; +} + static uint64_t cpfl_rx_offload_convert(uint64_t offload) { @@ -234,7 +295,10 @@ cpfl_rx_queue_release(void *rxq) /* Split queue */ if (!q->adapter->is_rx_singleq) { - if (q->bufq2) + /* the mz is shared between Tx/Rx hairpin, let Rx_release + * free the buf, q->bufq1->mz and q->mz. + */ + if (!cpfl_rxq->hairpin_info.hairpin_q && q->bufq2) cpfl_rx_split_bufq_release(q->bufq2); if (q->bufq1) @@ -385,6 +449,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, } } + cpfl_vport->nb_data_rxq++; rxq->q_set = true; dev->data->rx_queues[queue_idx] = cpfl_rxq; @@ -548,6 +613,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start + queue_idx * vport->chunks_info.tx_qtail_spacing); txq->ops = &def_txq_ops; + cpfl_vport->nb_data_txq++; txq->q_set = true; dev->data->tx_queues[queue_idx] = cpfl_txq; @@ -562,6 +628,300 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, return ret; } +static int +cpfl_rx_hairpin_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq, + uint16_t logic_qid, uint16_t nb_desc) +{ + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter = vport->adapter; + struct rte_mempool *mp; + char pool_name[RTE_MEMPOOL_NAMESIZE]; + + mp = cpfl_vport->p2p_mp; + if (!mp) { + snprintf(pool_name, RTE_MEMPOOL_NAMESIZE, "p2p_mb_pool_%u", + dev->data->port_id); + mp = rte_pktmbuf_pool_create(pool_name, CPFL_P2P_NB_MBUF * CPFL_MAX_P2P_NB_QUEUES, + CPFL_P2P_CACHE_SIZE, 0, CPFL_P2P_MBUF_SIZE, + dev->device->numa_node); + if (!mp) { + PMD_INIT_LOG(ERR, "Failed to allocate mbuf pool for p2p"); + return -ENOMEM; + } + cpfl_vport->p2p_mp = mp; + } + + bufq->mp = mp; + bufq->nb_rx_desc = nb_desc; + bufq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_buf_start_qid, + logic_qid); + bufq->port_id = dev->data->port_id; + bufq->adapter = adapter; + bufq->rx_buf_len = CPFL_P2P_MBUF_SIZE - RTE_PKTMBUF_HEADROOM; + + bufq->q_set = true; + bufq->ops = &def_rxq_ops; + + return 0; +} + +int +cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf) +{ + struct cpfl_vport *cpfl_vport = (struct cpfl_vport *)dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter_base = vport->adapter; + uint16_t logic_qid = cpfl_vport->nb_p2p_rxq; + struct cpfl_rxq_hairpin_info *hairpin_info; + struct cpfl_rx_queue *cpfl_rxq; + struct idpf_rx_queue *bufq1 = NULL; + struct idpf_rx_queue *rxq; + uint16_t peer_port, peer_q; + uint16_t qid; + int ret; + + if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { + PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin queue."); + return -EINVAL; + } + + if 
(conf->peer_count != 1) { + PMD_INIT_LOG(ERR, "Can't support Rx hairpin queue peer count %d", conf->peer_count); + return -EINVAL; + } + + peer_port = conf->peers[0].port; + peer_q = conf->peers[0].queue; + + if (nb_desc % CPFL_ALIGN_RING_DESC != 0 || + nb_desc > CPFL_MAX_RING_DESC || + nb_desc < CPFL_MIN_RING_DESC) { + PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is invalid", nb_desc); + return -EINVAL; + } + + /* Free memory if needed */ + if (dev->data->rx_queues[queue_idx]) { + cpfl_rx_queue_release(dev->data->rx_queues[queue_idx]); + dev->data->rx_queues[queue_idx] = NULL; + } + + /* Setup Rx description queue */ + cpfl_rxq = rte_zmalloc_socket("cpfl hairpin rxq", + sizeof(struct cpfl_rx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!cpfl_rxq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure"); + return -ENOMEM; + } + + rxq = &cpfl_rxq->base; + hairpin_info = &cpfl_rxq->hairpin_info; + rxq->nb_rx_desc = nb_desc * 2; + rxq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_start_qid, logic_qid); + rxq->port_id = dev->data->port_id; + rxq->adapter = adapter_base; + rxq->rx_buf_len = CPFL_P2P_MBUF_SIZE - RTE_PKTMBUF_HEADROOM; + hairpin_info->hairpin_q = true; + hairpin_info->peer_txp = peer_port; + hairpin_info->peer_txq_id = peer_q; + + if (conf->manual_bind != 0) + cpfl_vport->p2p_manual_bind = true; + else + cpfl_vport->p2p_manual_bind = false; + + if (cpfl_vport->p2p_rx_bufq == NULL) { + bufq1 = rte_zmalloc_socket("hairpin rx bufq1", + sizeof(struct idpf_rx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!bufq1) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for hairpin Rx buffer queue 1."); + ret = -ENOMEM; + goto err_alloc_bufq1; + } + qid = 2 * logic_qid; + ret = cpfl_rx_hairpin_bufq_setup(dev, bufq1, qid, nb_desc); + if (ret) { + PMD_INIT_LOG(ERR, "Failed to setup hairpin Rx buffer queue 1"); + ret = -EINVAL; + goto err_setup_bufq1; + } + cpfl_vport->p2p_rx_bufq = bufq1; + } + + rxq->bufq1 = cpfl_vport->p2p_rx_bufq; + rxq->bufq2 = NULL; + + cpfl_vport->nb_p2p_rxq++; + rxq->q_set = true; + dev->data->rx_queues[queue_idx] = cpfl_rxq; + + return 0; + +err_setup_bufq1: + rte_mempool_free(cpfl_vport->p2p_mp); + rte_free(bufq1); +err_alloc_bufq1: + rte_free(cpfl_rxq); + + return ret; +} + +int +cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf) +{ + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; + + struct idpf_vport *vport = &cpfl_vport->base; + struct idpf_adapter *adapter_base = vport->adapter; + uint16_t logic_qid = cpfl_vport->nb_p2p_txq; + struct cpfl_txq_hairpin_info *hairpin_info; + struct idpf_hw *hw = &adapter_base->hw; + struct cpfl_tx_queue *cpfl_txq; + struct idpf_tx_queue *txq, *cq; + const struct rte_memzone *mz; + uint32_t ring_size; + uint16_t peer_port, peer_q; + int ret; + + if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { + PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin queue."); + return -EINVAL; + } + + if (conf->peer_count != 1) { + PMD_INIT_LOG(ERR, "Can't support Tx hairpin queue peer count %d", conf->peer_count); + return -EINVAL; + } + + peer_port = conf->peers[0].port; + peer_q = conf->peers[0].queue; + + if (nb_desc % CPFL_ALIGN_RING_DESC != 0 || + nb_desc > CPFL_MAX_RING_DESC || + nb_desc < CPFL_MIN_RING_DESC) { + PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is invalid", + nb_desc); + return -EINVAL; + } + + /* Free memory if needed. 
*/ + if (dev->data->tx_queues[queue_idx]) { + cpfl_tx_queue_release(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; + } + + /* Allocate the TX queue data structure. */ + cpfl_txq = rte_zmalloc_socket("cpfl hairpin txq", + sizeof(struct cpfl_tx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!cpfl_txq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); + return -ENOMEM; + } + + txq = &cpfl_txq->base; + hairpin_info = &cpfl_txq->hairpin_info; + /* Txq ring length should be 2 times of Tx completion queue size. */ + txq->nb_tx_desc = nb_desc * 2; + txq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->tx_start_qid, logic_qid); + txq->port_id = dev->data->port_id; + hairpin_info->hairpin_q = true; + hairpin_info->peer_rxp = peer_port; + hairpin_info->peer_rxq_id = peer_q; + + if (conf->manual_bind != 0) + cpfl_vport->p2p_manual_bind = true; + else + cpfl_vport->p2p_manual_bind = false; + + /* Always Tx hairpin queue allocates Tx HW ring */ + ring_size = RTE_ALIGN(txq->nb_tx_desc * CPFL_P2P_DESC_LEN, + CPFL_DMA_MEM_ALIGN); + mz = rte_eth_dma_zone_reserve(dev, "hairpin_tx_ring", logic_qid, + ring_size + CPFL_P2P_RING_BUF, + CPFL_RING_BASE_ALIGN, + dev->device->numa_node); + if (!mz) { + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX"); + ret = -ENOMEM; + goto err_txq_mz_rsv; + } + + txq->tx_ring_phys_addr = mz->iova; + txq->desc_ring = mz->addr; + txq->mz = mz; + + cpfl_tx_hairpin_descq_reset(txq); + txq->qtx_tail = hw->hw_addr + + cpfl_hw_qtail_get(cpfl_vport->p2p_q_chunks_info->tx_qtail_start, + logic_qid, cpfl_vport->p2p_q_chunks_info->tx_qtail_spacing); + txq->ops = &def_txq_ops; + + if (cpfl_vport->p2p_tx_complq == NULL) { + cq = rte_zmalloc_socket("cpfl hairpin cq", + sizeof(struct idpf_tx_queue), + RTE_CACHE_LINE_SIZE, + dev->device->numa_node); + if (!cq) { + PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure"); + ret = -ENOMEM; + goto err_cq_alloc; + } + + cq->nb_tx_desc = nb_desc; + cq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->tx_compl_start_qid, + 0); + cq->port_id = dev->data->port_id; + + /* Tx completion queue always allocates the HW ring */ + ring_size = RTE_ALIGN(cq->nb_tx_desc * CPFL_P2P_DESC_LEN, + CPFL_DMA_MEM_ALIGN); + mz = rte_eth_dma_zone_reserve(dev, "hairpin_tx_compl_ring", logic_qid, + ring_size + CPFL_P2P_RING_BUF, + CPFL_RING_BASE_ALIGN, + dev->device->numa_node); + if (!mz) { + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX completion queue"); + ret = -ENOMEM; + goto err_cq_mz_rsv; + } + cq->tx_ring_phys_addr = mz->iova; + cq->compl_ring = mz->addr; + cq->mz = mz; + + cpfl_tx_hairpin_complq_reset(cq); + cpfl_vport->p2p_tx_complq = cq; + } + + txq->complq = cpfl_vport->p2p_tx_complq; + + cpfl_vport->nb_p2p_txq++; + txq->q_set = true; + dev->data->tx_queues[queue_idx] = cpfl_txq; + + return 0; + +err_cq_mz_rsv: + rte_free(cq); +err_cq_alloc: + cpfl_dma_zone_release(mz); +err_txq_mz_rsv: + rte_free(cpfl_txq); + return ret; +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { @@ -865,6 +1225,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev) if (vport->rx_vec_allowed) { for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q) + continue; (void)idpf_qc_splitq_rx_vec_setup(&cpfl_rxq->base); } #ifdef CC_AVX512_SUPPORT diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index a4a164d462..06198d4aad 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ 
b/drivers/net/cpfl/cpfl_rxtx.h @@ -22,6 +22,11 @@ #define CPFL_P2P_NB_TX_COMPLQ 1 #define CPFL_P2P_NB_QUEUE_GRPS 1 #define CPFL_P2P_QUEUE_GRP_ID 1 +#define CPFL_P2P_DESC_LEN 16 +#define CPFL_P2P_NB_MBUF 4096 +#define CPFL_P2P_CACHE_SIZE 250 +#define CPFL_P2P_MBUF_SIZE 2048 +#define CPFL_P2P_RING_BUF 128 /* Base address of the HW descriptor ring should be 128B aligned. */ #define CPFL_RING_BASE_ALIGN 128 @@ -33,14 +38,40 @@ #define CPFL_SUPPORT_CHAIN_NUM 5 +struct cpfl_rxq_hairpin_info { + bool hairpin_q; /* if rx queue is a hairpin queue */ + uint16_t peer_txp; + uint16_t peer_txq_id; +}; + struct cpfl_rx_queue { struct idpf_rx_queue base; + struct cpfl_rxq_hairpin_info hairpin_info; +}; + +struct cpfl_txq_hairpin_info { + bool hairpin_q; /* if tx queue is a hairpin queue */ + uint16_t peer_rxp; + uint16_t peer_rxq_id; }; struct cpfl_tx_queue { struct idpf_tx_queue base; + struct cpfl_txq_hairpin_info hairpin_info; }; +static inline uint16_t +cpfl_hw_qid_get(uint16_t start_qid, uint16_t offset) +{ + return start_qid + offset; +} + +static inline uint64_t +cpfl_hw_qtail_get(uint64_t tail_start, uint16_t offset, uint64_t tail_spacing) +{ + return tail_start + offset * tail_spacing; +} + int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); @@ -59,4 +90,9 @@ void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid); void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid); void cpfl_set_rx_function(struct rte_eth_dev *dev); void cpfl_set_tx_function(struct rte_eth_dev *dev); +int cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, const struct rte_eth_hairpin_conf *conf); +int cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, + uint16_t nb_desc, + const struct rte_eth_hairpin_conf *conf); #endif /* _CPFL_RXTX_H_ */ diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h index 5690b17911..d8e9191196 100644 --- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h +++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h @@ -85,6 +85,8 @@ cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev) cpfl_rxq = dev->data->rx_queues[i]; default_ret = cpfl_rx_vec_queue_default(&cpfl_rxq->base); if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) { + if (cpfl_rxq->hairpin_info.hairpin_q) + continue; splitq_ret = cpfl_rx_splitq_vec_default(&cpfl_rxq->base); ret = splitq_ret && default_ret; } else { @@ -106,6 +108,8 @@ cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev) for (i = 0; i < dev->data->nb_tx_queues; i++) { cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq->hairpin_info.hairpin_q) + continue; ret = cpfl_tx_vec_queue_default(&cpfl_txq->base); if (ret == CPFL_SCALAR_PATH) return CPFL_SCALAR_PATH; From patchwork Wed May 31 13:04:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127793 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D052042BF4; Wed, 31 May 2023 15:29:51 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6F73142D50; Wed, 31 May 2023 15:29:25 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org 
(Postfix) with ESMTP id 3012E42D4B for ; Wed, 31 May 2023 15:29:22 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539762; x=1717075762; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=CvMN8NOITqEoPij/b8a7zIGb2DvQk1boRsk9Q2X9GsI=; b=Z4rt4Lj2jL3+mx4HsnXlwnac0xvP1jn9qH3WHh7L0AETWQom8AKH0KLq n1ZszAcHM3Th3GZx3b0jwKlaXX/NrI3cxCHuhN6H3FC1HO31zAM1UIRoP 9J18gCPhjk2MHbu+scil1TPZcFp49OdwyJ0JphW2DBFrK+5tTSZPYZFI1 VnZFOuHfFIaXitjzMBrjj9dqaLd6UPbPPvnhJaTMWjpoecNe+EMPykbNg h5TA0+YOYryE4kclVvhzX7ak6UvrflsWNlub5rzvCMFP5x3BEVLYkfMP5 6XXqX1OaXAsqpOlwwZf6NZx7NRVF3v3dUPUOlNNbe92XqmaYKvRbGrtXE g==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497888" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497888" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:21 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325534" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325534" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:20 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v7 06/14] common/idpf: add queue config API Date: Wed, 31 May 2023 13:04:42 +0000 Message-Id: <20230531130450.26380-7-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports Rx/Tx queue configuration APIs. 
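As a quick illustration of the new API, the sketch below shows how a caller could configure a single Rx buffer queue by filling a virtchnl2_rxq_info and handing it to idpf_vc_rxq_config_by_info(). The helper name example_cfg_rx_bufq and the field values are hypothetical; the real users of this API are the cpfl hairpin config routines added later in this series.

/* Hypothetical caller: configure one Rx buffer queue through the new
 * "by info" API instead of passing a whole idpf_rx_queue.
 */
static int
example_cfg_rx_bufq(struct idpf_vport *vport, uint32_t hw_qid,
		    uint16_t nb_desc, uint64_t ring_dma, uint16_t buf_len)
{
	struct virtchnl2_rxq_info rxq_info[1] = {0};

	rxq_info[0].type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
	rxq_info[0].queue_id = hw_qid;
	rxq_info[0].ring_len = nb_desc;
	rxq_info[0].dma_ring_addr = ring_dma;
	rxq_info[0].data_buffer_size = buf_len;
	rxq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
	rxq_info[0].desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;

	/* One virtchnl message configures all num_qs entries in one shot. */
	return idpf_vc_rxq_config_by_info(vport, rxq_info, 1);
}
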
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 70 ++++++++++++++++++++++ drivers/common/idpf/idpf_common_virtchnl.h | 6 ++ drivers/common/idpf/version.map | 2 + 3 files changed, 78 insertions(+) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index a3fe55c897..211b44a88e 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -1050,6 +1050,41 @@ idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq) return err; } +int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info, + uint16_t num_qs) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_config_rx_queues *vc_rxqs = NULL; + struct idpf_cmd_info args; + int size, err, i; + + size = sizeof(*vc_rxqs) + (num_qs - 1) * + sizeof(struct virtchnl2_rxq_info); + vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0); + if (vc_rxqs == NULL) { + DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues"); + err = -ENOMEM; + return err; + } + vc_rxqs->vport_id = vport->vport_id; + vc_rxqs->num_qinfo = num_qs; + memcpy(vc_rxqs->qinfo, rxq_info, num_qs * sizeof(struct virtchnl2_rxq_info)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES; + args.in_args = (uint8_t *)vc_rxqs; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + rte_free(vc_rxqs); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES"); + + return err; +} + int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq) { @@ -1121,6 +1156,41 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq) return err; } +int +idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, + uint16_t num_qs) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_config_tx_queues *vc_txqs = NULL; + struct idpf_cmd_info args; + int size, err; + + size = sizeof(*vc_txqs) + (num_qs - 1) * sizeof(struct virtchnl2_txq_info); + vc_txqs = rte_zmalloc("cfg_txqs", size, 0); + if (vc_txqs == NULL) { + DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues"); + err = -ENOMEM; + return err; + } + vc_txqs->vport_id = vport->vport_id; + vc_txqs->num_qinfo = num_qs; + memcpy(vc_txqs->qinfo, txq_info, num_qs * sizeof(struct virtchnl2_txq_info)); + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES; + args.in_args = (uint8_t *)vc_txqs; + args.in_args_size = size; + args.out_buffer = adapter->mbx_resp; + args.out_size = IDPF_DFLT_MBX_BUF_SIZE; + + err = idpf_vc_cmd_execute(adapter, &args); + rte_free(vc_txqs); + if (err != 0) + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES"); + + return err; +} + int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, struct idpf_ctlq_msg *q_msg) diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index 58b16e1c5d..db83761a5e 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -65,6 +65,12 @@ __rte_internal int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 *buff_count, struct idpf_dma_mem **buffs); __rte_internal +int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info, + uint16_t 
num_qs); +__rte_internal +int idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, + uint16_t num_qs); +__rte_internal int idpf_vc_queue_grps_del(struct idpf_vport *vport, uint16_t num_q_grps, struct virtchnl2_queue_group_id *qg_ids); diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 01d18f3f3f..17e77884ce 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -54,8 +54,10 @@ INTERNAL { idpf_vc_rss_lut_get; idpf_vc_rss_lut_set; idpf_vc_rxq_config; + idpf_vc_rxq_config_by_info; idpf_vc_stats_query; idpf_vc_txq_config; + idpf_vc_txq_config_by_info; idpf_vc_vectors_alloc; idpf_vc_vectors_dealloc; idpf_vc_vport_create; From patchwork Wed May 31 13:04:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127794 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 757C442BF4; Wed, 31 May 2023 15:29:59 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EAAF842D64; Wed, 31 May 2023 15:29:26 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 1B58A42D0C for ; Wed, 31 May 2023 15:29:23 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539764; x=1717075764; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/VNQepTd1nenXVqfSsTKtCJV/ZVYKFv5oqROR8nczLM=; b=lsuNiEWZuqrsOm3O7H5l5xklOhVYf2hDZPvTC1q5Yc8ejksf1p5Mxg4h w7eOtCYxZgtB5QWUnngDol9XQqywEhCFqd5jQB18koVnGGD0LLK/Z64WP 7CYH6r73lVWuOOjfHPJeMZ0JN0n/JhOlONHsGPhEp1KzRP68zeFurA7vD wvQhgtJjrY1ofN6JlhBRxJVIQ6gso4saRpcM/ddYIAoEBe68dxOemmEVD H2QdQHBASqpA1atMf6ANDZyceGCUOa2V/sLfgnpQtvoikghDUb8Vot+Kt xGRljclo0dXWHGj1Mdenj8DgZNKdYRvYvPXwse+UOFKbnWKAdrmmN/ZqG A==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497896" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497896" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:23 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325538" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325538" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:21 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v7 07/14] net/cpfl: support hairpin queue configuration Date: Wed, 31 May 2023 13:04:43 +0000 Message-Id: <20230531130450.26380-8-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports Rx/Tx hairpin queue configuration. 
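For non-manually-bound hairpin queues these config helpers are called at device start; a simplified sketch of that flow is below (error messages, the peer-info update and the memory-zone binding of the Rx side are trimmed, so this is an outline rather than the exact cpfl_start_queues() logic):

/* Outline of the auto-bind hairpin configuration order at dev_start time. */
static int
example_hairpin_config(struct rte_eth_dev *dev, struct cpfl_vport *cpfl_vport)
{
	struct idpf_vport *vport = &cpfl_vport->base;
	struct cpfl_tx_queue *cpfl_txq;
	struct cpfl_rx_queue *cpfl_rxq;
	int i, err;

	/* 1. Per-queue hairpin Tx queues, then the shared Tx completion queue. */
	for (i = cpfl_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) {
		cpfl_txq = dev->data->tx_queues[i];
		err = cpfl_hairpin_txq_config(vport, cpfl_txq);
		if (err != 0)
			return err;
	}
	err = cpfl_hairpin_tx_complq_config(cpfl_vport);
	if (err != 0)
		return err;

	/* 2. Shared Rx buffer queue, then the per-queue hairpin Rx queues. */
	err = cpfl_hairpin_rx_bufq_config(cpfl_vport);
	if (err != 0)
		return err;
	for (i = cpfl_vport->nb_data_rxq; i < dev->data->nb_rx_queues; i++) {
		cpfl_rxq = dev->data->rx_queues[i];
		err = cpfl_hairpin_rxq_config(vport, cpfl_rxq);
		if (err != 0)
			return err;
	}

	return 0;
}
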
Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 136 +++++++++++++++++++++++++++++++-- drivers/net/cpfl/cpfl_rxtx.c | 80 +++++++++++++++++++ drivers/net/cpfl/cpfl_rxtx.h | 7 ++ 3 files changed, 217 insertions(+), 6 deletions(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index b17c538ec2..a06def06d0 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -742,33 +742,157 @@ cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) return idpf_vport_irq_map_config(vport, nb_rx_queues); } +/* Update hairpin_info for dev's tx hairpin queue */ +static int +cpfl_txq_hairpin_info_update(struct rte_eth_dev *dev, uint16_t rx_port) +{ + struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private; + struct rte_eth_dev *peer_dev = &rte_eth_devices[rx_port]; + struct cpfl_vport *cpfl_rx_vport = peer_dev->data->dev_private; + struct cpfl_txq_hairpin_info *hairpin_info; + struct cpfl_tx_queue *cpfl_txq; + int i; + + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + hairpin_info = &cpfl_txq->hairpin_info; + if (hairpin_info->peer_rxp != rx_port) { + PMD_DRV_LOG(ERR, "port %d is not the peer port", rx_port); + return -EINVAL; + } + hairpin_info->peer_rxq_id = + cpfl_hw_qid_get(cpfl_rx_vport->p2p_q_chunks_info->rx_start_qid, + hairpin_info->peer_rxq_id - cpfl_rx_vport->nb_data_rxq); + } + + return 0; +} + +/* Bind Rx hairpin queue's memory zone to peer Tx hairpin queue's memory zone */ +static void +cpfl_rxq_hairpin_mz_bind(struct rte_eth_dev *dev) +{ + struct cpfl_vport *cpfl_rx_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_rx_vport->base; + struct idpf_adapter *adapter = vport->adapter; + struct idpf_hw *hw = &adapter->hw; + struct cpfl_rx_queue *cpfl_rxq; + struct cpfl_tx_queue *cpfl_txq; + struct rte_eth_dev *peer_dev; + const struct rte_memzone *mz; + uint16_t peer_tx_port; + uint16_t peer_tx_qid; + int i; + + for (i = cpfl_rx_vport->nb_data_rxq; i < dev->data->nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + peer_tx_port = cpfl_rxq->hairpin_info.peer_txp; + peer_tx_qid = cpfl_rxq->hairpin_info.peer_txq_id; + peer_dev = &rte_eth_devices[peer_tx_port]; + cpfl_txq = peer_dev->data->tx_queues[peer_tx_qid]; + + /* bind rx queue */ + mz = cpfl_txq->base.mz; + cpfl_rxq->base.rx_ring_phys_addr = mz->iova; + cpfl_rxq->base.rx_ring = mz->addr; + cpfl_rxq->base.mz = mz; + + /* bind rx buffer queue */ + mz = cpfl_txq->base.complq->mz; + cpfl_rxq->base.bufq1->rx_ring_phys_addr = mz->iova; + cpfl_rxq->base.bufq1->rx_ring = mz->addr; + cpfl_rxq->base.bufq1->mz = mz; + cpfl_rxq->base.bufq1->qrx_tail = hw->hw_addr + + cpfl_hw_qtail_get(cpfl_rx_vport->p2p_q_chunks_info->rx_buf_qtail_start, + 0, cpfl_rx_vport->p2p_q_chunks_info->rx_buf_qtail_spacing); + } +} + static int cpfl_start_queues(struct rte_eth_dev *dev) { + struct cpfl_vport *cpfl_vport = dev->data->dev_private; + struct idpf_vport *vport = &cpfl_vport->base; struct cpfl_rx_queue *cpfl_rxq; struct cpfl_tx_queue *cpfl_txq; + int update_flag = 0; int err = 0; int i; + /* For normal data queues, configure, init and enale Txq. + * For non-manual bind hairpin queues, configure Txq. 
+ */ for (i = 0; i < dev->data->nb_tx_queues; i++) { cpfl_txq = dev->data->tx_queues[i]; if (cpfl_txq == NULL || cpfl_txq->base.tx_deferred_start) continue; - err = cpfl_tx_queue_start(dev, i); + if (!cpfl_txq->hairpin_info.hairpin_q) { + err = cpfl_tx_queue_start(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i); + return err; + } + } else if (!cpfl_vport->p2p_manual_bind) { + if (update_flag == 0) { + err = cpfl_txq_hairpin_info_update(dev, + cpfl_txq->hairpin_info.peer_rxp); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to update Tx hairpin queue info"); + return err; + } + update_flag = 1; + } + err = cpfl_hairpin_txq_config(vport, cpfl_txq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u", i); + return err; + } + } + } + + /* For non-manual bind hairpin queues, configure Tx completion queue first.*/ + if (!cpfl_vport->p2p_manual_bind && cpfl_vport->p2p_tx_complq != NULL) { + err = cpfl_hairpin_tx_complq_config(cpfl_vport); if (err != 0) { - PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i); + PMD_DRV_LOG(ERR, "Fail to config Tx completion queue"); return err; } } + /* For non-manual bind hairpin queues, configure Rx buffer queue.*/ + if (!cpfl_vport->p2p_manual_bind && cpfl_vport->p2p_rx_bufq != NULL) { + cpfl_rxq_hairpin_mz_bind(dev); + err = cpfl_hairpin_rx_bufq_config(cpfl_vport); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to config Rx buffer queue"); + return err; + } + } + + /* For normal data queues, configure, init and enale Rxq. + * For non-manual bind hairpin queues, configure Rxq, and then init Rxq. + */ for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; if (cpfl_rxq == NULL || cpfl_rxq->base.rx_deferred_start) continue; - err = cpfl_rx_queue_start(dev, i); - if (err != 0) { - PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i); - return err; + if (!cpfl_rxq->hairpin_info.hairpin_q) { + err = cpfl_rx_queue_start(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i); + return err; + } + } else if (!cpfl_vport->p2p_manual_bind) { + err = cpfl_hairpin_rxq_config(vport, cpfl_rxq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Rx queue %u", i); + return err; + } + err = cpfl_rx_queue_init(dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to init hairpin Rx queue %u", i); + return err; + } } } diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 90b408d1f4..9408c6e1a4 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -922,6 +922,86 @@ cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, return ret; } +int +cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport) +{ + struct idpf_rx_queue *rx_bufq = cpfl_vport->p2p_rx_bufq; + struct virtchnl2_rxq_info rxq_info[1] = {0}; + + rxq_info[0].type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; + rxq_info[0].queue_id = rx_bufq->queue_id; + rxq_info[0].ring_len = rx_bufq->nb_rx_desc; + rxq_info[0].dma_ring_addr = rx_bufq->rx_ring_phys_addr; + rxq_info[0].desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M; + rxq_info[0].rx_buffer_low_watermark = CPFL_RXBUF_LOW_WATERMARK; + rxq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + rxq_info[0].data_buffer_size = rx_bufq->rx_buf_len; + rxq_info[0].buffer_notif_stride = CPFL_RX_BUF_STRIDE; + + return idpf_vc_rxq_config_by_info(&cpfl_vport->base, rxq_info, 1); +} + +int +cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq) +{ + struct virtchnl2_rxq_info rxq_info[1] = {0}; + struct idpf_rx_queue 
*rxq = &cpfl_rxq->base; + + rxq_info[0].type = VIRTCHNL2_QUEUE_TYPE_RX; + rxq_info[0].queue_id = rxq->queue_id; + rxq_info[0].ring_len = rxq->nb_rx_desc; + rxq_info[0].dma_ring_addr = rxq->rx_ring_phys_addr; + rxq_info[0].rx_bufq1_id = rxq->bufq1->queue_id; + rxq_info[0].max_pkt_size = vport->max_pkt_len; + rxq_info[0].desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M; + rxq_info[0].qflags |= VIRTCHNL2_RX_DESC_SIZE_16BYTE; + + rxq_info[0].data_buffer_size = rxq->rx_buf_len; + rxq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + rxq_info[0].rx_buffer_low_watermark = CPFL_RXBUF_LOW_WATERMARK; + + PMD_DRV_LOG(NOTICE, "hairpin: vport %u, Rxq id 0x%x", + vport->vport_id, rxq_info[0].queue_id); + + return idpf_vc_rxq_config_by_info(vport, rxq_info, 1); +} + +int +cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport) +{ + struct idpf_tx_queue *tx_complq = cpfl_vport->p2p_tx_complq; + struct virtchnl2_txq_info txq_info[1] = {0}; + + txq_info[0].dma_ring_addr = tx_complq->tx_ring_phys_addr; + txq_info[0].type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + txq_info[0].queue_id = tx_complq->queue_id; + txq_info[0].ring_len = tx_complq->nb_tx_desc; + txq_info[0].peer_rx_queue_id = cpfl_vport->p2p_rx_bufq->queue_id; + txq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + txq_info[0].sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW; + + return idpf_vc_txq_config_by_info(&cpfl_vport->base, txq_info, 1); +} + +int +cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq) +{ + struct idpf_tx_queue *txq = &cpfl_txq->base; + struct virtchnl2_txq_info txq_info[1] = {0}; + + txq_info[0].dma_ring_addr = txq->tx_ring_phys_addr; + txq_info[0].type = VIRTCHNL2_QUEUE_TYPE_TX; + txq_info[0].queue_id = txq->queue_id; + txq_info[0].ring_len = txq->nb_tx_desc; + txq_info[0].tx_compl_queue_id = txq->complq->queue_id; + txq_info[0].relative_queue_id = txq->queue_id; + txq_info[0].peer_rx_queue_id = cpfl_txq->hairpin_info.peer_rxq_id; + txq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT; + txq_info[0].sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW; + + return idpf_vc_txq_config_by_info(vport, txq_info, 1); +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index 06198d4aad..872ebc1bfd 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -32,12 +32,15 @@ #define CPFL_RING_BASE_ALIGN 128 #define CPFL_DEFAULT_RX_FREE_THRESH 32 +#define CPFL_RXBUF_LOW_WATERMARK 64 #define CPFL_DEFAULT_TX_RS_THRESH 32 #define CPFL_DEFAULT_TX_FREE_THRESH 32 #define CPFL_SUPPORT_CHAIN_NUM 5 +#define CPFL_RX_BUF_STRIDE 64 + struct cpfl_rxq_hairpin_info { bool hairpin_q; /* if rx queue is a hairpin queue */ uint16_t peer_txp; @@ -95,4 +98,8 @@ int cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, int cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, const struct rte_eth_hairpin_conf *conf); +int cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport); +int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq); +int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport); +int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq); #endif /* _CPFL_RXTX_H_ */ From patchwork Wed May 31 13:04:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127795 X-Patchwork-Delegate: qi.z.zhang@intel.com 
Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 65D5042BF4; Wed, 31 May 2023 15:30:05 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0F71542D75; Wed, 31 May 2023 15:29:28 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id D72A342D52 for ; Wed, 31 May 2023 15:29:25 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539766; x=1717075766; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=a2LfBNxAPScF49xIh5G9q18lGBZR3G+zBXOEzuYHjJE=; b=JSY6uXoV3AfZaChl0UlferGdAwaP354Yq4y+N2co69BFiqgBxpjdmP9F /QHkG4GY5Eq0X6czXlpesn4+9NCPpFGjALl+Z2ZbEdWj6nGov3xKAKixm n8unblJVuMmoWMNPpuW5VvnjX3dbkBUBdlMOmejBej23f57ETrgDedhxF veX659jB6PHkiN7SPJ4hl7wMp4EmMkE8gd9cSnW7nJ5YeUBTRivAP3Ap6 URkVa9iekgBHl1UZoLlyZ3g0AHogf0Xu/PQDdTRPqP7Xz9xQPI9vLsfj0 Kl81fSu5wv1ZHh4Sf6fbVJbO+yjaOzNind+LrYf1112Iis4nitOYEjvd7 A==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497903" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497903" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:25 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325542" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325542" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:23 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v7 08/14] common/idpf: add switch queue API Date: Wed, 31 May 2023 13:04:44 +0000 Message-Id: <20230531130450.26380-9-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch adds idpf_vc_ena_dis_one_queue API. 
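The exported function keeps the signature it had as a static helper: idpf_vc_ena_dis_one_queue(vport, qid, type, on). A minimal sketch of a driver-side wrapper is shown below (the wrapper name is hypothetical; the cpfl hairpin start/stop patch adds the real ones):

/* Enable (on = true) or disable (on = false) one queue, identified by its
 * absolute HW queue id and its virtchnl2 queue type.
 */
static int
example_switch_one_queue(struct idpf_vport *vport, uint16_t hw_qid,
			 bool is_rx, bool on)
{
	uint32_t type = is_rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;

	return idpf_vc_ena_dis_one_queue(vport, hw_qid, type, on);
}
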
Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_virtchnl.c | 2 +- drivers/common/idpf/idpf_common_virtchnl.h | 3 +++ drivers/common/idpf/version.map | 1 + 3 files changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c index 211b44a88e..6455f640da 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.c +++ b/drivers/common/idpf/idpf_common_virtchnl.c @@ -733,7 +733,7 @@ idpf_vc_vectors_dealloc(struct idpf_vport *vport) return err; } -static int +int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid, uint32_t type, bool on) { diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h index db83761a5e..9ff5c38c26 100644 --- a/drivers/common/idpf/idpf_common_virtchnl.h +++ b/drivers/common/idpf/idpf_common_virtchnl.h @@ -71,6 +71,9 @@ __rte_internal int idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info, uint16_t num_qs); __rte_internal +int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid, + uint32_t type, bool on); +__rte_internal int idpf_vc_queue_grps_del(struct idpf_vport *vport, uint16_t num_q_grps, struct virtchnl2_queue_group_id *qg_ids); diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 17e77884ce..25624732b0 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -40,6 +40,7 @@ INTERNAL { idpf_vc_cmd_execute; idpf_vc_ctlq_post_rx_buffs; idpf_vc_ctlq_recv; + idpf_vc_ena_dis_one_queue; idpf_vc_irq_map_unmap_config; idpf_vc_one_msg_read; idpf_vc_ptype_info_query; From patchwork Wed May 31 13:04:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127796 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 04D7B42BF4; Wed, 31 May 2023 15:30:11 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3C3F742D55; Wed, 31 May 2023 15:29:38 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 89B2F42D6D for ; Wed, 31 May 2023 15:29:27 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539767; x=1717075767; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=au7iRmJmrfSeMraOXLqiGH2EDUdJMB+znqL+nRerzWY=; b=NbZzTwOCihb5ij6IIKxEADPlJwGbFHqHJq+s6cVK+adY+xmzqt4cL9qs 6527c2niQBzpKkkABUfUWDh97BlSMBmodkhkbU3WndLJfVCKIYfwJDock QuLSzxF+L1G/+niZna9lBX/lUUt9hm4xt5yYP2ZhmvmAg605/KLWhCzLg KNEd9v9/G6hkyd0WS6rKPriySqzP1bg6aYKgNAAZSAtQ0caJdBVZSG//E GgNaYL/NrOuRk1iGOJbFG4qpLL39p6i79HZ05WjYEPE92YnvEYkyzEJF/ inJLZfW9kfibJat5gOiX74AZmMyykzTOhRLiuf+jyL+e98LN27ipjW6x+ g==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497908" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497908" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:27 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325546" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; 
d="scan'208";a="657325546" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:25 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v7 09/14] net/cpfl: support hairpin queue start/stop Date: Wed, 31 May 2023 13:04:45 +0000 Message-Id: <20230531130450.26380-10-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports Rx/Tx hairpin queue start/stop. Signed-off-by: Xiao Wang Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 46 +++++++++ drivers/net/cpfl/cpfl_rxtx.c | 164 +++++++++++++++++++++++++++++---- drivers/net/cpfl/cpfl_rxtx.h | 15 +++ 3 files changed, 207 insertions(+), 18 deletions(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index a06def06d0..2b99e58341 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -896,6 +896,52 @@ cpfl_start_queues(struct rte_eth_dev *dev) } } + /* For non-manual bind hairpin queues, enable Tx queue and Rx queue, + * then enable Tx completion queue and Rx buffer queue. + */ + for (i = cpfl_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + if (cpfl_txq->hairpin_info.hairpin_q && !cpfl_vport->p2p_manual_bind) { + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + i - cpfl_vport->nb_data_txq, + false, true); + if (err) + PMD_DRV_LOG(ERR, "Failed to switch hairpin TX queue %u on", + i); + else + cpfl_txq->base.q_started = true; + } + } + + for (i = cpfl_vport->nb_data_rxq; i < dev->data->nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q && !cpfl_vport->p2p_manual_bind) { + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + i - cpfl_vport->nb_data_rxq, + true, true); + if (err) + PMD_DRV_LOG(ERR, "Failed to switch hairpin RX queue %u on", + i); + else + cpfl_rxq->base.q_started = true; + } + } + + if (!cpfl_vport->p2p_manual_bind && + cpfl_vport->p2p_tx_complq != NULL && + cpfl_vport->p2p_rx_bufq != NULL) { + err = cpfl_switch_hairpin_complq(cpfl_vport, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin Tx complq"); + return err; + } + err = cpfl_switch_hairpin_bufq(cpfl_vport, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin Rx bufq"); + return err; + } + } + return err; } diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c index 9408c6e1a4..8d1f8a560b 100644 --- a/drivers/net/cpfl/cpfl_rxtx.c +++ b/drivers/net/cpfl/cpfl_rxtx.c @@ -1002,6 +1002,89 @@ cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq return idpf_vc_txq_config_by_info(vport, txq_info, 1); } +int +cpfl_switch_hairpin_complq(struct cpfl_vport *cpfl_vport, bool on) +{ + struct idpf_vport *vport = &cpfl_vport->base; + uint32_t type; + int err, queue_id; + + type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + queue_id = cpfl_vport->p2p_tx_complq->queue_id; + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + + return err; +} + +int 
+cpfl_switch_hairpin_bufq(struct cpfl_vport *cpfl_vport, bool on) +{ + struct idpf_vport *vport = &cpfl_vport->base; + uint32_t type; + int err, queue_id; + + type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER; + queue_id = cpfl_vport->p2p_rx_bufq->queue_id; + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + + return err; +} + +int +cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t logic_qid, + bool rx, bool on) +{ + struct idpf_vport *vport = &cpfl_vport->base; + uint32_t type; + int err, queue_id; + + type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX; + + if (type == VIRTCHNL2_QUEUE_TYPE_RX) + queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_start_qid, logic_qid); + else + queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->tx_start_qid, logic_qid); + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on); + if (err) + return err; + + return err; +} + +static int +cpfl_alloc_split_p2p_rxq_mbufs(struct idpf_rx_queue *rxq) +{ + volatile struct virtchnl2_p2p_rx_buf_desc *rxd; + struct rte_mbuf *mbuf = NULL; + uint64_t dma_addr; + uint16_t i; + + for (i = 0; i < rxq->nb_rx_desc; i++) { + mbuf = rte_mbuf_raw_alloc(rxq->mp); + if (unlikely(!mbuf)) { + PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX"); + return -ENOMEM; + } + + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + + rxd = &((volatile struct virtchnl2_p2p_rx_buf_desc *)(rxq->rx_ring))[i]; + rxd->reserve0 = 0; + rxd->pkt_addr = dma_addr; + } + + rxq->nb_rx_hold = 0; + /* The value written in the RX buffer queue tail register, must be a multiple of 8.*/ + rxq->rx_tail = rxq->nb_rx_desc - CPFL_HAIRPIN_Q_TAIL_AUX_VALUE; + + return 0; +} + int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) { @@ -1055,22 +1138,31 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); } else { /* Split queue */ - err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1); - if (err != 0) { - PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); - return err; - } - err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2); - if (err != 0) { - PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); - return err; + if (cpfl_rxq->hairpin_info.hairpin_q) { + err = cpfl_alloc_split_p2p_rxq_mbufs(rxq->bufq1); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate p2p RX buffer queue mbuf"); + return err; + } + } else { + err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); + return err; + } + err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf"); + return err; + } } rte_wmb(); /* Init the RX tail register. 
*/ IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail); - IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail); + if (rxq->bufq2) + IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail); } return err; @@ -1177,7 +1269,12 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) return -EINVAL; cpfl_rxq = dev->data->rx_queues[rx_queue_id]; - err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); + if (cpfl_rxq->hairpin_info.hairpin_q) + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + rx_queue_id - cpfl_vport->nb_data_txq, + true, false); + else + err = idpf_vc_queue_switch(vport, rx_queue_id, true, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", rx_queue_id); @@ -1191,10 +1288,17 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) idpf_qc_single_rx_queue_reset(rxq); } else { rxq->bufq1->ops->release_mbufs(rxq->bufq1); - rxq->bufq2->ops->release_mbufs(rxq->bufq2); - idpf_qc_split_rx_queue_reset(rxq); + if (rxq->bufq2) + rxq->bufq2->ops->release_mbufs(rxq->bufq2); + if (cpfl_rxq->hairpin_info.hairpin_q) { + cpfl_rx_hairpin_descq_reset(rxq); + cpfl_rx_hairpin_bufq_reset(rxq->bufq1); + } else { + idpf_qc_split_rx_queue_reset(rxq); + } } - dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + if (!cpfl_rxq->hairpin_info.hairpin_q) + dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; } @@ -1213,7 +1317,12 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) cpfl_txq = dev->data->tx_queues[tx_queue_id]; - err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); + if (cpfl_txq->hairpin_info.hairpin_q) + err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport, + tx_queue_id - cpfl_vport->nb_data_txq, + false, false); + else + err = idpf_vc_queue_switch(vport, tx_queue_id, false, false); if (err != 0) { PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", tx_queue_id); @@ -1226,10 +1335,17 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) { idpf_qc_single_tx_queue_reset(txq); } else { - idpf_qc_split_tx_descq_reset(txq); - idpf_qc_split_tx_complq_reset(txq->complq); + if (cpfl_txq->hairpin_info.hairpin_q) { + cpfl_tx_hairpin_descq_reset(txq); + cpfl_tx_hairpin_complq_reset(txq->complq); + } else { + idpf_qc_split_tx_descq_reset(txq); + idpf_qc_split_tx_complq_reset(txq->complq); + } } - dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + + if (!cpfl_txq->hairpin_info.hairpin_q) + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; return 0; } @@ -1249,10 +1365,22 @@ cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid) void cpfl_stop_queues(struct rte_eth_dev *dev) { + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; struct cpfl_rx_queue *cpfl_rxq; struct cpfl_tx_queue *cpfl_txq; int i; + if (cpfl_vport->p2p_tx_complq != NULL) { + if (cpfl_switch_hairpin_complq(cpfl_vport, false) != 0) + PMD_DRV_LOG(ERR, "Failed to stop hairpin Tx complq"); + } + + if (cpfl_vport->p2p_rx_bufq != NULL) { + if (cpfl_switch_hairpin_bufq(cpfl_vport, false) != 0) + PMD_DRV_LOG(ERR, "Failed to stop hairpin Rx bufq"); + } + for (i = 0; i < dev->data->nb_rx_queues; i++) { cpfl_rxq = dev->data->rx_queues[i]; if (cpfl_rxq == NULL) diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h index 872ebc1bfd..aacd087b56 100644 --- a/drivers/net/cpfl/cpfl_rxtx.h +++ b/drivers/net/cpfl/cpfl_rxtx.h @@ -41,6 +41,17 @@ 
#define CPFL_RX_BUF_STRIDE 64 +/* The value written in the RX buffer queue tail register, + * and in WritePTR field in the TX completion queue context, + * must be a multiple of 8. + */ +#define CPFL_HAIRPIN_Q_TAIL_AUX_VALUE 8 + +struct virtchnl2_p2p_rx_buf_desc { + __le64 reserve0; + __le64 pkt_addr; /* Packet buffer address */ +}; + struct cpfl_rxq_hairpin_info { bool hairpin_q; /* if rx queue is a hairpin queue */ uint16_t peer_txp; @@ -102,4 +113,8 @@ int cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport); int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq); int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport); int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq); +int cpfl_switch_hairpin_complq(struct cpfl_vport *cpfl_vport, bool on); +int cpfl_switch_hairpin_bufq(struct cpfl_vport *cpfl_vport, bool on); +int cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t qid, + bool rx, bool on); #endif /* _CPFL_RXTX_H_ */ From patchwork Wed May 31 13:04:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127797 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C8B6942BF4; Wed, 31 May 2023 15:30:16 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6C22742D85; Wed, 31 May 2023 15:29:39 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 26A1B42D5A for ; Wed, 31 May 2023 15:29:29 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539769; x=1717075769; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=AgdOhh+iIeT2R0RNmGL0vg8twvdrqp6FQBooT3cjfX8=; b=juX4g+PCATnAoiMApPQ5qDjEIA70CIsvQRMknfr2jXbSYBYR1/rabker C9SVqNWtWk6Q9BCNmg8Uqlj/N++I2ak5mC3IC+Ix+mvALWMbxmJDLosTG go3G8i6hjAKM/qyyYJP9qC4pevF63YJZvh/Yg8W2QYpq/Qjans9Wi0oH9 yBv66n6kOfL7rAVhig8wDbVjNfux3QWsZTUhB8wIGI7OPAFX7HbGTT0XV YBk7VEI0cWnyNEzzP2UgQRMP19pLKOsyia38DgPdvAwn8Hjg0SVLl42HW tOa6CC5ZHgzvho8Dou751oHuTi+U1BjjHr/P6omIzXS/pmYZ2XUKNOR/t g==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497913" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497913" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:28 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325551" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325551" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:27 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v7 10/14] common/idpf: add irq map config API Date: Wed, 31 May 2023 13:04:46 +0000 Message-Id: <20230531130450.26380-11-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 
2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports idpf_vport_irq_map_config_by_qids API. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/common/idpf/idpf_common_device.c | 75 ++++++++++++++++++++++++ drivers/common/idpf/idpf_common_device.h | 4 ++ drivers/common/idpf/version.map | 1 + 3 files changed, 80 insertions(+) diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c index dc47551b17..cc4207a46e 100644 --- a/drivers/common/idpf/idpf_common_device.c +++ b/drivers/common/idpf/idpf_common_device.c @@ -667,6 +667,81 @@ idpf_vport_irq_map_config(struct idpf_vport *vport, uint16_t nb_rx_queues) return ret; } +int +idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, uint32_t *qids, uint16_t nb_rx_queues) +{ + struct idpf_adapter *adapter = vport->adapter; + struct virtchnl2_queue_vector *qv_map; + struct idpf_hw *hw = &adapter->hw; + uint32_t dynctl_val, itrn_val; + uint32_t dynctl_reg_start; + uint32_t itrn_reg_start; + uint16_t i; + int ret; + + qv_map = rte_zmalloc("qv_map", + nb_rx_queues * + sizeof(struct virtchnl2_queue_vector), 0); + if (qv_map == NULL) { + DRV_LOG(ERR, "Failed to allocate %d queue-vector map", + nb_rx_queues); + ret = -ENOMEM; + goto qv_map_alloc_err; + } + + /* Rx interrupt disabled, Map interrupt only for writeback */ + + /* The capability flags adapter->caps.other_caps should be + * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if + * condition should be updated when the FW can return the + * correct flag bits. + */ + dynctl_reg_start = + vport->recv_vectors->vchunks.vchunks->dynctl_reg_start; + itrn_reg_start = + vport->recv_vectors->vchunks.vchunks->itrn_reg_start; + dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start); + DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x", dynctl_val); + itrn_val = IDPF_READ_REG(hw, itrn_reg_start); + DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val); + /* Force write-backs by setting WB_ON_ITR bit in DYN_CTL + * register. WB_ON_ITR and INTENA are mutually exclusive + * bits. Setting WB_ON_ITR bits means TX and RX Descs + * are written back based on ITR expiration irrespective + * of INTENA setting. + */ + /* TBD: need to tune INTERVAL value for better performance. */ + itrn_val = (itrn_val == 0) ? 
IDPF_DFLT_INTERVAL : itrn_val; + dynctl_val = VIRTCHNL2_ITR_IDX_0 << + PF_GLINT_DYN_CTL_ITR_INDX_S | + PF_GLINT_DYN_CTL_WB_ON_ITR_M | + itrn_val << PF_GLINT_DYN_CTL_INTERVAL_S; + IDPF_WRITE_REG(hw, dynctl_reg_start, dynctl_val); + + for (i = 0; i < nb_rx_queues; i++) { + /* map all queues to the same vector */ + qv_map[i].queue_id = qids[i]; + qv_map[i].vector_id = + vport->recv_vectors->vchunks.vchunks->start_vector_id; + } + vport->qv_map = qv_map; + + ret = idpf_vc_irq_map_unmap_config(vport, nb_rx_queues, true); + if (ret != 0) { + DRV_LOG(ERR, "config interrupt mapping failed"); + goto config_irq_map_err; + } + + return 0; + +config_irq_map_err: + rte_free(vport->qv_map); + vport->qv_map = NULL; + +qv_map_alloc_err: + return ret; +} + int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues) { diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h index 112367dae8..f767ea7cec 100644 --- a/drivers/common/idpf/idpf_common_device.h +++ b/drivers/common/idpf/idpf_common_device.h @@ -200,5 +200,9 @@ int idpf_vport_info_init(struct idpf_vport *vport, struct virtchnl2_create_vport *vport_info); __rte_internal void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes); +__rte_internal +int idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport, + uint32_t *qids, + uint16_t nb_rx_queues); #endif /* _IDPF_COMMON_DEVICE_H_ */ diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map index 25624732b0..0729f6b912 100644 --- a/drivers/common/idpf/version.map +++ b/drivers/common/idpf/version.map @@ -69,6 +69,7 @@ INTERNAL { idpf_vport_info_init; idpf_vport_init; idpf_vport_irq_map_config; + idpf_vport_irq_map_config_by_qids; idpf_vport_irq_unmap_config; idpf_vport_rss_config; idpf_vport_stats_update; From patchwork Wed May 31 13:04:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127798 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0AC3A42BF4; Wed, 31 May 2023 15:30:22 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A65CA42D93; Wed, 31 May 2023 15:29:40 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 134FA42D77 for ; Wed, 31 May 2023 15:29:30 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539771; x=1717075771; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=TuGfAAb42Qj4k04bSwpC8oW9c8uSveokzJrVzbfXP1s=; b=nhK2mCoP1Kh064BLpoK4JDqDfodBxwJptuUJ8SiPyrosPh4QUdjSbQFz +Z6IWSugwWCz5QdVLxZKeKEQOnwr4BA3+RiPAyvYOeGm7jOsXd92hiET9 eUKJfod19B6O60ONz1e3tJD4L894mHgol6EL35ZR8UvD3PGjRJl3zMDb9 ZruAPB50RjCcns1VFAiNGIgXqxZxc6ivs2mjTtV3A3pYI22YOnRTQTE0U yj8G8Cc6ZbSebY+/5ncMZRZIKq8AqZVWzpkHj/OZsOtVgzeVD6h213Gwx LbwN217UljPLkM9MyujmRBK5Phv2im3mYiEdemY58kCg6Oq80z6jGGs3L g==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497916" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497916" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 
06:29:30 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325557" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325557" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:28 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v7 11/14] net/cpfl: enable write back based on ITR expire Date: Wed, 31 May 2023 13:04:47 +0000 Message-Id: <20230531130450.26380-12-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch enables write back based on ITR expire (WR_ON_ITR) for hairpin queues. Signed-off-by: Mingxia Liu Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 2b99e58341..850f1c0bc6 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -735,11 +735,22 @@ cpfl_dev_configure(struct rte_eth_dev *dev) static int cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev) { + uint32_t qids[CPFL_MAX_P2P_NB_QUEUES + IDPF_DEFAULT_RXQ_NUM] = {0}; struct cpfl_vport *cpfl_vport = dev->data->dev_private; struct idpf_vport *vport = &cpfl_vport->base; uint16_t nb_rx_queues = dev->data->nb_rx_queues; + struct cpfl_rx_queue *cpfl_rxq; + int i; - return idpf_vport_irq_map_config(vport, nb_rx_queues); + for (i = 0; i < nb_rx_queues; i++) { + cpfl_rxq = dev->data->rx_queues[i]; + if (cpfl_rxq->hairpin_info.hairpin_q) + qids[i] = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_start_qid, + (i - cpfl_vport->nb_data_rxq)); + else + qids[i] = cpfl_hw_qid_get(vport->chunks_info.rx_start_qid, i); + } + return idpf_vport_irq_map_config_by_qids(vport, qids, nb_rx_queues); } /* Update hairpin_info for dev's tx hairpin queue */ From patchwork Wed May 31 13:04:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127799 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 65DB342BF4; Wed, 31 May 2023 15:30:27 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C3FE042D9B; Wed, 31 May 2023 15:29:41 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id A6EBD42D55 for ; Wed, 31 May 2023 15:29:32 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539772; x=1717075772; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=lvfUIfxFrq74/11z85tmSj06FRDW2G6ma7q0EZpeg/U=; b=bPZy3VEEh9Tx/YSiRMihn1nGYdqiXPXjmunj36jKKA1oTxl89eVaDpEx TlXSAw9yiJv7hd1d17zclRIfDcJPMdK6VBV5rMtFvH2u7G0/NUyKHs/hL 
/Ne7q9dolyNljYBlZMBiKa1Ol6PWV2QVTws88AHobIzdVyoll2G7oE06a w+EYtIhjMKXWcIdzvV7RZZz49ad7ATWQq7dvMepDbSzCeL2V38P0F3cF/ oQih1vPa8jCIpsCGJZUF5mVFe94gtu9fiBItQBFya4I+nKJ1Vk1ldVked HHKFIJqVOqKETz4/ljKCUX/xJvxiHoPRzidvickZTmG42/PnBpGVtb1cy w==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497922" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497922" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:31 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325563" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325563" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:30 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v7 12/14] net/cpfl: support peer ports get Date: Wed, 31 May 2023 13:04:48 +0000 Message-Id: <20230531130450.26380-13-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports get hairpin peer ports. Signed-off-by: Xiao Wang Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 40 ++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 850f1c0bc6..9fc7d3401f 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -1080,6 +1080,45 @@ cpfl_dev_close(struct rte_eth_dev *dev) return 0; } +static int +cpfl_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, + size_t len, uint32_t tx) +{ + struct cpfl_vport *cpfl_vport = + (struct cpfl_vport *)dev->data->dev_private; + struct idpf_tx_queue *txq; + struct idpf_rx_queue *rxq; + struct cpfl_tx_queue *cpfl_txq; + struct cpfl_rx_queue *cpfl_rxq; + int i, j; + + if (len <= 0) + return -EINVAL; + + if (cpfl_vport->p2p_q_chunks_info == NULL) + return -ENOTSUP; + + if (tx > 0) { + for (i = cpfl_vport->nb_data_txq, j = 0; i < dev->data->nb_tx_queues; i++, j++) { + txq = dev->data->tx_queues[i]; + if (txq == NULL) + return -EINVAL; + cpfl_txq = (struct cpfl_tx_queue *)txq; + peer_ports[j] = cpfl_txq->hairpin_info.peer_rxp; + } + } else if (tx == 0) { + for (i = cpfl_vport->nb_data_rxq, j = 0; i < dev->data->nb_rx_queues; i++, j++) { + rxq = dev->data->rx_queues[i]; + if (rxq == NULL) + return -EINVAL; + cpfl_rxq = (struct cpfl_rx_queue *)rxq; + peer_ports[j] = cpfl_rxq->hairpin_info.peer_txp; + } + } + + return j; +} + static const struct eth_dev_ops cpfl_eth_dev_ops = { .dev_configure = cpfl_dev_configure, .dev_close = cpfl_dev_close, @@ -1109,6 +1148,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .hairpin_cap_get = cpfl_hairpin_cap_get, .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup, .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup, + .hairpin_get_peer_ports = cpfl_hairpin_get_peer_ports, }; static int From patchwork Wed May 31 13:04:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127800 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D2EE842BF4; Wed, 31 May 2023 15:30:32 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C31AA42DA8; Wed, 31 May 2023 15:29:42 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 049E842D43 for ; Wed, 31 May 2023 15:29:33 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539774; x=1717075774; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=HDlV8ug6AhtR9WODeC6bcVQCms9w0VeVj26p3MvI+Z0=; b=Yr28j0VtL5dl6D43o+/AVRe3J197tjUgPMAp1wWQCip12saSjccgtf7+ MbF3ekUavx5vugK4Cyh14XKhir46LBslgWtLk+lmrZXMcN1Nu58R1IL0r Tb8O8lh8ExyQbquOuiSZmJ2aB+5eugkjRzjVaeXQuNdXrFydpALQNglqt 76sSRSqZLChbHEUlRg6tN53RONAi7dhtAL1QK4W0C8MKVLKlwF7x6AYd8 pQZi4Z7/WRu+x8GiEbPlB9v/5mRqbYEtFAnyq+WTwmsTmsGS6hAqmtS/4 pILB7lis181EKdlFBMzGmenTCISeEgFQpsR/C+uaHwSAtRQHO03JHfdqk Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497928" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497928" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:33 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325566" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325566" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:31 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing , Xiao Wang Subject: [PATCH v7 13/14] net/cpfl: support hairpin bind/unbind Date: Wed, 31 May 2023 13:04:49 +0000 Message-Id: <20230531130450.26380-14-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: <20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing This patch supports hairpin_bind/unbind ops. 
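With these ops wired into cpfl_eth_dev_ops, an application drives them through the generic ethdev hairpin calls. The sketch below is only an illustration of that call flow for a hairpin pair set up with manual_bind; the function name and the assumption that both ports are already started are illustrative and not part of this patch.

#include <rte_ethdev.h>

/* Illustrative sketch, not part of this patch: manually bind one hairpin
 * pair after both ports have been started. Assumes the hairpin queues were
 * set up with manual_bind = 1 in their rte_eth_hairpin_conf.
 */
static int
hairpin_manual_bind(uint16_t tx_port, uint16_t rx_port)
{
	uint16_t peers[RTE_MAX_ETHPORTS];
	int n;

	/* direction = 1: list the Rx ports peered with tx_port's hairpin Tx queues */
	n = rte_eth_hairpin_get_peer_ports(tx_port, peers, RTE_MAX_ETHPORTS, 1);
	if (n < 0)
		return n;

	/* bind the Tx side to its peer Rx port; the PMD's hairpin_bind op runs here */
	return rte_eth_hairpin_bind(tx_port, rx_port);
}

/* Teardown mirrors the bind: rte_eth_hairpin_unbind(tx_port, rx_port); */
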
Signed-off-by: Xiao Wang Signed-off-by: Beilei Xing --- drivers/net/cpfl/cpfl_ethdev.c | 137 +++++++++++++++++++++++++++++++++ 1 file changed, 137 insertions(+) diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c index 9fc7d3401f..ff36f02b11 100644 --- a/drivers/net/cpfl/cpfl_ethdev.c +++ b/drivers/net/cpfl/cpfl_ethdev.c @@ -1119,6 +1119,141 @@ cpfl_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports, return j; } +static int +cpfl_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port) +{ + struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private; + struct idpf_vport *tx_vport = &cpfl_tx_vport->base; + struct cpfl_vport *cpfl_rx_vport; + struct cpfl_tx_queue *cpfl_txq; + struct cpfl_rx_queue *cpfl_rxq; + struct rte_eth_dev *peer_dev; + struct idpf_vport *rx_vport; + int err = 0; + int i; + + err = cpfl_txq_hairpin_info_update(dev, rx_port); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to update Tx hairpin queue info."); + return err; + } + + /* configure hairpin queues */ + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + err = cpfl_hairpin_txq_config(tx_vport, cpfl_txq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u", i); + return err; + } + } + + err = cpfl_hairpin_tx_complq_config(cpfl_tx_vport); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to config Tx completion queue"); + return err; + } + + peer_dev = &rte_eth_devices[rx_port]; + cpfl_rx_vport = (struct cpfl_vport *)peer_dev->data->dev_private; + rx_vport = &cpfl_rx_vport->base; + cpfl_rxq_hairpin_mz_bind(peer_dev); + + err = cpfl_hairpin_rx_bufq_config(cpfl_rx_vport); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to config Rx buffer queue"); + return err; + } + + for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) { + cpfl_rxq = peer_dev->data->rx_queues[i]; + err = cpfl_hairpin_rxq_config(rx_vport, cpfl_rxq); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to configure hairpin Rx queue %u", i); + return err; + } + err = cpfl_rx_queue_init(peer_dev, i); + if (err != 0) { + PMD_DRV_LOG(ERR, "Fail to init hairpin Rx queue %u", i); + return err; + } + } + + /* enable hairpin queues */ + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + err = cpfl_switch_hairpin_rxtx_queue(cpfl_tx_vport, + i - cpfl_tx_vport->nb_data_txq, + false, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin TX queue %u on", + i); + return err; + } + cpfl_txq->base.q_started = true; + } + + err = cpfl_switch_hairpin_complq(cpfl_tx_vport, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin Tx complq"); + return err; + } + + for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) { + cpfl_rxq = peer_dev->data->rx_queues[i]; + err = cpfl_switch_hairpin_rxtx_queue(cpfl_rx_vport, + i - cpfl_rx_vport->nb_data_rxq, + true, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin RX queue %u on", + i); + } + cpfl_rxq->base.q_started = true; + } + + err = cpfl_switch_hairpin_bufq(cpfl_rx_vport, true); + if (err != 0) { + PMD_DRV_LOG(ERR, "Failed to switch hairpin Rx buffer queue"); + return err; + } + + return 0; +} + +static int +cpfl_hairpin_unbind(struct rte_eth_dev *dev, uint16_t rx_port) +{ + struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private; + struct rte_eth_dev *peer_dev = &rte_eth_devices[rx_port]; + struct cpfl_vport *cpfl_rx_vport = peer_dev->data->dev_private; + struct 
cpfl_tx_queue *cpfl_txq; + struct cpfl_rx_queue *cpfl_rxq; + int i; + + /* disable hairpin queues */ + for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) { + cpfl_txq = dev->data->tx_queues[i]; + cpfl_switch_hairpin_rxtx_queue(cpfl_tx_vport, + i - cpfl_tx_vport->nb_data_txq, + false, false); + cpfl_txq->base.q_started = false; + } + + cpfl_switch_hairpin_complq(cpfl_tx_vport, false); + + for (i = cpfl_rx_vport->nb_data_rxq; i < peer_dev->data->nb_rx_queues; i++) { + cpfl_rxq = peer_dev->data->rx_queues[i]; + cpfl_switch_hairpin_rxtx_queue(cpfl_rx_vport, + i - cpfl_rx_vport->nb_data_rxq, + true, false); + cpfl_rxq->base.q_started = false; + } + + cpfl_switch_hairpin_bufq(cpfl_rx_vport, false); + + return 0; +} + static const struct eth_dev_ops cpfl_eth_dev_ops = { .dev_configure = cpfl_dev_configure, .dev_close = cpfl_dev_close, @@ -1149,6 +1284,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = { .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup, .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup, .hairpin_get_peer_ports = cpfl_hairpin_get_peer_ports, + .hairpin_bind = cpfl_hairpin_bind, + .hairpin_unbind = cpfl_hairpin_unbind, }; static int From patchwork Wed May 31 13:04:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xing, Beilei" X-Patchwork-Id: 127801 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D2F4642BF4; Wed, 31 May 2023 15:30:37 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0AE3E42DAD; Wed, 31 May 2023 15:29:44 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 6362F42D43 for ; Wed, 31 May 2023 15:29:35 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685539775; x=1717075775; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=oVQhuz58lMsIDG0WuorafsAG9p4gkLFTqvO9dYitu7A=; b=gWgsyqXL69Sa9JuAL0YYuHHYOV7VPQppGtAShKIZ8YrwKjF3u8yIxKM3 GD4r8ehJvHD97m0pJRe2LBiYaCPsynQBmmHXaC1ebAFGDPdxeOcf1SWIZ w1RVmRRd9HeA6pO0NBxbTJ5O+vY45gQ8l2FPCuV8/916eUHgNRlbUptya 63z859fyHYLUNQE34wpw/dD2Uu1Ufu+THzHpZK6s7eJ9+MJGwdNXamzW3 fKiwXGRUo2R413mJ+B6+eNfIHaEsOiso0D6jn4JW0ET8xwxx6TD90eNRz Y3kqRyrZf57XCCjE+h9L5ezqPLLAjHjozxbnSlamr29XBUc6b8zsSfwqg w==; X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="358497931" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="358497931" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 May 2023 06:29:35 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10727"; a="657325569" X-IronPort-AV: E=Sophos;i="6.00,207,1681196400"; d="scan'208";a="657325569" Received: from dpdk-beileix-3.sh.intel.com ([10.67.110.253]) by orsmga003.jf.intel.com with ESMTP; 31 May 2023 06:29:33 -0700 From: beilei.xing@intel.com To: jingjing.wu@intel.com Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing Subject: [PATCH v7 14/14] doc: update the doc of CPFL PMD Date: Wed, 31 May 2023 13:04:50 +0000 Message-Id: <20230531130450.26380-15-beilei.xing@intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20230531130450.26380-1-beilei.xing@intel.com> References: 
<20230531102551.20936-1-beilei.xing@intel.com> <20230531130450.26380-1-beilei.xing@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Beilei Xing Update cpfl.rst to clarify hairpin support. Signed-off-by: Beilei Xing --- doc/guides/nics/cpfl.rst | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst index d25db088eb..8d5c3082e4 100644 --- a/doc/guides/nics/cpfl.rst +++ b/doc/guides/nics/cpfl.rst @@ -106,3 +106,10 @@ The paths are chosen based on 2 conditions: A value "P" means the offload feature is not supported by vector path. If any not supported features are used, cpfl vector PMD is disabled and the scalar paths are chosen. + +Hairpin queue +~~~~~~~~~~~~~ + + The E2100 Series can loop back packets from an RX port to a TX port; this + feature is called port-to-port or hairpin. + Currently, the PMD only supports single-port hairpin.
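To complement the new documentation section, the following sketch shows how an application might configure one single-port hairpin queue pair through the standard ethdev API. The port id, queue index, and descriptor count are assumptions chosen for the example; they are not values required by the cpfl PMD.

#include <rte_ethdev.h>

/* Illustrative sketch: a hairpin Rx/Tx queue pair that loops packets back
 * on the same port (single-port hairpin). Call after rte_eth_dev_configure()
 * has reserved room for the extra queues and before rte_eth_dev_start().
 */
static int
setup_single_port_hairpin(uint16_t port_id, uint16_t nb_data_queues)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.manual_bind = 0,	/* let dev_start bind the pair automatically */
		.tx_explicit = 0,
	};
	/* hairpin queues are indexed after the data queues */
	uint16_t hp_queue = nb_data_queues;
	int ret;

	conf.peers[0].port = port_id;	/* same port: the pair peers with itself */
	conf.peers[0].queue = hp_queue;

	ret = rte_eth_rx_hairpin_queue_setup(port_id, hp_queue, 512, &conf);
	if (ret != 0)
		return ret;

	return rte_eth_tx_hairpin_queue_setup(port_id, hp_queue, 512, &conf);
}

With manual_bind left at zero, rte_eth_dev_start() binds the pair itself, which matches the single-port case described above; a two-port setup would instead set manual_bind and use rte_eth_hairpin_bind()/rte_eth_hairpin_unbind() as sketched in the bind/unbind patch.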