From patchwork Fri Apr 21 06:50:45 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Xing, Beilei" <beilei.xing@intel.com>
X-Patchwork-Id: 126351
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Beilei Xing, Xiao Wang
Subject: [PATCH 07/10] net/cpfl: support hairpin queue start/stop
Date: Fri, 21 Apr 2023 06:50:45 +0000
Message-Id: <20230421065048.106899-8-beilei.xing@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20230421065048.106899-1-beilei.xing@intel.com>
References: <20230421065048.106899-1-beilei.xing@intel.com>
List-Id: DPDK patches and discussions

From: Beilei Xing <beilei.xing@intel.com>

This patch supports Rx/Tx hairpin queue start/stop.
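For context, a minimal sketch of how an application is expected to reach
this code through the generic ethdev hairpin API. The port id, queue id
and descriptor count below are illustrative assumptions, not part of this
patch; with manual_bind left unset, rte_eth_dev_start() ends up in
cpfl_start_queues(), which this patch teaches to also enable the hairpin
queues together with the Rx buffer queue and Tx completion queue:

#include <rte_ethdev.h>

/* Peer Tx hairpin queue 'qid' of 'port' with Rx hairpin queue 'qid' of
 * the same port, then start the port. Hairpin queue indices follow the
 * regular data queues configured via rte_eth_dev_configure().
 */
static int
hairpin_setup_and_start(uint16_t port, uint16_t qid)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.manual_bind = 0,	/* queues are enabled in dev_start */
		.tx_explicit = 0,
	};
	int ret;

	conf.peers[0].port = port;
	conf.peers[0].queue = qid;

	ret = rte_eth_rx_hairpin_queue_setup(port, qid, 512, &conf);
	if (ret != 0)
		return ret;
	ret = rte_eth_tx_hairpin_queue_setup(port, qid, 512, &conf);
	if (ret != 0)
		return ret;

	return rte_eth_dev_start(port);
}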
Signed-off-by: Xiao Wang
Signed-off-by: Mingxia Liu
Signed-off-by: Beilei Xing
---
 drivers/common/idpf/idpf_common_virtchnl.c |   2 +-
 drivers/common/idpf/idpf_common_virtchnl.h |   3 +
 drivers/common/idpf/version.map            |   1 +
 drivers/net/cpfl/cpfl_ethdev.c             |  39 ++++++
 drivers/net/cpfl/cpfl_rxtx.c               | 153 ++++++++++++++++++---
 drivers/net/cpfl/cpfl_rxtx.h               |  14 ++
 6 files changed, 193 insertions(+), 19 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 50cd43a8dd..20a5bc085d 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -733,7 +733,7 @@ idpf_vc_vectors_dealloc(struct idpf_vport *vport)
 	return err;
 }
 
-static int
+int
 idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 			  uint32_t type, bool on)
 {
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 277235ba7d..18db6cd8c8 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -71,6 +71,9 @@ __rte_internal
 int idpf_vc_txq_config_by_info(struct idpf_vport *vport,
 			       struct virtchnl2_txq_info *txq_info, uint16_t num_qs);
 __rte_internal
+int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
+			      uint32_t type, bool on);
+__rte_internal
 int idpf_vc_queue_grps_del(struct idpf_vport *vport,
 			   uint16_t num_q_grps,
 			   struct virtchnl2_queue_group_id *qg_ids);
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index a339a4bf8e..0e87dba2ae 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -45,6 +45,7 @@ INTERNAL {
 	idpf_vc_cmd_execute;
 	idpf_vc_ctlq_post_rx_buffs;
 	idpf_vc_ctlq_recv;
+	idpf_vc_ena_dis_one_queue;
 	idpf_vc_irq_map_unmap_config;
 	idpf_vc_one_msg_read;
 	idpf_vc_ptype_info_query;
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 13edf2e706..f154c83f27 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -895,6 +895,45 @@ cpfl_start_queues(struct rte_eth_dev *dev)
 		}
 	}
 
+	/* For non-cross vport hairpin queues, enable Tx queue and Rx queue,
+	 * then enable Tx completion queue and Rx buffer queue.
+	 */
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		cpfl_txq = dev->data->tx_queues[i];
+		if (cpfl_txq->hairpin_info.hairpin_q && !cpfl_txq->hairpin_info.manual_bind) {
+			err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport,
+							     i - cpfl_vport->nb_data_txq,
+							     false, true);
+			if (err)
+				PMD_DRV_LOG(ERR, "Failed to switch hairpin TX queue %u on",
+					    i);
+			else
+				cpfl_txq->base.q_started = true;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		cpfl_rxq = dev->data->rx_queues[i];
+		if (cpfl_rxq->hairpin_info.hairpin_q && !cpfl_rxq->hairpin_info.manual_bind) {
+			err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport,
+							     i - cpfl_vport->nb_data_rxq,
+							     true, true);
+			if (err)
+				PMD_DRV_LOG(ERR, "Failed to switch hairpin RX queue %u on",
+					    i);
+			else
+				cpfl_rxq->base.q_started = true;
+		}
+	}
+
+	if (tx_cmplq_flag == 1 && rx_bufq_flag == 1) {
+		err = cpfl_switch_hairpin_bufq_complq(cpfl_vport, true);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to switch hairpin Tx complq and Rx bufq");
+			return err;
+		}
+	}
+
 	return err;
 }
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 040beb5bac..ed2d100c35 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -1010,6 +1010,83 @@ cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq
 	return idpf_vc_txq_config_by_info(vport, txq_info, 1);
 }
 
+int
+cpfl_switch_hairpin_bufq_complq(struct cpfl_vport *cpfl_vport, bool on)
+{
+	struct idpf_vport *vport = &cpfl_vport->base;
+	uint32_t type;
+	int err, queue_id;
+
+	type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+	queue_id = cpfl_vport->p2p_tx_complq->queue_id;
+	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+	if (err)
+		return err;
+
+	type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+	queue_id = cpfl_vport->p2p_rx_bufq->queue_id;
+	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+
+	return err;
+}
+
+int
+cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t logic_qid,
+			       bool rx, bool on)
+{
+	struct idpf_vport *vport = &cpfl_vport->base;
+	uint32_t type;
+	int err, queue_id;
+
+	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
+
+	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+		queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info.rx_start_qid, logic_qid);
+	else
+		queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info.tx_start_qid, logic_qid);
+	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
+	if (err)
+		return err;
+
+	return err;
+}
+
+static int
+cpfl_alloc_split_p2p_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+	volatile struct virtchnl2_p2p_rx_buf_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &((volatile struct virtchnl2_p2p_rx_buf_desc *)(rxq->rx_ring))[i];
+		rxd->reserve0 = 0;
+		rxd->pkt_addr = dma_addr;
+
+		rxq->sw_ring[i] = mbuf;
+	}
+
+	rxq->nb_rx_hold = 0;
+	/* The value written in the RX buffer queue tail register must be a multiple of 8. */
+	rxq->rx_tail = rxq->nb_rx_desc - CPFL_HAIRPIN_Q_TAIL_AUX_VALUE;
+
+	return 0;
+}
+
 int
 cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
@@ -1063,22 +1140,31 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
 	} else {
 		/* Split queue */
-		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
-			return err;
-		}
-		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2);
-		if (err != 0) {
-			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
-			return err;
+		if (cpfl_rxq->hairpin_info.hairpin_q) {
+			err = cpfl_alloc_split_p2p_rxq_mbufs(rxq->bufq1);
+			if (err != 0) {
+				PMD_DRV_LOG(ERR, "Failed to allocate p2p RX buffer queue mbuf");
+				return err;
+			}
+		} else {
+			err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1);
+			if (err != 0) {
+				PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+				return err;
+			}
+			err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2);
+			if (err != 0) {
+				PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+				return err;
+			}
 		}
 
 		rte_wmb();
 
 		/* Init the RX tail register. */
 		IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail);
-		IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
+		if (rxq->bufq2)
+			IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
 	}
 
 	return err;
@@ -1185,7 +1271,12 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 
 	cpfl_rxq = dev->data->rx_queues[rx_queue_id];
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+	if (cpfl_rxq->hairpin_info.hairpin_q)
+		err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport,
+						     rx_queue_id - cpfl_vport->nb_data_txq,
+						     true, false);
+	else
+		err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -1199,10 +1290,17 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		idpf_qc_single_rx_queue_reset(rxq);
 	} else {
 		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
-		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
-		idpf_qc_split_rx_queue_reset(rxq);
+		if (rxq->bufq2)
+			rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+		if (cpfl_rxq->hairpin_info.hairpin_q) {
+			cpfl_rx_hairpin_descq_reset(rxq);
+			cpfl_rx_hairpin_bufq_reset(rxq->bufq1);
+		} else {
+			idpf_qc_split_rx_queue_reset(rxq);
+		}
 	}
-	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+	if (!cpfl_rxq->hairpin_info.hairpin_q)
+		dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
 	return 0;
 }
@@ -1221,7 +1319,12 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	cpfl_txq = dev->data->tx_queues[tx_queue_id];
 
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+	if (cpfl_txq->hairpin_info.hairpin_q)
+		err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport,
+						     tx_queue_id - cpfl_vport->nb_data_txq,
+						     false, false);
+	else
+		err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
@@ -1234,10 +1337,17 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
 		idpf_qc_single_tx_queue_reset(txq);
 	} else {
-		idpf_qc_split_tx_descq_reset(txq);
-		idpf_qc_split_tx_complq_reset(txq->complq);
+		if (cpfl_txq->hairpin_info.hairpin_q) {
+			cpfl_tx_hairpin_descq_reset(txq);
+			cpfl_tx_hairpin_complq_reset(txq->complq);
+		} else {
+			idpf_qc_split_tx_descq_reset(txq);
+			idpf_qc_split_tx_complq_reset(txq->complq);
+		}
 	}
-	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	if (!cpfl_txq->hairpin_info.hairpin_q)
+		dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
 	return 0;
 }
@@ -1257,10 +1367,17 @@ cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 void
 cpfl_stop_queues(struct rte_eth_dev *dev)
 {
+	struct cpfl_vport *cpfl_vport =
+		(struct cpfl_vport *)dev->data->dev_private;
 	struct cpfl_rx_queue *cpfl_rxq;
 	struct cpfl_tx_queue *cpfl_txq;
 	int i;
 
+	if (cpfl_vport->p2p_rx_bufq != NULL) {
+		if (cpfl_switch_hairpin_bufq_complq(cpfl_vport, false) != 0)
+			PMD_DRV_LOG(ERR, "Failed to stop hairpin Tx complq and Rx bufq");
+	}
+
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		cpfl_rxq = dev->data->rx_queues[i];
 		if (cpfl_rxq == NULL)
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index b01ce5edf9..87603e161e 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -39,6 +39,17 @@
 
 #define CPFL_RX_BUF_STRIDE 64
 
+/* The value written in the RX buffer queue tail register,
+ * and in the WritePTR field in the TX completion queue context,
+ * must be a multiple of 8.
+ */
+#define CPFL_HAIRPIN_Q_TAIL_AUX_VALUE 8
+
+struct virtchnl2_p2p_rx_buf_desc {
+	__le64	reserve0;
+	__le64	pkt_addr; /* Packet buffer address */
+};
+
 struct cpfl_rxq_hairpin_info {
 	bool hairpin_q;		/* if rx queue is a hairpin queue */
 	bool manual_bind;	/* for cross vport */
@@ -92,4 +103,7 @@ int cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport);
 int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq);
 int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport);
 int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq);
+int cpfl_switch_hairpin_bufq_complq(struct cpfl_vport *cpfl_vport, bool on);
+int cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t qid,
+				   bool rx, bool on);
 #endif /* _CPFL_RXTX_H_ */