From patchwork Thu May 25 09:59:02 2023
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 127419
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
To: Pavan Nikhilesh , Shijith Thotton , Nithin Kumar Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao
CC: , , Rakesh Kudurumalla
Subject: [PATCH v3 30/32] net/cnxk: handle extbuf completion on ethdev stop
Date: Thu, 25 May 2023 15:29:02 +0530
Message-ID: <20230525095904.3967080-30-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230525095904.3967080-1-ndabilpuram@marvell.com>
References: <20230411091144.1087887-1-ndabilpuram@marvell.com> <20230525095904.3967080-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

From: Rakesh Kudurumalla

During transmission of packets, the CQ corresponding to the SQ is polled for transmit completion packets in the transmit function. When the last burst is transmitted, the corresponding transmit completion packets are left behind in the CQ. This patch reads those leftover packets from the CQ on ethdev stop. The transmit completion code is also moved to cn10k_rxtx.h and cn9k_ethdev.h to avoid code duplication.

Signed-off-by: Rakesh Kudurumalla
---
 drivers/event/cnxk/cn10k_tx_worker.h |  2 +-
 drivers/event/cnxk/cn9k_worker.h     |  2 +-
 drivers/net/cnxk/cn10k_ethdev.c      | 13 +++++
 drivers/net/cnxk/cn10k_rxtx.h        | 76 +++++++++++++++++++++++++
 drivers/net/cnxk/cn10k_tx.h          | 83 +---------------------------
 drivers/net/cnxk/cn9k_ethdev.c       | 14 +++++
 drivers/net/cnxk/cn9k_ethdev.h       | 77 ++++++++++++++++++++++++++
 drivers/net/cnxk/cn9k_tx.h           | 83 +---------------------------
 8 files changed, 188 insertions(+), 162 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_tx_worker.h b/drivers/event/cnxk/cn10k_tx_worker.h index c18786a14c..7f170ac5f0 100644 --- a/drivers/event/cnxk/cn10k_tx_worker.h +++ b/drivers/event/cnxk/cn10k_tx_worker.h @@ -55,7 +55,7 @@ cn10k_sso_tx_one(struct cn10k_sso_hws *ws, struct rte_mbuf *m, uint64_t *cmd, return 0; if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && txq->tx_compl.ena) - handle_tx_completion_pkts(txq, 1, 1); + handle_tx_completion_pkts(txq, 1); cn10k_nix_tx_skeleton(txq, cmd, flags, 0); /* Perform header writes before barrier diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h index 1ce4b044e8..fcb82987e5 100644 --- a/drivers/event/cnxk/cn9k_worker.h +++ b/drivers/event/cnxk/cn9k_worker.h @@ -784,7 +784,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd, txq = cn9k_sso_hws_xtract_meta(m, txq_data); if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && txq->tx_compl.ena) - handle_tx_completion_pkts(txq, 1, 1); + handle_tx_completion_pkts(txq, 1); if (((txq->nb_sqb_bufs_adj - __atomic_load_n((int16_t *)txq->fc_mem, __ATOMIC_RELAXED)) diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c index 2b4ab8b772..792c1b1970 100644 --- a/drivers/net/cnxk/cn10k_ethdev.c +++ b/drivers/net/cnxk/cn10k_ethdev.c @@ -367,6 +367,10 @@ static int cn10k_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) { struct cn10k_eth_txq *txq = eth_dev->data->tx_queues[qidx]; + struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); + uint16_t flags = dev->tx_offload_flags; + struct roc_nix *nix = &dev->nix; + uint32_t head = 0, tail = 0; int rc; rc = cnxk_nix_tx_queue_stop(eth_dev, qidx); @@ -375,6 +379,15 @@ cn10k_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) /* Clear fc cache pkts to trigger worker stop */ txq->fc_cache_pkts = 0; + + if ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) && txq->tx_compl.ena) { + struct roc_nix_sq *sq = &dev->sqs[qidx]; + do { + handle_tx_completion_pkts(txq, flags & NIX_TX_VWQE_F); + roc_nix_sq_head_tail_get(nix, sq->qid, &head, &tail); + } while (head != tail); + } + return 0; } diff --git a/drivers/net/cnxk/cn10k_rxtx.h b/drivers/net/cnxk/cn10k_rxtx.h index c256d54307..65dd57494a 100644 --- a/drivers/net/cnxk/cn10k_rxtx.h +++ b/drivers/net/cnxk/cn10k_rxtx.h @@ -113,4 +113,80 @@ struct cn10k_sec_sess_priv { (void *)((uintptr_t)(lmt_addr) + \ ((uint64_t)(lmt_num) << ROC_LMT_LINE_SIZE_LOG2) + (offset)) +static inline uint16_t +nix_tx_compl_nb_pkts(struct cn10k_eth_txq *txq, const uint64_t wdata, + const uint32_t qmask) +{ + uint16_t available = txq->tx_compl.available; + + /* Update the available count if cached value is not enough */ + if (!unlikely(available)) { + uint64_t reg, head, tail; + + /* Use LDADDA version to avoid reorder */ + reg =
roc_atomic64_add_sync(wdata, txq->tx_compl.cq_status); + /* CQ_OP_STATUS operation error */ + if (reg & BIT_ULL(NIX_CQ_OP_STAT_OP_ERR) || + reg & BIT_ULL(NIX_CQ_OP_STAT_CQ_ERR)) + return 0; + + tail = reg & 0xFFFFF; + head = (reg >> 20) & 0xFFFFF; + if (tail < head) + available = tail - head + qmask + 1; + else + available = tail - head; + + txq->tx_compl.available = available; + } + return available; +} + +static inline void +handle_tx_completion_pkts(struct cn10k_eth_txq *txq, uint8_t mt_safe) +{ +#define CNXK_NIX_CQ_ENTRY_SZ 128 +#define CQE_SZ(x) ((x) * CNXK_NIX_CQ_ENTRY_SZ) + + uint16_t tx_pkts = 0, nb_pkts; + const uintptr_t desc = txq->tx_compl.desc_base; + const uint64_t wdata = txq->tx_compl.wdata; + const uint32_t qmask = txq->tx_compl.qmask; + uint32_t head = txq->tx_compl.head; + struct nix_cqe_hdr_s *tx_compl_cq; + struct nix_send_comp_s *tx_compl_s0; + struct rte_mbuf *m_next, *m; + + if (mt_safe) + rte_spinlock_lock(&txq->tx_compl.ext_buf_lock); + + nb_pkts = nix_tx_compl_nb_pkts(txq, wdata, qmask); + while (tx_pkts < nb_pkts) { + rte_prefetch_non_temporal((void *)(desc + + (CQE_SZ((head + 2) & qmask)))); + tx_compl_cq = (struct nix_cqe_hdr_s *) + (desc + CQE_SZ(head)); + tx_compl_s0 = (struct nix_send_comp_s *) + ((uint64_t *)tx_compl_cq + 1); + m = txq->tx_compl.ptr[tx_compl_s0->sqe_id]; + while (m->next != NULL) { + m_next = m->next; + rte_pktmbuf_free_seg(m); + m = m_next; + } + rte_pktmbuf_free_seg(m); + + head++; + head &= qmask; + tx_pkts++; + } + txq->tx_compl.head = head; + txq->tx_compl.available -= nb_pkts; + + plt_write64((wdata | nb_pkts), txq->tx_compl.cq_door); + + if (mt_safe) + rte_spinlock_unlock(&txq->tx_compl.ext_buf_lock); +} + #endif /* __CN10K_RXTX_H__ */ diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h index c9ec01cd9d..4f23a8dfc3 100644 --- a/drivers/net/cnxk/cn10k_tx.h +++ b/drivers/net/cnxk/cn10k_tx.h @@ -1151,83 +1151,6 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq, return segdw; } -static inline uint16_t -nix_tx_compl_nb_pkts(struct cn10k_eth_txq *txq, const uint64_t wdata, - const uint16_t pkts, const uint32_t qmask) -{ - uint32_t available = txq->tx_compl.available; - - /* Update the available count if cached value is not enough */ - if (unlikely(available < pkts)) { - uint64_t reg, head, tail; - - /* Use LDADDA version to avoid reorder */ - reg = roc_atomic64_add_sync(wdata, txq->tx_compl.cq_status); - /* CQ_OP_STATUS operation error */ - if (reg & BIT_ULL(NIX_CQ_OP_STAT_OP_ERR) || - reg & BIT_ULL(NIX_CQ_OP_STAT_CQ_ERR)) - return 0; - - tail = reg & 0xFFFFF; - head = (reg >> 20) & 0xFFFFF; - if (tail < head) - available = tail - head + qmask + 1; - else - available = tail - head; - - txq->tx_compl.available = available; - } - return RTE_MIN(pkts, available); -} - -static inline void -handle_tx_completion_pkts(struct cn10k_eth_txq *txq, const uint16_t pkts, - uint8_t mt_safe) -{ -#define CNXK_NIX_CQ_ENTRY_SZ 128 -#define CQE_SZ(x) ((x) * CNXK_NIX_CQ_ENTRY_SZ) - - uint16_t tx_pkts = 0, nb_pkts; - const uintptr_t desc = txq->tx_compl.desc_base; - const uint64_t wdata = txq->tx_compl.wdata; - const uint32_t qmask = txq->tx_compl.qmask; - uint32_t head = txq->tx_compl.head; - struct nix_cqe_hdr_s *tx_compl_cq; - struct nix_send_comp_s *tx_compl_s0; - struct rte_mbuf *m_next, *m; - - if (mt_safe) - rte_spinlock_lock(&txq->tx_compl.ext_buf_lock); - - nb_pkts = nix_tx_compl_nb_pkts(txq, wdata, pkts, qmask); - while (tx_pkts < nb_pkts) { - rte_prefetch_non_temporal((void *)(desc + - (CQE_SZ((head + 2) & qmask)))); - 
tx_compl_cq = (struct nix_cqe_hdr_s *) - (desc + CQE_SZ(head)); - tx_compl_s0 = (struct nix_send_comp_s *) - ((uint64_t *)tx_compl_cq + 1); - m = txq->tx_compl.ptr[tx_compl_s0->sqe_id]; - while (m->next != NULL) { - m_next = m->next; - rte_pktmbuf_free_seg(m); - m = m_next; - } - rte_pktmbuf_free_seg(m); - - head++; - head &= qmask; - tx_pkts++; - } - txq->tx_compl.head = head; - txq->tx_compl.available -= nb_pkts; - - plt_write64((wdata | nb_pkts), txq->tx_compl.cq_door); - - if (mt_safe) - rte_spinlock_unlock(&txq->tx_compl.ext_buf_lock); -} - static __rte_always_inline uint16_t cn10k_nix_xmit_pkts(void *tx_queue, uint64_t *ws, struct rte_mbuf **tx_pkts, uint16_t pkts, uint64_t *cmd, const uint16_t flags) @@ -1249,7 +1172,7 @@ cn10k_nix_xmit_pkts(void *tx_queue, uint64_t *ws, struct rte_mbuf **tx_pkts, bool sec; if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && txq->tx_compl.ena) - handle_tx_completion_pkts(txq, pkts, flags & NIX_TX_VWQE_F); + handle_tx_completion_pkts(txq, flags & NIX_TX_VWQE_F); if (!(flags & NIX_TX_VWQE_F)) { NIX_XMIT_FC_OR_RETURN(txq, pkts); @@ -1398,7 +1321,7 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, uint64_t *ws, bool sec; if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && txq->tx_compl.ena) - handle_tx_completion_pkts(txq, pkts, flags & NIX_TX_VWQE_F); + handle_tx_completion_pkts(txq, flags & NIX_TX_VWQE_F); if (!(flags & NIX_TX_VWQE_F)) { NIX_XMIT_FC_OR_RETURN(txq, pkts); @@ -1953,7 +1876,7 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws, } wd; if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && txq->tx_compl.ena) - handle_tx_completion_pkts(txq, pkts, flags & NIX_TX_VWQE_F); + handle_tx_completion_pkts(txq, flags & NIX_TX_VWQE_F); if (!(flags & NIX_TX_VWQE_F)) { NIX_XMIT_FC_OR_RETURN(txq, pkts); diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c index e55a2aa133..bae4dda5e2 100644 --- a/drivers/net/cnxk/cn9k_ethdev.c +++ b/drivers/net/cnxk/cn9k_ethdev.c @@ -329,14 +329,28 @@ static int cn9k_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) { struct cn9k_eth_txq *txq = eth_dev->data->tx_queues[qidx]; + struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); + uint16_t flags = dev->tx_offload_flags; + struct roc_nix *nix = &dev->nix; + uint32_t head = 0, tail = 0; int rc; + rc = cnxk_nix_tx_queue_stop(eth_dev, qidx); if (rc) return rc; /* Clear fc cache pkts to trigger worker stop */ txq->fc_cache_pkts = 0; + + if ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) && txq->tx_compl.ena) { + struct roc_nix_sq *sq = &dev->sqs[qidx]; + do { + handle_tx_completion_pkts(txq, 0); + roc_nix_sq_head_tail_get(nix, sq->qid, &head, &tail); + } while (head != tail); + } + return 0; } diff --git a/drivers/net/cnxk/cn9k_ethdev.h b/drivers/net/cnxk/cn9k_ethdev.h index a82dcb3d19..9e0a3c5bb2 100644 --- a/drivers/net/cnxk/cn9k_ethdev.h +++ b/drivers/net/cnxk/cn9k_ethdev.h @@ -107,4 +107,81 @@ void cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev); /* Security context setup */ void cn9k_eth_sec_ops_override(void); +static inline uint16_t +nix_tx_compl_nb_pkts(struct cn9k_eth_txq *txq, const uint64_t wdata, + const uint32_t qmask) +{ + uint16_t available = txq->tx_compl.available; + + /* Update the available count if cached value is not enough */ + if (!unlikely(available)) { + uint64_t reg, head, tail; + + /* Use LDADDA version to avoid reorder */ + reg = roc_atomic64_add_sync(wdata, txq->tx_compl.cq_status); + /* CQ_OP_STATUS operation error */ + if (reg & BIT_ULL(NIX_CQ_OP_STAT_OP_ERR) || + reg & BIT_ULL(NIX_CQ_OP_STAT_CQ_ERR)) + return 0; + + tail = reg & 0xFFFFF; + 
head = (reg >> 20) & 0xFFFFF; + if (tail < head) + available = tail - head + qmask + 1; + else + available = tail - head; + + txq->tx_compl.available = available; + } + return available; +} + +static inline void +handle_tx_completion_pkts(struct cn9k_eth_txq *txq, uint8_t mt_safe) +{ +#define CNXK_NIX_CQ_ENTRY_SZ 128 +#define CQE_SZ(x) ((x) * CNXK_NIX_CQ_ENTRY_SZ) + + uint16_t tx_pkts = 0, nb_pkts; + const uintptr_t desc = txq->tx_compl.desc_base; + const uint64_t wdata = txq->tx_compl.wdata; + const uint32_t qmask = txq->tx_compl.qmask; + uint32_t head = txq->tx_compl.head; + struct nix_cqe_hdr_s *tx_compl_cq; + struct nix_send_comp_s *tx_compl_s0; + struct rte_mbuf *m_next, *m; + + if (mt_safe) + rte_spinlock_lock(&txq->tx_compl.ext_buf_lock); + + nb_pkts = nix_tx_compl_nb_pkts(txq, wdata, qmask); + while (tx_pkts < nb_pkts) { + rte_prefetch_non_temporal((void *)(desc + + (CQE_SZ((head + 2) & qmask)))); + tx_compl_cq = (struct nix_cqe_hdr_s *) + (desc + CQE_SZ(head)); + tx_compl_s0 = (struct nix_send_comp_s *) + ((uint64_t *)tx_compl_cq + 1); + m = txq->tx_compl.ptr[tx_compl_s0->sqe_id]; + while (m->next != NULL) { + m_next = m->next; + rte_pktmbuf_free_seg(m); + m = m_next; + } + rte_pktmbuf_free_seg(m); + + head++; + head &= qmask; + tx_pkts++; + } + txq->tx_compl.head = head; + txq->tx_compl.available -= nb_pkts; + + plt_write64((wdata | nb_pkts), txq->tx_compl.cq_door); + + if (mt_safe) + rte_spinlock_unlock(&txq->tx_compl.ext_buf_lock); +} + + #endif /* __CN9K_ETHDEV_H__ */ diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h index e956c1ad2a..8f1e05a461 100644 --- a/drivers/net/cnxk/cn9k_tx.h +++ b/drivers/net/cnxk/cn9k_tx.h @@ -559,83 +559,6 @@ cn9k_nix_xmit_mseg_one_release(uint64_t *cmd, void *lmt_addr, } while (lmt_status == 0); } -static inline uint16_t -nix_tx_compl_nb_pkts(struct cn9k_eth_txq *txq, const uint64_t wdata, - const uint16_t pkts, const uint32_t qmask) -{ - uint32_t available = txq->tx_compl.available; - - /* Update the available count if cached value is not enough */ - if (unlikely(available < pkts)) { - uint64_t reg, head, tail; - - /* Use LDADDA version to avoid reorder */ - reg = roc_atomic64_add_sync(wdata, txq->tx_compl.cq_status); - /* CQ_OP_STATUS operation error */ - if (reg & BIT_ULL(NIX_CQ_OP_STAT_OP_ERR) || - reg & BIT_ULL(NIX_CQ_OP_STAT_CQ_ERR)) - return 0; - - tail = reg & 0xFFFFF; - head = (reg >> 20) & 0xFFFFF; - if (tail < head) - available = tail - head + qmask + 1; - else - available = tail - head; - - txq->tx_compl.available = available; - } - return RTE_MIN(pkts, available); -} - -static inline void -handle_tx_completion_pkts(struct cn9k_eth_txq *txq, const uint16_t pkts, - uint8_t mt_safe) -{ -#define CNXK_NIX_CQ_ENTRY_SZ 128 -#define CQE_SZ(x) ((x) * CNXK_NIX_CQ_ENTRY_SZ) - - uint16_t tx_pkts = 0, nb_pkts; - const uintptr_t desc = txq->tx_compl.desc_base; - const uint64_t wdata = txq->tx_compl.wdata; - const uint32_t qmask = txq->tx_compl.qmask; - uint32_t head = txq->tx_compl.head; - struct nix_cqe_hdr_s *tx_compl_cq; - struct nix_send_comp_s *tx_compl_s0; - struct rte_mbuf *m_next, *m; - - if (mt_safe) - rte_spinlock_lock(&txq->tx_compl.ext_buf_lock); - - nb_pkts = nix_tx_compl_nb_pkts(txq, wdata, pkts, qmask); - while (tx_pkts < nb_pkts) { - rte_prefetch_non_temporal((void *)(desc + - (CQE_SZ((head + 2) & qmask)))); - tx_compl_cq = (struct nix_cqe_hdr_s *) - (desc + CQE_SZ(head)); - tx_compl_s0 = (struct nix_send_comp_s *) - ((uint64_t *)tx_compl_cq + 1); - m = txq->tx_compl.ptr[tx_compl_s0->sqe_id]; - while (m->next != 
NULL) { - m_next = m->next; - rte_pktmbuf_free_seg(m); - m = m_next; - } - rte_pktmbuf_free_seg(m); - - head++; - head &= qmask; - tx_pkts++; - } - txq->tx_compl.head = head; - txq->tx_compl.available -= nb_pkts; - - plt_write64((wdata | nb_pkts), txq->tx_compl.cq_door); - - if (mt_safe) - rte_spinlock_unlock(&txq->tx_compl.ext_buf_lock); -} - static __rte_always_inline uint16_t cn9k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts, uint64_t *cmd, const uint16_t flags) @@ -648,7 +571,7 @@ cn9k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts, uint16_t i; if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && txq->tx_compl.ena) - handle_tx_completion_pkts(txq, pkts, 0); + handle_tx_completion_pkts(txq, 0); NIX_XMIT_FC_OR_RETURN(txq, pkts); @@ -700,7 +623,7 @@ cn9k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts, uint64_t i; if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && txq->tx_compl.ena) - handle_tx_completion_pkts(txq, pkts, 0); + handle_tx_completion_pkts(txq, 0); NIX_XMIT_FC_OR_RETURN(txq, pkts); @@ -1049,7 +972,7 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts_left; if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && txq->tx_compl.ena) - handle_tx_completion_pkts(txq, pkts, 0); + handle_tx_completion_pkts(txq, 0); NIX_XMIT_FC_OR_RETURN(txq, pkts);
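
The queue-stop change above amounts to: reap completion CQ entries, re-read the SQ head and tail, and repeat until they meet. A minimal, self-contained C sketch of that drain pattern follows; struct mock_sq and the mock_* helpers are illustrative stand-ins for the driver state, handle_tx_completion_pkts() and roc_nix_sq_head_tail_get(), not driver or ROC APIs.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the SQ state the hardware/ROC layer tracks. */
struct mock_sq {
	uint32_t head;        /* consumer index, advances as completions land */
	uint32_t tail;        /* producer index of the last submitted descriptor */
	uint32_t outstanding; /* completions not yet reaped by software */
};

/* Stand-in for handle_tx_completion_pkts(): reap at most a small batch. */
static uint32_t
mock_reap_completions(struct mock_sq *sq)
{
	uint32_t n = sq->outstanding > 4 ? 4 : sq->outstanding;

	sq->outstanding -= n;
	sq->head += n;
	return n;
}

/* Stand-in for roc_nix_sq_head_tail_get(). */
static void
mock_head_tail_get(const struct mock_sq *sq, uint32_t *head, uint32_t *tail)
{
	*head = sq->head;
	*tail = sq->tail;
}

int
main(void)
{
	struct mock_sq sq = { .head = 0, .tail = 10, .outstanding = 10 };
	uint32_t head, tail;

	/* Same shape as the drain loop run on Tx queue stop. */
	do {
		uint32_t n = mock_reap_completions(&sq);

		mock_head_tail_get(&sq, &head, &tail);
		printf("reaped %u completions, head=%u tail=%u\n",
		       (unsigned)n, (unsigned)head, (unsigned)tail);
	} while (head != tail);

	return 0;
}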
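
Similarly, a small sketch of the head/tail arithmetic nix_tx_compl_nb_pkts() uses when refreshing the cached available count from the CQ_OP_STATUS read; cq_entries_pending() is a hypothetical helper name, and qmask is assumed to be the ring size minus one (rings are power-of-two sized).

#include <stdint.h>

/* Mirrors the wraparound math in nix_tx_compl_nb_pkts(): adding qmask + 1
 * restores one full ring size when the tail index has wrapped past the head. */
static inline uint32_t
cq_entries_pending(uint32_t head, uint32_t tail, uint32_t qmask)
{
	if (tail < head)
		return tail - head + qmask + 1;
	return tail - head;
}

For example, with a 256-entry CQ (qmask 255), head 250 and tail 4 give 4 - 250 + 256 = 10 pending entries.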