From patchwork Wed May 15 18:08:17 2019
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 53447
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Sriharsha Basavapatna
Date: Wed, 15 May 2019 11:08:17 -0700
Message-Id: <20190515180817.71523-7-ajit.khaparde@broadcom.com>
In-Reply-To: <20190515180817.71523-1-ajit.khaparde@broadcom.com>
References: <20190515180817.71523-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH 6/6] net/bnxt: support bulk free of Tx mbufs

The driver currently uses rte_pktmbuf_free() to free each mbuf
individually after transmit completion. This is now optimized to free
multiple mbufs at a time using rte_mempool_put_bulk().
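As a minimal standalone sketch (not part of the patch itself), the helper
below shows the same bulk-free pattern in isolation: completed Tx segments
are grouped by their originating mempool and returned with a single
rte_mempool_put_bulk() call instead of one rte_pktmbuf_free() per mbuf.
The name bulk_free_mbufs(), its parameters, and the caller-provided cache
array are illustrative assumptions; it also assumes each array entry is an
individual, completed mbuf segment.

#include <rte_mbuf.h>
#include <rte_mempool.h>

static void
bulk_free_mbufs(struct rte_mbuf **segs, unsigned int nb_segs,
		struct rte_mbuf **cache, unsigned int cache_sz)
{
	struct rte_mempool *pool = NULL;
	unsigned int blk = 0;
	unsigned int i;

	for (i = 0; i < nb_segs; i++) {
		/* Drop our reference; NULL means the mbuf is still in use. */
		struct rte_mbuf *m = rte_pktmbuf_prefree_seg(segs[i]);

		if (m == NULL)
			continue;

		if (pool == m->pool && blk < cache_sz) {
			/* Same pool as the previous mbufs: keep accumulating. */
			cache[blk++] = m;
			continue;
		}

		/* Pool changed (or cache full): flush what was accumulated. */
		if (blk != 0)
			rte_mempool_put_bulk(pool, (void **)cache, blk);
		pool = m->pool;
		cache[0] = m;
		blk = 1;
	}

	if (blk != 0)
		rte_mempool_put_bulk(pool, (void **)cache, blk);
}

The grouping step is needed because rte_mempool_put_bulk() returns objects
to exactly one mempool, while mbufs sitting on a Tx ring may have been
allocated from different pools.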
Signed-off-by: Ajit Khaparde
Signed-off-by: Sriharsha Basavapatna
---
 drivers/net/bnxt/bnxt_txq.c | 11 ++++++++
 drivers/net/bnxt/bnxt_txq.h |  1 +
 drivers/net/bnxt/bnxt_txr.c | 50 +++++++++++++++++++++++++++++--------
 3 files changed, 52 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txq.c b/drivers/net/bnxt/bnxt_txq.c
index b9b975e4c..5a7bfaf3e 100644
--- a/drivers/net/bnxt/bnxt_txq.c
+++ b/drivers/net/bnxt/bnxt_txq.c
@@ -69,6 +69,7 @@ void bnxt_tx_queue_release_op(void *tx_queue)
 		rte_memzone_free(txq->mz);
 		txq->mz = NULL;
 
+		rte_free(txq->free);
 		rte_free(txq);
 	}
 }
@@ -110,6 +111,16 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 		rc = -ENOMEM;
 		goto out;
 	}
+
+	txq->free = rte_zmalloc_socket(NULL,
+				       sizeof(struct rte_mbuf *) * nb_desc,
+				       RTE_CACHE_LINE_SIZE, socket_id);
+	if (!txq->free) {
+		PMD_DRV_LOG(ERR, "allocation of tx mbuf free array failed!");
+		rte_free(txq);
+		rc = -ENOMEM;
+		goto out;
+	}
 	txq->bp = bp;
 	txq->nb_tx_desc = nb_desc;
 	txq->tx_free_thresh = tx_conf->tx_free_thresh;
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 720ca90cf..a0d4678d9 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -33,6 +33,7 @@ struct bnxt_tx_queue {
 	unsigned int		cp_nr_rings;
 	struct bnxt_cp_ring_info	*cp_ring;
 	const struct rte_memzone *mz;
+	struct rte_mbuf **free;
 };
 
 void bnxt_free_txq_stats(struct bnxt_tx_queue *txq);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index b15778b39..9de12e0d0 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -320,6 +320,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		RTE_VERIFY(m_seg->data_len);
 		txr->tx_prod = RING_NEXT(txr->tx_ring_struct, txr->tx_prod);
 		tx_buf = &txr->tx_buf_ring[txr->tx_prod];
+		tx_buf->mbuf = m_seg;
 
 		txbd = &txr->tx_desc_ring[txr->tx_prod];
 		txbd->address = rte_cpu_to_le_64(rte_mbuf_data_iova(m_seg));
@@ -339,24 +340,53 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 static void bnxt_tx_cmp(struct bnxt_tx_queue *txq, int nr_pkts)
 {
 	struct bnxt_tx_ring_info *txr = txq->tx_ring;
+	struct rte_mempool *pool = NULL;
+	struct rte_mbuf **free = txq->free;
 	uint16_t cons = txr->tx_cons;
+	unsigned int blk = 0;
 	int i, j;
 
 	for (i = 0; i < nr_pkts; i++) {
-		struct bnxt_sw_tx_bd *tx_buf;
 		struct rte_mbuf *mbuf;
+		struct bnxt_sw_tx_bd *tx_buf = &txr->tx_buf_ring[cons];
+		unsigned short nr_bds = tx_buf->nr_bds;
 
-		tx_buf = &txr->tx_buf_ring[cons];
-		cons = RING_NEXT(txr->tx_ring_struct, cons);
-		mbuf = tx_buf->mbuf;
-		tx_buf->mbuf = NULL;
-
-		/* EW - no need to unmap DMA memory? */
-
-		for (j = 1; j < tx_buf->nr_bds; j++)
+		for (j = 0; j < nr_bds; j++) {
+			mbuf = tx_buf->mbuf;
+			tx_buf->mbuf = NULL;
 			cons = RING_NEXT(txr->tx_ring_struct, cons);
-		rte_pktmbuf_free(mbuf);
+			tx_buf = &txr->tx_buf_ring[cons];
+			if (!mbuf)	/* long_bd's tx_buf ? */
+				continue;
+
+			mbuf = rte_pktmbuf_prefree_seg(mbuf);
+			if (unlikely(!mbuf))
+				continue;
+
+			/* EW - no need to unmap DMA memory? */
+
+			if (likely(mbuf->pool == pool)) {
+				/* Add mbuf to the bulk free array */
+				free[blk++] = mbuf;
+			} else {
+				/* Found an mbuf from a different pool. Free
+				 * mbufs accumulated so far to the previous
+				 * pool
+				 */
+				if (likely(pool != NULL))
+					rte_mempool_put_bulk(pool,
+							     (void *)free,
+							     blk);
+
+				/* Start accumulating mbufs in a new pool */
+				free[0] = mbuf;
+				pool = mbuf->pool;
+				blk = 1;
+			}
+		}
 	}
+	if (blk)
+		rte_mempool_put_bulk(pool, (void *)free, blk);
 
 	txr->tx_cons = cons;
 }