From patchwork Thu Jul 7 00:14:14 2022
X-Patchwork-Submitter: Aleksandr Miloshenko
X-Patchwork-Id: 113770
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Aleksandr Miloshenko
To: jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, b.oconnor@f5.com, s.lewites@f5.com, Aleksandr Miloshenko
Subject: [PATCH] iavf: do tx done cleanup starting from tx_tail
Date: Wed, 6 Jul 2022 17:14:14 -0700
Message-Id: <20220707001414.25105-1-a.miloshenko@f5.com>
List-Id: DPDK patches and discussions

iavf_xmit_pkts() sets tx_tail to the next of the last transmitted tx
descriptor, so the cleanup of tx done descriptors must start from
tx_tail, not from the next of tx_tail. Otherwise,
rte_eth_tx_done_cleanup() doesn't free the first tx done mbuf when the
tx queue is full.
Signed-off-by: Aleksandr Miloshenko
---
 drivers/net/iavf/iavf_rxtx.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 109ba756f8..7cd5db6e49 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -3184,14 +3184,14 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
 		uint32_t free_cnt)
 {
 	struct iavf_tx_entry *swr_ring = txq->sw_ring;
-	uint16_t i, tx_last, tx_id;
+	uint16_t tx_last, tx_id;
 	uint16_t nb_tx_free_last;
 	uint16_t nb_tx_to_clean;
-	uint32_t pkt_cnt;
+	uint32_t pkt_cnt = 0;
 
-	/* Start free mbuf from the next of tx_tail */
-	tx_last = txq->tx_tail;
-	tx_id = swr_ring[tx_last].next_id;
+	/* Start free mbuf from tx_tail */
+	tx_id = txq->tx_tail;
+	tx_last = tx_id;
 
 	if (txq->nb_free == 0 && iavf_xmit_cleanup(txq))
 		return 0;
@@ -3204,10 +3204,8 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
 	/* Loop through swr_ring to count the amount of
 	 * freeable mubfs and packets.
 	 */
-	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
-		for (i = 0; i < nb_tx_to_clean &&
-			pkt_cnt < free_cnt &&
-			tx_id != tx_last; i++) {
+	while (pkt_cnt < free_cnt) {
+		do {
 			if (swr_ring[tx_id].mbuf != NULL) {
 				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
 				swr_ring[tx_id].mbuf = NULL;
@@ -3220,7 +3218,7 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
 		}
 
 		tx_id = swr_ring[tx_id].next_id;
-	}
+	} while (--nb_tx_to_clean && pkt_cnt < free_cnt && tx_id != tx_last);
 
 	if (txq->rs_thresh > txq->nb_tx_desc -
 		txq->nb_free || tx_id == tx_last)