From patchwork Thu Dec 8 07:27:25 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 120554
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: 
beilei.xing@intel.com
To: jingjing.wu@intel.com, qi.z.zhang@intel.com
Cc: dev@dpdk.org, stable@dpdk.org, Beilei Xing
Subject: [PATCH 3/3] net/idpf: fix splitq xmit free
Date: Thu, 8 Dec 2022 07:27:25 +0000
Message-Id: <20221208072725.32434-4-beilei.xing@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20221208072725.32434-1-beilei.xing@intel.com>
References: <20221208072725.32434-1-beilei.xing@intel.com>
List-Id: DPDK patches and discussions

From: Jingjing Wu

When a context descriptor is used while sending packets, the mbuf is not
freed correctly, which causes the mempool to be exhausted. This patch
refines the free function.

Fixes: 770f4dfe0f79 ("net/idpf: support basic Tx data path")
Cc: stable@dpdk.org

Signed-off-by: Jingjing Wu
Signed-off-by: Beilei Xing
---
 drivers/net/idpf/idpf_rxtx.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index b4a396c3f5..5aef8ba2b6 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1508,6 +1508,7 @@ idpf_split_tx_free(struct idpf_tx_queue *cq)
 	struct idpf_tx_entry *txe;
 	struct idpf_tx_queue *txq;
 	uint16_t gen, qid, q_head;
+	uint16_t nb_desc_clean;
 	uint8_t ctype;
 
 	txd = &compl_ring[next];
@@ -1525,20 +1526,24 @@ idpf_split_tx_free(struct idpf_tx_queue *cq)
 
 	switch (ctype) {
 	case IDPF_TXD_COMPLT_RE:
-		if (q_head == 0)
-			txq->last_desc_cleaned = txq->nb_tx_desc - 1;
-		else
-			txq->last_desc_cleaned = q_head - 1;
-		if (unlikely((txq->last_desc_cleaned % 32) == 0)) {
+		/* Clean up to q_head, which indicates the fetched txq desc id + 1.
+		 * TODO: need to refine and remove the if condition.
+		 */
+		if (unlikely(q_head % 32)) {
 			PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
 				    q_head);
 			return;
 		}
-
+		if (txq->last_desc_cleaned > q_head)
+			nb_desc_clean = (txq->nb_tx_desc - txq->last_desc_cleaned) +
+					q_head;
+		else
+			nb_desc_clean = q_head - txq->last_desc_cleaned;
+		txq->nb_free += nb_desc_clean;
+		txq->last_desc_cleaned = q_head;
 		break;
 	case IDPF_TXD_COMPLT_RS:
-		txq->nb_free++;
-		txq->nb_used--;
+		/* q_head indicates sw_id when ctype is 2 */
 		txe = &txq->sw_ring[q_head];
 		if (txe->mbuf != NULL) {
 			rte_pktmbuf_free_seg(txe->mbuf);
@@ -1693,12 +1698,16 @@ idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* fill the last descriptor with End of Packet (EOP) bit */
 		txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_EOP;
-		if (unlikely((tx_id % 32) == 0))
-			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
 		if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
 			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
 
 		txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
 		txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+
+		if (txq->nb_used >= 32) {
+			txd->qw1.cmd_dtype |= IDPF_TXD_FLEX_FLOW_CMD_RE;
+			/* update txq RE bit counters */
+			txq->nb_used = 0;
+		}
 	}
 
 	/* update the tail pointer if any packets were processed */