From patchwork Thu Apr 13 09:44:54 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Wenjing Qiao
X-Patchwork-Id: 126012
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Wenjing Qiao
To: jingjing.wu@intel.com, beilei.xing@intel.com, qi.z.zhang@intel.com
Cc: dev@dpdk.org, Wenjing Qiao, stable@dpdk.org, Christopher Pau
Subject: [PATCH 10/18] common/idpf: fix memory leaks on ctrlq functions
Date: Thu, 13 Apr 2023 05:44:54 -0400
Message-Id: <20230413094502.1714755-11-wenjing.qiao@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230413094502.1714755-1-wenjing.qiao@intel.com>
References: <20230413094502.1714755-1-wenjing.qiao@intel.com>
List-Id: DPDK patches and discussions

idpf_init_hw needs to free its q_info.

idpf_clean_arq_element needs to return buffers via
idpf_ctlq_post_rx_buffs.

Fixes: fb4ac04e9bfa ("common/idpf: introduce common library")
Cc: stable@dpdk.org

Signed-off-by: Christopher Pau
Signed-off-by: Wenjing Qiao
---
 drivers/common/idpf/base/idpf_common.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
index 69e3b32f85..de82c3458f 100644
--- a/drivers/common/idpf/base/idpf_common.c
+++ b/drivers/common/idpf/base/idpf_common.c
@@ -130,6 +130,8 @@ int idpf_init_hw(struct idpf_hw *hw, struct idpf_ctlq_size ctlq_size)
 	hw->mac.addr[4] = 0x03;
 	hw->mac.addr[5] = 0x14;
 
+	idpf_free(hw, q_info);
+
 	return 0;
 }
 
@@ -219,6 +221,7 @@ bool idpf_check_asq_alive(struct idpf_hw *hw)
 int idpf_clean_arq_element(struct idpf_hw *hw,
 			   struct idpf_arq_event_info *e, u16 *pending)
 {
+	struct idpf_dma_mem *dma_mem = NULL;
 	struct idpf_ctlq_msg msg = { 0 };
 	int status;
 	u16 msg_data_len;
@@ -226,6 +229,8 @@ int idpf_clean_arq_element(struct idpf_hw *hw,
 	*pending = 1;
 	status = idpf_ctlq_recv(hw->arq, pending, &msg);
+	if (status == -ENOMSG)
+		goto exit;
 
 	/* ctlq_msg does not align to ctlq_desc, so copy relevant data here */
 	e->desc.opcode = msg.opcode;
@@ -240,7 +245,14 @@ int idpf_clean_arq_element(struct idpf_hw *hw,
 		msg_data_len = msg.data_len;
 		idpf_memcpy(e->msg_buf, msg.ctx.indirect.payload->va, msg_data_len,
 			    IDPF_DMA_TO_NONDMA);
+		dma_mem = msg.ctx.indirect.payload;
+	} else {
+		*pending = 0;
 	}
+
+	status = idpf_ctlq_post_rx_buffs(hw, hw->arq, pending, &dma_mem);
+
+exit:
 	return status;
 }