From patchwork Wed Jul 27 02:39:24 2022
X-Patchwork-Submitter: "Yan, Zhirun"
X-Patchwork-Id: 114251
X-Patchwork-Delegate: thomas@monjalon.net
From: Zhirun Yan
To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com
Cc: Zhirun Yan, Cunming Liang
Subject: [PATCH v1] graph: fix out of bounds access when re-allocating node objs
Date: Wed, 27 Jul 2022 10:39:24 +0800
Message-Id: <20220727023924.2066465-1-zhirun.yan@intel.com>
X-Mailer: git-send-email 2.25.1

In __rte_node_enqueue_prologue(), if the number of objs to enqueue is
more than node->size * 2, the extra objs are written past the end of
the node's stream. Use __rte_node_stream_alloc_size() instead so that
the requested size covers all pending and new objs.

In rte_node_next_stream_get(), when the node's free space is small and
the number of new objs is less than the current node->size, the stream
is reallocated to a size smaller than what it already holds. The objs
pointers beyond the new size are lost, leaking those objects. Request a
size large enough to hold at least the original objs plus the new objs.
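
For illustration only (not part of the fix), the following standalone
sketch shows the sizing arithmetic of the first hunk in the diff below,
with made-up numbers. The local align32pow2() helper mirrors
rte_align32pow2() from rte_common.h so the snippet builds without the
DPDK headers:

#include <stdint.h>
#include <stdio.h>

/* Same bit-smearing as rte_align32pow2(): round up to a power of two. */
static uint32_t
align32pow2(uint32_t x)
{
	x--;
	x |= x >> 1;
	x |= x >> 2;
	x |= x >> 4;
	x |= x >> 8;
	x |= x >> 16;
	return x + 1;
}

int
main(void)
{
	uint16_t size = 256;   /* current node->size */
	uint16_t idx = 200;    /* objs already pending in the stream */
	uint16_t space = 400;  /* objs being enqueued now */

	uint32_t needed = (uint32_t)idx + space;                 /* 600 */
	uint32_t old_grow = (uint32_t)size * 2;                  /* 512: the single doubling described above, still short */
	uint32_t new_req = align32pow2((uint32_t)size + space);  /* 1024: covers pending plus new objs */

	printf("needed=%u old=%u new=%u\n",
	       (unsigned)needed, (unsigned)old_grow, (unsigned)new_req);
	return 0;
}
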
Fixes: 40d4f51403ec ("graph: implement fastpath routines")

Signed-off-by: Zhirun Yan
Signed-off-by: Liang, Cunming
Acked-by: Jerin Jacob
---
 lib/graph/rte_graph_worker.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/lib/graph/rte_graph_worker.h b/lib/graph/rte_graph_worker.h
index 0c0b9c095a..b7d145c3cb 100644
--- a/lib/graph/rte_graph_worker.h
+++ b/lib/graph/rte_graph_worker.h
@@ -218,13 +218,16 @@ static __rte_always_inline void
 __rte_node_enqueue_prologue(struct rte_graph *graph, struct rte_node *node,
 			    const uint16_t idx, const uint16_t space)
 {
+	uint32_t req_size;
 
 	/* Add to the pending stream list if the node is new */
 	if (idx == 0)
 		__rte_node_enqueue_tail_update(graph, node);
 
-	if (unlikely(node->size < (idx + space)))
-		__rte_node_stream_alloc(graph, node);
+	if (unlikely(node->size < (idx + space))) {
+		req_size = rte_align32pow2(node->size + space);
+		__rte_node_stream_alloc_size(graph, node, req_size);
+	}
 }
 
 /**
@@ -430,9 +433,12 @@ rte_node_next_stream_get(struct rte_graph *graph, struct rte_node *node,
 	node = __rte_node_next_node_get(node, next);
 	const uint16_t idx = node->idx;
 	uint16_t free_space = node->size - idx;
+	uint32_t req_size;
 
-	if (unlikely(free_space < nb_objs))
-		__rte_node_stream_alloc_size(graph, node, nb_objs);
+	if (unlikely(free_space < nb_objs)) {
+		req_size = rte_align32pow2(node->size + nb_objs);
+		__rte_node_stream_alloc_size(graph, node, req_size);
+	}
 
 	return &node->objs[idx];
 }
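
A worked example for the second hunk, again with made-up numbers: with
node->size = 512 and idx = 480, free_space is 32, so a request for
nb_objs = 100 triggers the reallocation. The old code passed only 100
(less than the 480 objs already stored) to __rte_node_stream_alloc_size(),
so the stream could end up smaller than its current contents and the
objs pointers beyond the new size were leaked. The fixed request,
rte_align32pow2(512 + 100) = 1024, keeps the pending objs and leaves
room for the new ones.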