[v1] graph: fix out of bounds access when re-allocate node objs

Message ID 20220727023924.2066465-1-zhirun.yan@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series [v1] graph: fix out of bounds access when re-allocate node objs

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/intel-Testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-aarch64-unit-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/Intel-compilation success Compilation OK

Commit Message

Yan, Zhirun July 27, 2022, 2:39 a.m. UTC
  For __rte_node_enqueue_prologue(), if the number of objs is more than
node->size * 2, the extra objs will be written out of bounds. It should
use __rte_node_stream_alloc_size() to request enough memory.

And for rte_node_next_stream_put(), it may re-allocate too small a
buffer when the node's free space is small and the number of new objs
is less than the current node->size. Obj pointers beyond the new size
may be lost, causing a memory leak. It should request enough memory to
hold at least both the original objs and the new objs.

Fixes: 40d4f51403ec ("graph: implement fastpath routines")

Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
Signed-off-by: Liang, Cunming <cunming.liang@intel.com>
---
 lib/graph/rte_graph_worker.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)
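
For context, a minimal sketch of the arithmetic behind the fix, assuming
__rte_node_stream_alloc() doubles the current stream size as the
node->size * 2 bound in the commit message implies; all values below are
hypothetical:

#include <stdint.h>
#include <stdio.h>
#include <rte_common.h> /* rte_align32pow2() */

int main(void)
{
	const uint16_t node_size = 256; /* current node->size (hypothetical) */
	const uint16_t idx = 200;       /* objs already queued (hypothetical) */
	const uint16_t space = 600;     /* objs being enqueued (hypothetical) */

	/* Old path: __rte_node_stream_alloc() grows to node->size * 2,
	 * which is still smaller than idx + space, so the enqueue writes
	 * past the end of the stream. */
	const uint32_t doubled = (uint32_t)node_size * 2;

	/* Fixed path: request node->size + space, rounded up to the next
	 * power of two by rte_align32pow2(). */
	const uint32_t req_size = rte_align32pow2(node_size + space);

	printf("need %u slots, doubling gives %u, fix requests %u\n",
	       (unsigned int)(idx + space), (unsigned int)doubled,
	       (unsigned int)req_size);
	return 0;
}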
  

Comments

Jerin Jacob Aug. 1, 2022, 1:13 p.m. UTC | #1
On Wed, Jul 27, 2022 at 8:10 AM Zhirun Yan <zhirun.yan@intel.com> wrote:
>
> For __rte_node_enqueue_prologue(), if the number of objs is more than
> node->size * 2, the extra objs will be written out of bounds. It should
> use __rte_node_stream_alloc_size() to request enough memory.
>
> And for rte_node_next_stream_put(), it may re-allocate too small a
> buffer when the node's free space is small and the number of new objs
> is less than the current node->size. Obj pointers beyond the new size
> may be lost, causing a memory leak. It should request enough memory to
> hold at least both the original objs and the new objs.
>
> Fixes: 40d4f51403ec ("graph: implement fastpath routines")
>
> Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
> Signed-off-by: Liang, Cunming <cunming.liang@intel.com>
> ---
>  lib/graph/rte_graph_worker.h | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/lib/graph/rte_graph_worker.h b/lib/graph/rte_graph_worker.h
> index 0c0b9c095a..b7d145c3cb 100644
> --- a/lib/graph/rte_graph_worker.h
> +++ b/lib/graph/rte_graph_worker.h
> @@ -218,13 +218,16 @@ static __rte_always_inline void
>  __rte_node_enqueue_prologue(struct rte_graph *graph, struct rte_node *node,
>                             const uint16_t idx, const uint16_t space)
>  {
> +       uint32_t req_size;
>
>         /* Add to the pending stream list if the node is new */
>         if (idx == 0)
>                 __rte_node_enqueue_tail_update(graph, node);
>
> -       if (unlikely(node->size < (idx + space)))
> -               __rte_node_stream_alloc(graph, node);
> +       if (unlikely(node->size < (idx + space))) {
> +               req_size = rte_align32pow2(node->size + space);
> +               __rte_node_stream_alloc_size(graph, node, req_size);
> +       }

Change looks good to me.

Please add an inline function to avoid code duplication (the same change
is needed in rte_node_next_stream_get()).


With above change:
Acked-by: Jerin Jacob <jerinj@marvell.com>
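
One possible shape for such a helper, intended to sit in
rte_graph_worker.h next to the two call sites; the name
__rte_node_stream_check_realloc() is hypothetical and not part of this
patch:

static __rte_always_inline void
__rte_node_stream_check_realloc(struct rte_graph *graph, struct rte_node *node,
				const uint16_t idx, const uint16_t nb_objs)
{
	uint32_t req_size;

	/* Grow the stream so it holds the queued objs plus the new ones. */
	if (unlikely(node->size < (idx + nb_objs))) {
		req_size = rte_align32pow2(node->size + nb_objs);
		__rte_node_stream_alloc_size(graph, node, req_size);
	}
}

__rte_node_enqueue_prologue() would then call it as
__rte_node_stream_check_realloc(graph, node, idx, space) and
rte_node_next_stream_get() as
__rte_node_stream_check_realloc(graph, node, idx, nb_objs), keeping the
size calculation in one place.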

>  }
>
>  /**
> @@ -430,9 +433,12 @@ rte_node_next_stream_get(struct rte_graph *graph, struct rte_node *node,
>         node = __rte_node_next_node_get(node, next);
>         const uint16_t idx = node->idx;
>         uint16_t free_space = node->size - idx;
> +       uint32_t req_size;
>
> -       if (unlikely(free_space < nb_objs))
> -               __rte_node_stream_alloc_size(graph, node, nb_objs);
> +       if (unlikely(free_space < nb_objs)) {
> +               req_size = rte_align32pow2(node->size + nb_objs);
> +               __rte_node_stream_alloc_size(graph, node, req_size);
> +       }
>
>         return &node->objs[idx];
>  }
> --
> 2.25.1
>
  

Patch

diff --git a/lib/graph/rte_graph_worker.h b/lib/graph/rte_graph_worker.h
index 0c0b9c095a..b7d145c3cb 100644
--- a/lib/graph/rte_graph_worker.h
+++ b/lib/graph/rte_graph_worker.h
@@ -218,13 +218,16 @@  static __rte_always_inline void
 __rte_node_enqueue_prologue(struct rte_graph *graph, struct rte_node *node,
 			    const uint16_t idx, const uint16_t space)
 {
+	uint32_t req_size;
 
 	/* Add to the pending stream list if the node is new */
 	if (idx == 0)
 		__rte_node_enqueue_tail_update(graph, node);
 
-	if (unlikely(node->size < (idx + space)))
-		__rte_node_stream_alloc(graph, node);
+	if (unlikely(node->size < (idx + space))) {
+		req_size = rte_align32pow2(node->size + space);
+		__rte_node_stream_alloc_size(graph, node, req_size);
+	}
 }
 
 /**
@@ -430,9 +433,12 @@  rte_node_next_stream_get(struct rte_graph *graph, struct rte_node *node,
 	node = __rte_node_next_node_get(node, next);
 	const uint16_t idx = node->idx;
 	uint16_t free_space = node->size - idx;
+	uint32_t req_size;
 
-	if (unlikely(free_space < nb_objs))
-		__rte_node_stream_alloc_size(graph, node, nb_objs);
+	if (unlikely(free_space < nb_objs)) {
+		req_size = rte_align32pow2(node->size + nb_objs);
+		__rte_node_stream_alloc_size(graph, node, req_size);
+	}
 
 	return &node->objs[idx];
 }
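
For readers less familiar with the graph fast-path API, a hypothetical
node process callback showing the rte_node_next_stream_get()/_put()
pattern the second hunk affects; the function name and the edge index 0
are illustrative only:

#include <rte_graph_worker.h>

/* Move every received obj to next edge 0 of this node. */
static uint16_t
demo_node_process(struct rte_graph *graph, struct rte_node *node,
		  void **objs, uint16_t nb_objs)
{
	void **to_next;
	uint16_t i;

	/* May re-allocate the next node's stream; with the fix the request
	 * covers both the objs already queued there and the new ones. */
	to_next = rte_node_next_stream_get(graph, node, 0, nb_objs);

	for (i = 0; i < nb_objs; i++)
		to_next[i] = objs[i];

	/* Mark nb_objs objects as enqueued to the next node. */
	rte_node_next_stream_put(graph, node, 0, nb_objs);

	return nb_objs;
}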