[RESEND] malloc: fix allocation for a specific case with ASan
Commit Message
Allocation would fail with ASan enabled if the size and alignment were
equal to half of the page size, e.g.:
size_t pg_sz = 2 * (1 << 20);
rte_malloc(NULL, pg_sz / 2, pg_sz / 2);
In such a case, try_expand_heap_primary() allocated only one page, which
is not enough to fit the allocation with such alignment when
MALLOC_ELEM_TRAILER_LEN > 0, as correctly checked by
malloc_elem_can_hold().
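
To illustrate the reasoning above (this note is not part of the original
patch), here is a minimal standalone sketch of the size arithmetic for the
failing case. The RTE_* macros and the MALLOC_ELEM_* lengths are local
stand-ins with assumed values; the real ones are build-dependent (the
trailer is non-zero only with ASan or RTE_MALLOC_DEBUG):

/* Standalone sketch; macro definitions and sizes are assumptions. */
#include <stdio.h>
#include <inttypes.h>

#define RTE_ALIGN_CEIL(v, a)    ((((v) + (a) - 1) / (a)) * (a))
#define RTE_MAX(a, b)           ((a) > (b) ? (a) : (b))
#define MALLOC_ELEM_HEADER_LEN  128  /* assumed, build-dependent */
#define MALLOC_ELEM_TRAILER_LEN 64   /* assumed, non-zero with ASan */
#define MALLOC_ELEM_OVERHEAD    (MALLOC_ELEM_HEADER_LEN + MALLOC_ELEM_TRAILER_LEN)

int main(void)
{
	uint64_t pg_sz = 2 * (1 << 20);  /* 2 MB page */
	uint64_t elt_size = pg_sz / 2;   /* 1 MB */
	uint64_t align = pg_sz / 2;      /* 1 MB */

	/* Old formula: 1 MB + overhead rounds up to a single 2 MB page. */
	uint64_t old_sz = RTE_ALIGN_CEIL(RTE_ALIGN_CEIL(elt_size, align) +
			MALLOC_ELEM_OVERHEAD, pg_sz);

	/* New formula: worst-case aligned start + element + trailer
	 * exceeds one page, so two pages are reserved. */
	uint64_t new_sz = RTE_ALIGN_CEIL(RTE_MAX(MALLOC_ELEM_HEADER_LEN, align) +
			elt_size + MALLOC_ELEM_TRAILER_LEN, pg_sz);

	printf("old alloc_sz = %" PRIu64 ", new alloc_sz = %" PRIu64 "\n",
			old_sz, new_sz);  /* 2 MB vs 4 MB */
	return 0;
}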
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
lib/eal/common/malloc_heap.c | 4 ++--
lib/eal/common/malloc_mp.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
@@ -401,8 +401,8 @@ try_expand_heap_primary(struct malloc_heap *heap, uint64_t pg_sz,
int n_segs;
bool callback_triggered = false;
- alloc_sz = RTE_ALIGN_CEIL(RTE_ALIGN_CEIL(elt_size, align) +
- MALLOC_ELEM_OVERHEAD, pg_sz);
+ alloc_sz = RTE_ALIGN_CEIL(RTE_MAX(MALLOC_ELEM_HEADER_LEN, align) +
+ elt_size + MALLOC_ELEM_TRAILER_LEN, pg_sz);
n_segs = alloc_sz / pg_sz;
/* we can't know in advance how many pages we'll need, so we malloc */
@@ -251,8 +251,8 @@ handle_alloc_request(const struct malloc_mp_req *m,
return -1;
}
- alloc_sz = RTE_ALIGN_CEIL(RTE_ALIGN_CEIL(ar->elt_size, ar->align) +
- MALLOC_ELEM_OVERHEAD, ar->page_sz);
+ alloc_sz = RTE_ALIGN_CEIL(RTE_MAX(MALLOC_ELEM_HEADER_LEN, ar->align) +
+ ar->elt_size + MALLOC_ELEM_TRAILER_LEN, ar->page_sz);
n_segs = alloc_sz / ar->page_sz;
/* we can't know in advance how many pages we'll need, so we malloc */