From patchwork Thu Jul 20 09:22:49 2023 X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 129658 X-Patchwork-Delegate: thomas@monjalon.net From: Chengwen Feng Subject: [PATCH v19 1/6] memarea: introduce memarea library Date: Thu, 20 Jul 2023 09:22:49 +0000 Message-ID: <20230720092254.54157-2-fengchengwen@huawei.com> In-Reply-To: <20230720092254.54157-1-fengchengwen@huawei.com> References: <20220721044648.6817-1-fengchengwen@huawei.com> <20230720092254.54157-1-fengchengwen@huawei.com> List-Id: DPDK patches and discussions The memarea library is an allocator of variable-size objects based on a memory region. This patch provides the rte_memarea_create() and rte_memarea_destroy() APIs.
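For quick reference, a minimal create/destroy sketch modelled on the rte_memarea.h definitions and the unit test added later in this series; the memarea name and the sizes below are only illustrative:

    #include <stdio.h>
    #include <string.h>
    #include <rte_memarea.h>

    static int memarea_example_create_destroy(void)
    {
        struct rte_memarea_param init;
        struct rte_memarea *ma;

        memset(&init, 0, sizeof(init));             /* reserved fields must stay zero */
        snprintf(init.name, sizeof(init.name), "example");
        init.source = RTE_MEMAREA_SOURCE_LIBC;      /* region comes from posix_memalign() */
        init.alg = RTE_MEMAREA_ALGORITHM_NEXTFIT;   /* the only algorithm supported today */
        init.total_sz = 4096;                       /* must be at least 1024 bytes */
        init.mt_safe = 1;                           /* serialize operations with a spinlock */

        ma = rte_memarea_create(&init);
        if (ma == NULL) {
            fprintf(stderr, "memarea creation failed\n");   /* rte_errno holds the cause */
            return -1;
        }

        /* ... allocate objects from the memarea (added in later patches) ... */

        rte_memarea_destroy(ma);
        return 0;
    }
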
Signed-off-by: Chengwen Feng Reviewed-by: Dongdong Liu Acked-by: Morten Brørup Acked-by: Anatoly Burakov --- MAINTAINERS | 5 + doc/api/doxy-api-index.md | 3 +- doc/api/doxy-api.conf.in | 1 + doc/guides/prog_guide/index.rst | 1 + doc/guides/prog_guide/memarea_lib.rst | 48 ++++++ doc/guides/rel_notes/release_23_07.rst | 6 + lib/memarea/memarea_private.h | 116 ++++++++++++++ lib/memarea/meson.build | 18 +++ lib/memarea/rte_memarea.c | 204 +++++++++++++++++++++++++ lib/memarea/rte_memarea.h | 141 +++++++++++++++++ lib/memarea/version.map | 12 ++ lib/meson.build | 1 + 12 files changed, 555 insertions(+), 1 deletion(-) create mode 100644 doc/guides/prog_guide/memarea_lib.rst create mode 100644 lib/memarea/memarea_private.h create mode 100644 lib/memarea/meson.build create mode 100644 lib/memarea/rte_memarea.c create mode 100644 lib/memarea/rte_memarea.h create mode 100644 lib/memarea/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 18bc05fccd..bd9cad7ee3 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1613,6 +1613,11 @@ F: app/test/test_lpm* F: app/test/test_func_reentrancy.c F: app/test/test_xmmt_ops.h +Memarea - EXPERIMENTAL +M: Chengwen Feng +F: lib/memarea +F: doc/guides/prog_guide/memarea_lib.rst + Membership - EXPERIMENTAL M: Yipeng Wang M: Sameh Gobriel diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 3bc8778981..5c32913f92 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -65,7 +65,8 @@ The public API headers are grouped by topics: [memzone](@ref rte_memzone.h), [mempool](@ref rte_mempool.h), [malloc](@ref rte_malloc.h), - [memcpy](@ref rte_memcpy.h) + [memcpy](@ref rte_memcpy.h), + [memarea](@ref rte_memarea.h) - **timers**: [cycles](@ref rte_cycles.h), diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in index 1a4210b948..1f35d8483e 100644 --- a/doc/api/doxy-api.conf.in +++ b/doc/api/doxy-api.conf.in @@ -54,6 +54,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \ @TOPDIR@/lib/latencystats \ @TOPDIR@/lib/lpm \ @TOPDIR@/lib/mbuf \ + @TOPDIR@/lib/memarea \ @TOPDIR@/lib/member \ @TOPDIR@/lib/mempool \ @TOPDIR@/lib/meter \ diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst index d89cd3edb6..aa8eebe256 100644 --- a/doc/guides/prog_guide/index.rst +++ b/doc/guides/prog_guide/index.rst @@ -38,6 +38,7 @@ Programmer's Guide hash_lib toeplitz_hash_lib efd_lib + memarea_lib member_lib lpm_lib lpm6_lib diff --git a/doc/guides/prog_guide/memarea_lib.rst b/doc/guides/prog_guide/memarea_lib.rst new file mode 100644 index 0000000000..bf19090294 --- /dev/null +++ b/doc/guides/prog_guide/memarea_lib.rst @@ -0,0 +1,48 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2023 HiSilicon Limited + +Memarea Library +=============== + +Introduction +------------ + +The memarea library provides an allocator of variable-size objects, it is +oriented towards the application layer, providing 'region-based memory +management' function [1]. + +The main features are as follows: + +* The memory region can be initialized from the following memory sources: + + - HEAP: e.g. invoke ``rte_malloc_socket``. + + - LIBC: e.g. invoke posix_memalign. + + - Another memarea: it can be allocated from another memarea. + +* It supports MT-safe as long as it's specified at creation time. + +* The address returned by the allocator is align to 8B. 
+ +Library API Overview +-------------------- + +The ``rte_memarea_create()`` function is used to create a memarea, the function +returns the pointer to the created memarea or ``NULL`` if the creation failed. + +The ``rte_memarea_destroy()`` function is used to destroy a memarea. + +Debug Mode +---------- + +In debug mode, cookies are added at the beginning and end of objects, it will +help debugging buffer overflows. + +Debug mode is disabled by default, but can be enabled by setting +``RTE_LIBRTE_MEMAREA_DEBUG`` in ``config/rte_config.h``. + +Reference +--------- + +[1] https://en.wikipedia.org/wiki/Region-based_memory_management diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst index 6a1c45162b..2751d70740 100644 --- a/doc/guides/rel_notes/release_23_07.rst +++ b/doc/guides/rel_notes/release_23_07.rst @@ -222,6 +222,12 @@ New Features See the :doc:`../tools/dmaperf` for more details. +* **Added memarea library.** + + The memarea library is an allocator of variable-size objects, it is oriented + towards the application layer, providing 'region-based memory management' + function. + Removed Items ------------- diff --git a/lib/memarea/memarea_private.h b/lib/memarea/memarea_private.h new file mode 100644 index 0000000000..fd485bb7e7 --- /dev/null +++ b/lib/memarea/memarea_private.h @@ -0,0 +1,116 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 HiSilicon Limited + */ + +#ifndef MEMAREA_PRIVATE_H +#define MEMAREA_PRIVATE_H + +#include + +#define MEMAREA_MINIMUM_TOTAL_SIZE 1024 + +#define MEMAREA_OBJECT_SIZE_ALIGN 8 + +#define MEMAREA_OBJECT_HEADER_AVAILABLE_COOKIE 0xbeef1234beef1234ULL +#define MEMAREA_OBJECT_HEADER_ALLOCATED_COOKIE 0x12345678abcdef12ULL +#define MEMAREA_OBJECT_TRAILER_COOKIE 0xabcd1234abcd5678ULL + +/** Object cookie target status. */ +enum { + /** Object is set to be available, but don't set trailer cookie. */ + COOKIE_TARGET_STATUS_AVAILABLE, + /** Object is set to be allocated, but don't set trailer cookie. */ + COOKIE_TARGET_STATUS_ALLOCATED, + /** object is new split, the header cookie will set to be available, + * the trailer cookie of the previous object will be set. + */ + COOKIE_TARGET_STATUS_NEW_AVAILABLE, + /** object is new split, the header cookie will set to be allocated, + * the trailer cookie of the previous object will be set. + */ + COOKIE_TARGET_STATUS_NEW_ALLOCATED, + /** Object is to be merged, it will no longer exist. the header cookie + * is cleared and the trailer cookie of the previous object is cleared. + */ + COOKIE_TARGET_STATUS_CLEARED, +}; + +/** Object cookie expect status. */ +enum { + /** Object is supposed to be available. */ + COOKIE_EXPECT_STATUS_AVAILABLE, + /** Object is supposed to be allocated. */ + COOKIE_EXPECT_STATUS_ALLOCATED, + /** Object is supposed to be valid (available or allocated). */ + COOKIE_EXPECT_STATUS_VALID, +}; + +#define MEMAREA_OBJECT_IS_ALLOCATED(hdr) (TAILQ_NEXT((hdr), avail_next) == (void *)-1) +#define MEMAREA_OBJECT_MARK_ALLOCATED(hdr) (TAILQ_NEXT((hdr), avail_next) = (void *)-1) + +#ifdef RTE_LIBRTE_MEMAREA_DEBUG +#define MEMAREA_OBJECT_GET_SIZE(hdr) \ + ((uintptr_t)TAILQ_NEXT((hdr), obj_next) - (uintptr_t)(hdr) - \ + sizeof(struct memarea_objhdr) - sizeof(struct memarea_objtlr)) +#else +#define MEMAREA_OBJECT_GET_SIZE(hdr) \ + ((uintptr_t)TAILQ_NEXT((hdr), obj_next) - (uintptr_t)(hdr) - \ + sizeof(struct memarea_objhdr)) +#endif + +struct memarea_objhdr { + /** The obj_next will form obj_list. 
*/ + TAILQ_ENTRY(memarea_objhdr) obj_next; + /** If the object is available, the avail_next will link in avail_list. + * If the object has been allocated, the avail_next.tqe_next is -1. + */ + TAILQ_ENTRY(memarea_objhdr) avail_next; +#ifdef RTE_LIBRTE_MEMAREA_DEBUG + uint64_t cookie; /**< Debug cookie */ +#endif +}; + +#ifdef RTE_LIBRTE_MEMAREA_DEBUG +struct memarea_objtlr { + uint64_t cookie; /**< Debug cookie */ +}; +#endif + +TAILQ_HEAD(memarea_objhdr_list, memarea_objhdr); + +struct rte_memarea { + struct rte_memarea_param init; + rte_spinlock_t lock; + void *area_base; + struct memarea_objhdr *guard_hdr; + /** The obj_list is an address ascending ordered linked list: + * ---------------------- -------------- + * | object-1 | | object-1 | + * obj_list -> |~~~~~~~~~~~~~~~~~~~~| data-region |~~~~~~~~~~~~| + * ---> | tailq | hdr-cookie | | tlr-cookie | + * | ---------------------- -------------- + * | + * | ---------------------- -------------- + * | | object-2 | | object-2 | + * ---> |~~~~~~~~~~~~~~~~~~~~| data-region |~~~~~~~~~~~~| + * ---> | tailq | hdr-cookie | | tlr-cookie | + * | ---------------------- -------------- + * ... + * ... more objects. + * ... + * | ---------------------- + * | | object-guard | + * ---> |~~~~~~~~~~~~~~~~~~~~| + * | tailq | hdr-cookie | + * ---------------------- + * Note: the last object is the guard object, which has no data-region + * and no trailer cookie. + **/ + struct memarea_objhdr_list obj_list; + /** The avail_list is an unordered linked list. This list will hold the + * objects which are available(means can be used to allocate). + */ + struct memarea_objhdr_list avail_list; +} __rte_cache_aligned; + +#endif /* MEMAREA_PRIVATE_H */ diff --git a/lib/memarea/meson.build b/lib/memarea/meson.build new file mode 100644 index 0000000000..7e18c02d3e --- /dev/null +++ b/lib/memarea/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 HiSilicon Limited + +if is_windows + build = false + reason = 'not supported on Windows' + subdir_done() +endif + +sources = files( + 'rte_memarea.c', +) +headers = files( + 'rte_memarea.h', +) +deps += [] + +annotate_locks = false diff --git a/lib/memarea/rte_memarea.c b/lib/memarea/rte_memarea.c new file mode 100644 index 0000000000..5d806ca363 --- /dev/null +++ b/lib/memarea/rte_memarea.c @@ -0,0 +1,204 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 HiSilicon Limited + */ + +#include +#include + +#include +#include +#include +#include +#include + +#include "rte_memarea.h" +#include "memarea_private.h" + +RTE_LOG_REGISTER_DEFAULT(rte_memarea_logtype, INFO); +#define RTE_MEMAREA_LOG(level, fmt, args...) 
\ + rte_log(RTE_LOG_ ## level, rte_memarea_logtype, \ + "MEMAREA: %s(): " fmt "\n", __func__, ## args) + +static int +memarea_check_param(const struct rte_memarea_param *init) +{ + size_t len; + + if (init == NULL) { + RTE_MEMAREA_LOG(ERR, "init param is NULL!"); + return -EINVAL; + } + + len = strnlen(init->name, RTE_MEMAREA_NAMESIZE); + if (len == 0 || len >= RTE_MEMAREA_NAMESIZE) { + RTE_MEMAREA_LOG(ERR, "name size: %zu invalid!", len); + return -EINVAL; + } + + if (init->source != RTE_MEMAREA_SOURCE_HEAP && + init->source != RTE_MEMAREA_SOURCE_LIBC && + init->source != RTE_MEMAREA_SOURCE_MEMAREA) { + RTE_MEMAREA_LOG(ERR, "%s source: %d not supported!", + init->name, init->source); + return -EINVAL; + } + + if (init->total_sz < MEMAREA_MINIMUM_TOTAL_SIZE) { + RTE_MEMAREA_LOG(ERR, "%s total-size: %zu too small!", + init->name, init->total_sz); + return -EINVAL; + } + + if (init->source == RTE_MEMAREA_SOURCE_MEMAREA && init->ma.src == NULL) { + RTE_MEMAREA_LOG(ERR, "%s source memarea is NULL!", init->name); + return -EINVAL; + } + + if (init->alg != RTE_MEMAREA_ALGORITHM_NEXTFIT) { + RTE_MEMAREA_LOG(ERR, "%s algorithm: %d not supported!", + init->name, init->alg); + return -EINVAL; + } + + if (init->reserved_bits != 0 || init->reserved_64s[0] != 0 || + init->reserved_64s[1] != 0) { + RTE_MEMAREA_LOG(ERR, "%s reserved field not zero!", init->name); + return -EINVAL; + } + + return 0; +} + +static void * +memarea_alloc_from_libc(size_t size) +{ + void *ptr = NULL; + int ret; + ret = posix_memalign(&ptr, RTE_CACHE_LINE_SIZE, size); + if (ret != 0) + return NULL; + return ptr; +} + +static void * +memarea_alloc_area(const struct rte_memarea_param *init) +{ + void *ptr = NULL; + + if (init->source == RTE_MEMAREA_SOURCE_HEAP) + ptr = rte_malloc_socket(NULL, init->total_sz, RTE_CACHE_LINE_SIZE, + init->heap.socket_id); + else if (init->source == RTE_MEMAREA_SOURCE_LIBC) + ptr = memarea_alloc_from_libc(init->total_sz); + + return ptr; +} + +static void +memarea_free_area(const struct rte_memarea_param *init, void *ptr) +{ + if (init->source == RTE_MEMAREA_SOURCE_HEAP) + rte_free(ptr); + else if (init->source == RTE_MEMAREA_SOURCE_LIBC) + free(ptr); +} + +static inline void +memarea_set_cookie(struct memarea_objhdr *hdr, int status) +{ +#ifdef RTE_LIBRTE_MEMAREA_DEBUG + struct memarea_objtlr *tlr; + + if (status == 0) { + hdr->cookie = MEMAREA_OBJECT_HEADER_AVAILABLE_COOKIE; + } else if (status == 1) { + hdr->cookie = MEMAREA_OBJECT_HEADER_ALLOCATED_COOKIE; + } else if (status == 2) { + hdr->cookie = MEMAREA_OBJECT_HEADER_AVAILABLE_COOKIE; + tlr = RTE_PTR_SUB(hdr, sizeof(struct memarea_objtlr)); + tlr->cookie = MEMAREA_OBJECT_TRAILER_COOKIE; + } else if (status == 3) { + hdr->cookie = MEMAREA_OBJECT_HEADER_ALLOCATED_COOKIE; + tlr = RTE_PTR_SUB(hdr, sizeof(struct memarea_objtlr)); + tlr->cookie = MEMAREA_OBJECT_TRAILER_COOKIE; + } else if (status == 4) { + hdr->cookie = 0; + tlr = RTE_PTR_SUB(hdr, sizeof(struct memarea_objtlr)); + tlr->cookie = 0; + } +#else + RTE_SET_USED(hdr); + RTE_SET_USED(status); +#endif +} + +struct rte_memarea * +rte_memarea_create(const struct rte_memarea_param *init) +{ + struct memarea_objhdr *hdr, *guard_hdr; + struct rte_memarea *ma; + size_t align_sz; + void *ptr; + int ret; + + /** 1st: check parameter valid. */ + ret = memarea_check_param(init); + if (ret != 0) { + rte_errno = -ret; + return NULL; + } + + /** 2nd: alloc the memarea data region. 
*/ + ptr = memarea_alloc_area(init); + if (ptr == NULL) { + RTE_MEMAREA_LOG(ERR, "%s alloc memory area fail!", init->name); + rte_errno = ENOMEM; + return NULL; + } + + /** 3rd: alloc the memare management struct. */ + ma = rte_zmalloc(NULL, sizeof(struct rte_memarea), RTE_CACHE_LINE_SIZE); + if (ma == NULL) { + memarea_free_area(init, ptr); + RTE_MEMAREA_LOG(ERR, "%s alloc management object fail!", init->name); + rte_errno = ENOMEM; + return NULL; + } + + /** 4th: backup init parameter, initialize the lock and list. */ + ma->init = *init; + rte_spinlock_init(&ma->lock); + TAILQ_INIT(&ma->obj_list); + TAILQ_INIT(&ma->avail_list); + + /** 5th: initialize the first object and last guard object. */ + hdr = ptr; + align_sz = RTE_ALIGN_FLOOR(init->total_sz, MEMAREA_OBJECT_SIZE_ALIGN); + guard_hdr = RTE_PTR_ADD(ptr, align_sz - sizeof(struct memarea_objhdr)); + ma->area_base = ptr; + ma->guard_hdr = guard_hdr; + + /** 5.1: hook the first object to both obj_list and avail_list. */ + TAILQ_INSERT_TAIL(&ma->obj_list, hdr, obj_next); + TAILQ_INSERT_TAIL(&ma->avail_list, hdr, avail_next); + memarea_set_cookie(hdr, COOKIE_TARGET_STATUS_AVAILABLE); + + /** 5.2: hook the guard object to only obj_list. */ + memset(guard_hdr, 0, sizeof(struct memarea_objhdr)); + TAILQ_INSERT_AFTER(&ma->obj_list, hdr, guard_hdr, obj_next); + MEMAREA_OBJECT_MARK_ALLOCATED(guard_hdr); + memarea_set_cookie(guard_hdr, COOKIE_TARGET_STATUS_NEW_ALLOCATED); + + return ma; +} + +void +rte_memarea_destroy(struct rte_memarea *ma) +{ + if (ma == NULL) { + rte_errno = EINVAL; + return; + } + memarea_free_area(&ma->init, ma->area_base); + rte_free(ma); +} diff --git a/lib/memarea/rte_memarea.h b/lib/memarea/rte_memarea.h new file mode 100644 index 0000000000..1d4381efd7 --- /dev/null +++ b/lib/memarea/rte_memarea.h @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 HiSilicon Limited + */ + +#ifndef RTE_MEMAREA_H +#define RTE_MEMAREA_H + +/** + * @file + * RTE Memarea. + * + * The memarea is an allocator of variable-size object which based on a memory + * region. It has the following features: + * + * - The memory region can be initialized from the following memory sources: + * 1. HEAP: e.g. invoke rte_malloc_xxx family. + * 2. LIBC: e.g. invoke posix_memalign. + * 3. Another memarea: it can be allocated from another memarea. + * + * - It supports MT-safe as long as it's specified at creation time. If not + * specified, all the functions of the memarea API are lock-free, and assume + * to not be invoked in parallel on different logical cores to work on the + * same memarea. + * + * - The address returned by the allocator is align to 8B. + * + * @note The current implementation is a minimum set and does not support + * multiple-process. + */ + +#include +#include +#include + +#include + +#ifdef __cplusplus +extern "C" { +#endif + +#define RTE_MEMAREA_NAMESIZE 64 + +/** + * Memarea memory source. + */ +enum rte_memarea_source { + /** Memory source comes from rte_malloc_xxx memory. */ + RTE_MEMAREA_SOURCE_HEAP, + /** Memory source comes from libc. */ + RTE_MEMAREA_SOURCE_LIBC, + /** Memory source comes from another memarea. */ + RTE_MEMAREA_SOURCE_MEMAREA, +}; + +/** + * Memarea memory management algorithm. + */ +enum rte_memarea_algorithm { + /** The default management algorithm is a variant of the next fit + * algorithm. It uses a free-list to apply for memory and uses an + * object-list in ascending order of address to support merging + * upon free. 
+ */ + RTE_MEMAREA_ALGORITHM_NEXTFIT, +}; + +struct rte_memarea; + +/** + * Memarea creation parameters. + */ +struct rte_memarea_param { + char name[RTE_MEMAREA_NAMESIZE]; /**< Name of memarea. */ + enum rte_memarea_source source; /**< Memory source of memarea. */ + enum rte_memarea_algorithm alg; /**< Memory management algorithm. */ + /** Total size (bytes) of memarea, it should not be less be 1024. */ + size_t total_sz; + /** Indicates whether the memarea API should be MT-safe. */ + uint32_t mt_safe : 1; + /** Reserved for future field, should be initialized to zero. */ + uint32_t reserved_bits : 31; + union { + /** The initialization parameters if the source is set to be + * RTE_MEMAREA_SOURCE_HEAP. + */ + struct { + /** Socket from which to apply for memarea's memory. */ + int socket_id; + } heap; + /** The initialization parameters if the source is set to be + * RTE_MEMAREA_SOURCE_MEMAREA. + */ + struct { + /** Source memarea which to apply for this memarea's + * memory from. + */ + struct rte_memarea *src; + } ma; + }; + /** Reserved for future fields, should be initialized to zero. */ + uint64_t reserved_64s[2]; +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create memarea. + * + * Create one new memarea. + * + * @param init + * The init parameter of memarea. + * + * @return + * Non-NULL on success. Otherwise NULL is returned (the rte_errno is set). + */ +__rte_experimental +struct rte_memarea *rte_memarea_create(const struct rte_memarea_param *init); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Destroy memarea. + * + * Destroy the memarea. + * + * @param ma + * The pointer of memarea. + * + * @note The rte_errno is set if destroy failed. + */ +__rte_experimental +void rte_memarea_destroy(struct rte_memarea *ma); + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_MEMAREA_H */ diff --git a/lib/memarea/version.map b/lib/memarea/version.map new file mode 100644 index 0000000000..f36a04d7cf --- /dev/null +++ b/lib/memarea/version.map @@ -0,0 +1,12 @@ +EXPERIMENTAL { + global: + + rte_memarea_create; + rte_memarea_destroy; + + local: *; +}; + +INTERNAL { + local: *; +}; diff --git a/lib/meson.build b/lib/meson.build index fac2f52cad..36821e7007 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -42,6 +42,7 @@ libraries = [ 'kni', 'latencystats', 'lpm', + 'memarea', 'member', 'pcapng', 'power', From patchwork Thu Jul 20 09:22:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 129657 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1550642EBF; Thu, 20 Jul 2023 11:31:22 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4BFDA42BB1; Thu, 20 Jul 2023 11:31:13 +0200 (CEST) Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by mails.dpdk.org (Postfix) with ESMTP id F3B0640EE3 for ; Thu, 20 Jul 2023 11:31:09 +0200 (CEST) Received: from dggpeml100024.china.huawei.com (unknown [172.30.72.56]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4R66mN0nhKzNmTw; Thu, 20 Jul 2023 17:27:48 +0800 (CST) Received: from localhost.localdomain (10.50.163.32) by dggpeml100024.china.huawei.com (7.185.36.115) with Microsoft SMTP Server 
(version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.27; Thu, 20 Jul 2023 17:31:08 +0800 From: Chengwen Feng To: , CC: , , , , , , Subject: [PATCH v19 2/6] test/memarea: support memarea test Date: Thu, 20 Jul 2023 09:22:50 +0000 Message-ID: <20230720092254.54157-3-fengchengwen@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230720092254.54157-1-fengchengwen@huawei.com> References: <20220721044648.6817-1-fengchengwen@huawei.com> <20230720092254.54157-1-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.50.163.32] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml100024.china.huawei.com (7.185.36.115) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch supports memarea test of rte_memarea_create() and rte_memarea_destroy() API. Signed-off-by: Chengwen Feng Reviewed-by: Dongdong Liu Acked-by: Morten Brørup Acked-by: Anatoly Burakov --- MAINTAINERS | 1 + app/test/meson.build | 2 + app/test/test_memarea.c | 166 ++++++++++++++++++++++++++++++++++++++++ 3 files changed, 169 insertions(+) create mode 100644 app/test/test_memarea.c diff --git a/MAINTAINERS b/MAINTAINERS index bd9cad7ee3..4ee43a9964 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1617,6 +1617,7 @@ Memarea - EXPERIMENTAL M: Chengwen Feng F: lib/memarea F: doc/guides/prog_guide/memarea_lib.rst +F: app/test/test_memarea* Membership - EXPERIMENTAL M: Yipeng Wang diff --git a/app/test/meson.build b/app/test/meson.build index b89cf0368f..0d9701d8c6 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -83,6 +83,7 @@ test_sources = files( 'test_malloc.c', 'test_malloc_perf.c', 'test_mbuf.c', + 'test_memarea.c', 'test_member.c', 'test_member_perf.c', 'test_memcpy.c', @@ -201,6 +202,7 @@ fast_tests = [ ['malloc_autotest', false, true], ['mbuf_autotest', false, true], ['mcslock_autotest', false, true], + ['memarea_autotest', true, true], ['memcpy_autotest', true, true], ['memory_autotest', false, true], ['mempool_autotest', false, true], diff --git a/app/test/test_memarea.c b/app/test/test_memarea.c new file mode 100644 index 0000000000..6078c93a16 --- /dev/null +++ b/app/test/test_memarea.c @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 HiSilicon Limited + */ + +#ifdef RTE_EXEC_ENV_WINDOWS + +#include + +#include "test.h" + +static int +test_memarea(void) +{ + printf("memarea not supported on Windows, skipping test\n"); + return TEST_SKIPPED; +} + +#else + +#include +#include + +#include +#include +#include + +#include "test.h" + +#define MEMAREA_TEST_DEFAULT_SIZE 0x1000 + +static void +test_memarea_init_param(struct rte_memarea_param *init) +{ + memset(init, 0, sizeof(struct rte_memarea_param)); + sprintf(init->name, "%s", "autotest"); + init->source = RTE_MEMAREA_SOURCE_LIBC; + init->total_sz = MEMAREA_TEST_DEFAULT_SIZE; + init->mt_safe = 1; +} + +static int +test_memarea_create_bad_param(void) +{ + struct rte_memarea_param init; + struct rte_memarea *ma; + + /* test for NULL */ + rte_errno = 0; + ma = rte_memarea_create(NULL); + TEST_ASSERT(ma == NULL, "Memarea creation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + /* test for invalid name */ + rte_errno = 0; + memset(&init, 0, sizeof(init)); + ma = rte_memarea_create(&init); + TEST_ASSERT(ma == NULL, "Memarea creation expect fail"); + 
TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + rte_errno = 0; + memset(&init.name, 1, sizeof(init.name)); + ma = rte_memarea_create(&init); + TEST_ASSERT(ma == NULL, "Memarea creation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + /* test for invalid source */ + rte_errno = 0; + test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_MEMAREA + 1; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma == NULL, "Memarea creation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + /* test for total_sz */ + rte_errno = 0; + test_memarea_init_param(&init); + init.total_sz = 0; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma == NULL, "Memarea creation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + /* test for memarea NULL */ + rte_errno = 0; + test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_MEMAREA; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma == NULL, "Memarea creation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + /* test for algorithm invalid */ + rte_errno = 0; + test_memarea_init_param(&init); + init.alg = RTE_MEMAREA_ALGORITHM_NEXTFIT + 1; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma == NULL, "Memarea creation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + /* test for reserved field */ + rte_errno = 0; + test_memarea_init_param(&init); + init.reserved_bits = 1; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma == NULL, "Memarea creation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + rte_errno = 0; + test_memarea_init_param(&init); + init.reserved_64s[0] = 1; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma == NULL, "Memarea creation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + rte_errno = 0; + test_memarea_init_param(&init); + init.reserved_64s[1] = 1; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma == NULL, "Memarea creation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + return TEST_SUCCESS; +} + +static int +test_memarea_create_destroy(void) +{ + struct rte_memarea *ma; + struct rte_memarea_param init; + + rte_errno = 0; + + /* test for create with HEAP */ + test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_HEAP; + init.heap.socket_id = SOCKET_ID_ANY; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma != NULL, "Memarea creation failed"); + rte_memarea_destroy(ma); + + /* test for create with LIBC */ + test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_LIBC; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma != NULL, "Memarea creation failed"); + rte_memarea_destroy(ma); + + return TEST_SUCCESS; +} + +static struct unit_test_suite memarea_test_suite = { + .suite_name = "Memarea Unit Test Suite", + .setup = NULL, + .teardown = NULL, + .unit_test_cases = { + TEST_CASE(test_memarea_create_bad_param), + TEST_CASE(test_memarea_create_destroy), + + TEST_CASES_END() /**< NULL terminate unit test array */ + } +}; + +static int +test_memarea(void) +{ + return unit_test_suite_runner(&memarea_test_suite); +} + +#endif /* RTE_EXEC_ENV_WINDOWS */ + +REGISTER_TEST_COMMAND(memarea_autotest, test_memarea); From patchwork Thu Jul 20 09:22:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 129656 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: 
patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CB32842EBF; Thu, 20 Jul 2023 11:31:15 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1F68341144; Thu, 20 Jul 2023 11:31:12 +0200 (CEST) Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by mails.dpdk.org (Postfix) with ESMTP id EE78240E2D for ; Thu, 20 Jul 2023 11:31:09 +0200 (CEST) Received: from dggpeml100024.china.huawei.com (unknown [172.30.72.57]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4R66pb48VQzVjp3; Thu, 20 Jul 2023 17:29:43 +0800 (CST) Received: from localhost.localdomain (10.50.163.32) by dggpeml100024.china.huawei.com (7.185.36.115) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.27; Thu, 20 Jul 2023 17:31:08 +0800 From: Chengwen Feng To: , CC: , , , , , , Subject: [PATCH v19 3/6] memarea: support alloc and free API Date: Thu, 20 Jul 2023 09:22:51 +0000 Message-ID: <20230720092254.54157-4-fengchengwen@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230720092254.54157-1-fengchengwen@huawei.com> References: <20220721044648.6817-1-fengchengwen@huawei.com> <20230720092254.54157-1-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.50.163.32] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml100024.china.huawei.com (7.185.36.115) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch supports rte_memarea_alloc() and rte_memarea_free() API. Signed-off-by: Chengwen Feng Reviewed-by: Dongdong Liu Acked-by: Morten Brørup Acked-by: Anatoly Burakov --- doc/guides/prog_guide/memarea_lib.rst | 6 + lib/memarea/memarea_private.h | 10 ++ lib/memarea/rte_memarea.c | 159 ++++++++++++++++++++++++++ lib/memarea/rte_memarea.h | 46 ++++++++ lib/memarea/version.map | 2 + 5 files changed, 223 insertions(+) diff --git a/doc/guides/prog_guide/memarea_lib.rst b/doc/guides/prog_guide/memarea_lib.rst index bf19090294..157baf3c7e 100644 --- a/doc/guides/prog_guide/memarea_lib.rst +++ b/doc/guides/prog_guide/memarea_lib.rst @@ -33,6 +33,12 @@ returns the pointer to the created memarea or ``NULL`` if the creation failed. The ``rte_memarea_destroy()`` function is used to destroy a memarea. +The ``rte_memarea_alloc()`` function is used to alloc one memory object from +the memarea. + +The ``rte_memarea_free()`` function is used to free one memory object which +allocated by ``rte_memarea_alloc()``. 
+ Debug Mode ---------- diff --git a/lib/memarea/memarea_private.h b/lib/memarea/memarea_private.h index fd485bb7e7..ab6253294e 100644 --- a/lib/memarea/memarea_private.h +++ b/lib/memarea/memarea_private.h @@ -52,10 +52,20 @@ enum { #define MEMAREA_OBJECT_GET_SIZE(hdr) \ ((uintptr_t)TAILQ_NEXT((hdr), obj_next) - (uintptr_t)(hdr) - \ sizeof(struct memarea_objhdr) - sizeof(struct memarea_objtlr)) +#define MEMAREA_SPLIT_OBJECT_MIN_SIZE \ + (sizeof(struct memarea_objhdr) + MEMAREA_OBJECT_SIZE_ALIGN + \ + sizeof(struct memarea_objtlr)) +#define MEMAREA_SPLIT_OBJECT_GET_HEADER(hdr, alloc_sz) \ + RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr) + alloc_sz + \ + sizeof(struct memarea_objtlr)) #else #define MEMAREA_OBJECT_GET_SIZE(hdr) \ ((uintptr_t)TAILQ_NEXT((hdr), obj_next) - (uintptr_t)(hdr) - \ sizeof(struct memarea_objhdr)) +#define MEMAREA_SPLIT_OBJECT_MIN_SIZE \ + (sizeof(struct memarea_objhdr) + MEMAREA_OBJECT_SIZE_ALIGN) +#define MEMAREA_SPLIT_OBJECT_GET_HEADER(hdr, alloc_sz) \ + RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr) + alloc_sz) #endif struct memarea_objhdr { diff --git a/lib/memarea/rte_memarea.c b/lib/memarea/rte_memarea.c index 5d806ca363..7a35c875a7 100644 --- a/lib/memarea/rte_memarea.c +++ b/lib/memarea/rte_memarea.c @@ -2,8 +2,10 @@ * Copyright(c) 2023 HiSilicon Limited */ +#include #include #include +#include #include #include @@ -90,6 +92,8 @@ memarea_alloc_area(const struct rte_memarea_param *init) init->heap.socket_id); else if (init->source == RTE_MEMAREA_SOURCE_LIBC) ptr = memarea_alloc_from_libc(init->total_sz); + else if (init->source == RTE_MEMAREA_SOURCE_MEMAREA) + ptr = rte_memarea_alloc(init->ma.src, init->total_sz); return ptr; } @@ -101,6 +105,8 @@ memarea_free_area(const struct rte_memarea_param *init, void *ptr) rte_free(ptr); else if (init->source == RTE_MEMAREA_SOURCE_LIBC) free(ptr); + else if (init->source == RTE_MEMAREA_SOURCE_MEMAREA) + rte_memarea_free(init->ma.src, ptr); } static inline void @@ -202,3 +208,156 @@ rte_memarea_destroy(struct rte_memarea *ma) memarea_free_area(&ma->init, ma->area_base); rte_free(ma); } + +static inline void +memarea_lock(struct rte_memarea *ma) +{ + if (ma->init.mt_safe) + rte_spinlock_lock(&ma->lock); +} + +static inline void +memarea_unlock(struct rte_memarea *ma) +{ + if (ma->init.mt_safe) + rte_spinlock_unlock(&ma->lock); +} + +static inline void +memarea_check_cookie(const struct rte_memarea *ma, const struct memarea_objhdr *hdr, int status) +{ +#ifdef RTE_LIBRTE_MEMAREA_DEBUG + static const char *const str[] = { "PASS", "FAILED" }; + struct memarea_objtlr *tlr; + bool hdr_fail, tlr_fail; + + if (hdr == ma->guard_hdr) + return; + + tlr = RTE_PTR_SUB(TAILQ_NEXT(hdr, obj_next), sizeof(struct memarea_objtlr)); + hdr_fail = (status == COOKIE_EXPECT_STATUS_AVAILABLE && + hdr->cookie != MEMAREA_OBJECT_HEADER_AVAILABLE_COOKIE) || + (status == COOKIE_EXPECT_STATUS_ALLOCATED && + hdr->cookie != MEMAREA_OBJECT_HEADER_ALLOCATED_COOKIE) || + (status == COOKIE_EXPECT_STATUS_VALID && + (hdr->cookie != MEMAREA_OBJECT_HEADER_AVAILABLE_COOKIE && + hdr->cookie != MEMAREA_OBJECT_HEADER_ALLOCATED_COOKIE)); + tlr_fail = (tlr->cookie != MEMAREA_OBJECT_TRAILER_COOKIE); + if (!hdr_fail && !tlr_fail) + return; + + rte_panic("MEMAREA: %s check cookies failed! 
addr-%p header-cookie<0x%" PRIx64 " %s> trailer-cookie<0x%" PRIx64 " %s>\n", + ma->init.name, RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr)), + hdr->cookie, str[hdr_fail], tlr->cookie, str[tlr_fail]); +#else + RTE_SET_USED(ma); + RTE_SET_USED(hdr); + RTE_SET_USED(status); +#endif +} + +static inline void +memarea_split_object(struct rte_memarea *ma, struct memarea_objhdr *hdr, size_t alloc_sz) +{ + struct memarea_objhdr *split_hdr; + + split_hdr = MEMAREA_SPLIT_OBJECT_GET_HEADER(hdr, alloc_sz); + memarea_set_cookie(split_hdr, COOKIE_TARGET_STATUS_NEW_AVAILABLE); + TAILQ_INSERT_AFTER(&ma->obj_list, hdr, split_hdr, obj_next); + TAILQ_INSERT_AFTER(&ma->avail_list, hdr, split_hdr, avail_next); +} + +void * +rte_memarea_alloc(struct rte_memarea *ma, size_t size) +{ + size_t align_sz = RTE_ALIGN(size, MEMAREA_OBJECT_SIZE_ALIGN); + struct memarea_objhdr *hdr; + size_t avail_sz; + void *ptr = NULL; + + if (ma == NULL || size == 0 || align_sz < size) { + rte_errno = EINVAL; + return ptr; + } + + memarea_lock(ma); + + /** traverse every available object, return the first satisfied one. */ + TAILQ_FOREACH(hdr, &ma->avail_list, avail_next) { + /** 1st: check whether the object size meets. */ + memarea_check_cookie(ma, hdr, COOKIE_EXPECT_STATUS_AVAILABLE); + avail_sz = MEMAREA_OBJECT_GET_SIZE(hdr); + if (avail_sz < align_sz) + continue; + + /** 2nd: if the object size is too long, a new object can be split. */ + if (avail_sz - align_sz > MEMAREA_SPLIT_OBJECT_MIN_SIZE) + memarea_split_object(ma, hdr, align_sz); + + /** 3rd: allocate successful. */ + TAILQ_REMOVE(&ma->avail_list, hdr, avail_next); + MEMAREA_OBJECT_MARK_ALLOCATED(hdr); + memarea_set_cookie(hdr, COOKIE_TARGET_STATUS_ALLOCATED); + + ptr = RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr)); + break; + } + + memarea_unlock(ma); + + if (ptr == NULL) + rte_errno = ENOMEM; + return ptr; +} + +static inline void +memarea_merge_object(struct rte_memarea *ma, struct memarea_objhdr *curr, + struct memarea_objhdr *next) +{ + RTE_SET_USED(curr); + TAILQ_REMOVE(&ma->obj_list, next, obj_next); + TAILQ_REMOVE(&ma->avail_list, next, avail_next); + memarea_set_cookie(next, COOKIE_TARGET_STATUS_CLEARED); +} + +void +rte_memarea_free(struct rte_memarea *ma, void *ptr) +{ + struct memarea_objhdr *hdr, *prev, *next; + + if (ma == NULL || ptr == NULL) { + rte_errno = EINVAL; + return; + } + + hdr = RTE_PTR_SUB(ptr, sizeof(struct memarea_objhdr)); + if (!MEMAREA_OBJECT_IS_ALLOCATED(hdr)) { + RTE_MEMAREA_LOG(ERR, "detect invalid object in %s!", ma->init.name); + rte_errno = EFAULT; + return; + } + memarea_check_cookie(ma, hdr, COOKIE_EXPECT_STATUS_ALLOCATED); + + memarea_lock(ma); + + /** 1st: add to avail list. */ + TAILQ_INSERT_HEAD(&ma->avail_list, hdr, avail_next); + memarea_set_cookie(hdr, COOKIE_TARGET_STATUS_AVAILABLE); + + /** 2nd: merge if previous object is avail. */ + prev = TAILQ_PREV(hdr, memarea_objhdr_list, obj_next); + if (prev != NULL && !MEMAREA_OBJECT_IS_ALLOCATED(prev)) { + memarea_check_cookie(ma, prev, COOKIE_EXPECT_STATUS_AVAILABLE); + memarea_merge_object(ma, prev, hdr); + hdr = prev; + } + + /** 3rd: merge if next object is avail. 
*/ + next = TAILQ_NEXT(hdr, obj_next); + if (next != NULL && !MEMAREA_OBJECT_IS_ALLOCATED(next)) { + memarea_check_cookie(ma, next, COOKIE_EXPECT_STATUS_AVAILABLE); + memarea_merge_object(ma, hdr, next); + } + + memarea_unlock(ma); +} diff --git a/lib/memarea/rte_memarea.h b/lib/memarea/rte_memarea.h index 1d4381efd7..bb1bd5bae5 100644 --- a/lib/memarea/rte_memarea.h +++ b/lib/memarea/rte_memarea.h @@ -134,6 +134,52 @@ struct rte_memarea *rte_memarea_create(const struct rte_memarea_param *init); __rte_experimental void rte_memarea_destroy(struct rte_memarea *ma); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Allocate memory from memarea. + * + * Allocate one memory object from the memarea. + * + * @param ma + * The pointer of memarea. + * @param size + * The memory size to be allocated. + * + * @return + * - NULL on error. Not enough memory, or invalid arguments (see the + * rte_errno). + * - Otherwise, the pointer to the allocated object. + * + * @note The memory allocated is not guaranteed to be zeroed. + */ +__rte_experimental +void *rte_memarea_alloc(struct rte_memarea *ma, size_t size); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Free memory to memarea. + * + * Free one memory object to the memarea. + * @note The memory object must have been returned by a previous call to + * rte_memarea_alloc(), it must be freed to the same memarea which previous + * allocated from. The behaviour of rte_memarea_free() is undefined if the + * memarea or pointer does not match these requirements. + * + * @param ma + * The pointer of memarea. If the ma is NULL, the function does nothing. + * @param ptr + * The pointer of memory object which need be freed. If the pointer is NULL, + * the function does nothing. + * + * @note The rte_errno is set if free failed. 
+ */ +__rte_experimental +void rte_memarea_free(struct rte_memarea *ma, void *ptr); + #ifdef __cplusplus } #endif diff --git a/lib/memarea/version.map b/lib/memarea/version.map index f36a04d7cf..effbd0b488 100644 --- a/lib/memarea/version.map +++ b/lib/memarea/version.map @@ -1,8 +1,10 @@ EXPERIMENTAL { global: + rte_memarea_alloc; rte_memarea_create; rte_memarea_destroy; + rte_memarea_free; local: *; }; From patchwork Thu Jul 20 09:22:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 129661 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C9BFE42EBF; Thu, 20 Jul 2023 11:31:52 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CED1142D44; Thu, 20 Jul 2023 11:31:18 +0200 (CEST) Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by mails.dpdk.org (Postfix) with ESMTP id 4A4AD410FB for ; Thu, 20 Jul 2023 11:31:10 +0200 (CEST) Received: from dggpeml100024.china.huawei.com (unknown [172.30.72.57]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4R66qM01YrzrRqV; Thu, 20 Jul 2023 17:30:23 +0800 (CST) Received: from localhost.localdomain (10.50.163.32) by dggpeml100024.china.huawei.com (7.185.36.115) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.27; Thu, 20 Jul 2023 17:31:08 +0800 From: Chengwen Feng To: , CC: , , , , , , Subject: [PATCH v19 4/6] test/memarea: support alloc and free API test Date: Thu, 20 Jul 2023 09:22:52 +0000 Message-ID: <20230720092254.54157-5-fengchengwen@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230720092254.54157-1-fengchengwen@huawei.com> References: <20220721044648.6817-1-fengchengwen@huawei.com> <20230720092254.54157-1-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.50.163.32] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml100024.china.huawei.com (7.185.36.115) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch supports rte_memarea_alloc() and rte_memarea_free() API test. 
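As context for the tests in this patch, a short allocation sketch using the two APIs introduced in the previous patch; the 256-byte size and the error-handling style are illustrative, not part of the series:

    #include <stdio.h>
    #include <string.h>
    #include <rte_errno.h>
    #include <rte_memarea.h>

    static int memarea_example_alloc_free(struct rte_memarea *ma)
    {
        void *obj = rte_memarea_alloc(ma, 256);   /* size is rounded up to an 8-byte multiple */

        if (obj == NULL) {
            fprintf(stderr, "alloc failed: %s\n", rte_strerror(rte_errno));
            return -1;
        }

        memset(obj, 0, 256);            /* the returned memory is not zeroed */
        rte_memarea_free(ma, obj);      /* must be freed back to the same memarea */
        return 0;
    }
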
Signed-off-by: Chengwen Feng Reviewed-by: Dongdong Liu Acked-by: Morten Brørup Acked-by: Anatoly Burakov --- app/test/test_memarea.c | 222 +++++++++++++++++++++++++++++++++++++++- 1 file changed, 221 insertions(+), 1 deletion(-) diff --git a/app/test/test_memarea.c b/app/test/test_memarea.c index 6078c93a16..805fb82d08 100644 --- a/app/test/test_memarea.c +++ b/app/test/test_memarea.c @@ -38,6 +38,12 @@ test_memarea_init_param(struct rte_memarea_param *init) init->mt_safe = 1; } +static void +test_memarea_fill_region(void *ptr, size_t size) +{ + memset(ptr, 0xff, size); +} + static int test_memarea_create_bad_param(void) { @@ -120,7 +126,7 @@ test_memarea_create_bad_param(void) static int test_memarea_create_destroy(void) { - struct rte_memarea *ma; + struct rte_memarea *ma, *src_ma; struct rte_memarea_param init; rte_errno = 0; @@ -140,6 +146,215 @@ test_memarea_create_destroy(void) TEST_ASSERT(ma != NULL, "Memarea creation failed"); rte_memarea_destroy(ma); + /* test for create with another memarea */ + test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_LIBC; + src_ma = rte_memarea_create(&init); + TEST_ASSERT(src_ma != NULL, "Memarea creation failed"); + test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_MEMAREA; + init.total_sz = init.total_sz >> 1; + init.ma.src = src_ma; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma != NULL, "Memarea creation failed"); + rte_memarea_destroy(ma); + rte_memarea_destroy(src_ma); + + TEST_ASSERT(rte_errno == 0, "Expected ZERO"); + + return TEST_SUCCESS; +} + +static int +test_memarea_alloc_bad_param(void) +{ + struct rte_memarea_param init; + struct rte_memarea *ma; + size_t size; + void *ptr; + + test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_LIBC; + init.total_sz = MEMAREA_TEST_DEFAULT_SIZE; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma != NULL, "Memarea creation failed"); + + /* test for invalid ma */ + rte_errno = 0; + ptr = rte_memarea_alloc(NULL, 1); + TEST_ASSERT(ptr == NULL, "Memarea allocation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + /* test for invalid size (size = 0) */ + rte_errno = 0; + ptr = rte_memarea_alloc(ma, 0); + TEST_ASSERT(ptr == NULL, "Memarea allocation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + /* test for invalid size (size rewind) */ + rte_errno = 0; + memset(&size, 0xff, sizeof(size)); + ptr = rte_memarea_alloc(ma, size); + TEST_ASSERT(ptr == NULL, "Memarea allocation expect fail"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + rte_memarea_destroy(ma); + + return TEST_SUCCESS; +} + +static int +test_memarea_free_bad_param(void) +{ + struct rte_memarea_param init; + struct rte_memarea *ma; + void *ptr; + + test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_LIBC; + init.total_sz = MEMAREA_TEST_DEFAULT_SIZE; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma != NULL, "Memarea creation failed"); + ptr = rte_memarea_alloc(ma, 1); + TEST_ASSERT(ptr != NULL, "Memarea allocation failed"); + test_memarea_fill_region(ptr, 1); + + /* test for invalid ma */ + rte_errno = 0; + rte_memarea_free(NULL, ptr); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + /* test for invalid ptr */ + rte_errno = 0; + rte_memarea_free(ma, NULL); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + rte_memarea_destroy(ma); + + return TEST_SUCCESS; +} + +static int +test_memarea_alloc_fail(void) +{ + struct rte_memarea_param init; + struct rte_memarea *ma; + void *ptr[2]; + + 
test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_LIBC; + init.total_sz = MEMAREA_TEST_DEFAULT_SIZE; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma != NULL, "Memarea creation failed"); + + /* test alloc fail with big size */ + rte_errno = 0; + ptr[0] = rte_memarea_alloc(ma, MEMAREA_TEST_DEFAULT_SIZE); + TEST_ASSERT(ptr[0] == NULL, "Memarea allocation expect fail"); + TEST_ASSERT(rte_errno == ENOMEM, "Expected ENOMEM"); + + /* test alloc fail because no memory */ + ptr[0] = rte_memarea_alloc(ma, MEMAREA_TEST_DEFAULT_SIZE >> 1); + TEST_ASSERT(ptr[0] != NULL, "Memarea allocation failed"); + test_memarea_fill_region(ptr[0], MEMAREA_TEST_DEFAULT_SIZE >> 1); + rte_errno = 0; + ptr[1] = rte_memarea_alloc(ma, MEMAREA_TEST_DEFAULT_SIZE >> 1); + TEST_ASSERT(ptr[1] == NULL, "Memarea allocation expect fail"); + TEST_ASSERT(rte_errno == ENOMEM, "Expected ENOMEM"); + rte_memarea_free(ma, ptr[0]); + + /* test alloc fail when second fail */ + ptr[0] = rte_memarea_alloc(ma, MEMAREA_TEST_DEFAULT_SIZE >> 1); + TEST_ASSERT(ptr[0] != NULL, "Memarea allocation failed"); + test_memarea_fill_region(ptr[0], MEMAREA_TEST_DEFAULT_SIZE >> 1); + rte_errno = 0; + ptr[1] = rte_memarea_alloc(ma, MEMAREA_TEST_DEFAULT_SIZE >> 1); + TEST_ASSERT(ptr[1] == NULL, "Memarea allocation expect fail"); + TEST_ASSERT(rte_errno == ENOMEM, "Expected ENOMEM"); + rte_memarea_free(ma, ptr[0]); + ptr[1] = rte_memarea_alloc(ma, MEMAREA_TEST_DEFAULT_SIZE >> 1); + TEST_ASSERT(ptr[1] != NULL, "Memarea allocation failed"); + test_memarea_fill_region(ptr[1], MEMAREA_TEST_DEFAULT_SIZE >> 1); + rte_memarea_free(ma, ptr[1]); + + rte_memarea_destroy(ma); + + return TEST_SUCCESS; +} + +static int +test_memarea_free_fail(void) +{ + struct rte_memarea_param init; + struct rte_memarea *ma; + void *ptr; + + test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_LIBC; + init.total_sz = MEMAREA_TEST_DEFAULT_SIZE; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma != NULL, "Memarea creation failed"); + + /* test repeat free */ + rte_errno = 0; + ptr = rte_memarea_alloc(ma, MEMAREA_TEST_DEFAULT_SIZE >> 1); + TEST_ASSERT(ptr != NULL, "Memarea allocation failed"); + test_memarea_fill_region(ptr, MEMAREA_TEST_DEFAULT_SIZE >> 1); + rte_memarea_free(ma, ptr); + TEST_ASSERT(rte_errno == 0, "Expected Zero"); + rte_memarea_free(ma, ptr); + TEST_ASSERT(rte_errno == EFAULT, "Expected EFAULT"); + + rte_memarea_destroy(ma); + + return TEST_SUCCESS; +} + +static int +test_memarea_alloc_free(void) +{ +#define ALLOC_MAX_NUM 8 + struct rte_memarea_param init; + void *ptr[ALLOC_MAX_NUM]; + struct rte_memarea *ma; + int i; + + test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_LIBC; + init.total_sz = MEMAREA_TEST_DEFAULT_SIZE; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma != NULL, "Memarea creation failed"); + memset(ptr, 0, sizeof(ptr)); + + rte_errno = 0; + + /* test random alloc and free */ + for (i = 0; i < ALLOC_MAX_NUM; i++) { + ptr[i] = rte_memarea_alloc(ma, 1); + TEST_ASSERT(ptr[i] != NULL, "Memarea allocation failed"); + test_memarea_fill_region(ptr[i], 1); + } + + /* test merge left */ + rte_memarea_free(ma, ptr[0]); + rte_memarea_free(ma, ptr[1]); + + /* test merge right */ + rte_memarea_free(ma, ptr[7]); + rte_memarea_free(ma, ptr[6]); + + /* test merge left and right */ + rte_memarea_free(ma, ptr[3]); + rte_memarea_free(ma, ptr[2]); + + /* test merge remains */ + rte_memarea_free(ma, ptr[4]); + rte_memarea_free(ma, ptr[5]); + + TEST_ASSERT(rte_errno == 0, "Expected Zero"); + + rte_memarea_destroy(ma); + 
return TEST_SUCCESS; } @@ -150,6 +365,11 @@ static struct unit_test_suite memarea_test_suite = { .unit_test_cases = { TEST_CASE(test_memarea_create_bad_param), TEST_CASE(test_memarea_create_destroy), + TEST_CASE(test_memarea_alloc_bad_param), + TEST_CASE(test_memarea_free_bad_param), + TEST_CASE(test_memarea_alloc_fail), + TEST_CASE(test_memarea_free_fail), + TEST_CASE(test_memarea_alloc_free), TEST_CASES_END() /**< NULL terminate unit test array */ } From patchwork Thu Jul 20 09:22:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 129655 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3B43442EBF; Thu, 20 Jul 2023 11:31:11 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 00D6E40EE3; Thu, 20 Jul 2023 11:31:11 +0200 (CEST) Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by mails.dpdk.org (Postfix) with ESMTP id E31AC40DF5 for ; Thu, 20 Jul 2023 11:31:09 +0200 (CEST) Received: from dggpeml100024.china.huawei.com (unknown [172.30.72.56]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4R66mN40MtzNmVD; Thu, 20 Jul 2023 17:27:48 +0800 (CST) Received: from localhost.localdomain (10.50.163.32) by dggpeml100024.china.huawei.com (7.185.36.115) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.27; Thu, 20 Jul 2023 17:31:08 +0800 From: Chengwen Feng To: , CC: , , , , , , Subject: [PATCH v19 5/6] memarea: support dump API Date: Thu, 20 Jul 2023 09:22:53 +0000 Message-ID: <20230720092254.54157-6-fengchengwen@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230720092254.54157-1-fengchengwen@huawei.com> References: <20220721044648.6817-1-fengchengwen@huawei.com> <20230720092254.54157-1-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.50.163.32] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml100024.china.huawei.com (7.185.36.115) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch supports rte_memarea_dump() API which could be used for debug. Signed-off-by: Chengwen Feng Reviewed-by: Dongdong Liu Acked-by: Morten Brørup Acked-by: Anatoly Burakov --- doc/guides/prog_guide/memarea_lib.rst | 3 + lib/memarea/rte_memarea.c | 100 ++++++++++++++++++++++++++ lib/memarea/rte_memarea.h | 21 ++++++ lib/memarea/version.map | 1 + 4 files changed, 125 insertions(+) diff --git a/doc/guides/prog_guide/memarea_lib.rst b/doc/guides/prog_guide/memarea_lib.rst index 157baf3c7e..ef22294664 100644 --- a/doc/guides/prog_guide/memarea_lib.rst +++ b/doc/guides/prog_guide/memarea_lib.rst @@ -39,6 +39,9 @@ the memarea. The ``rte_memarea_free()`` function is used to free one memory object which allocated by ``rte_memarea_alloc()``. ++The ``rte_memarea_dump()`` function is used to dump the internal information ++of a memarea. 
+ Debug Mode ---------- diff --git a/lib/memarea/rte_memarea.c b/lib/memarea/rte_memarea.c index 7a35c875a7..d5d9a46736 100644 --- a/lib/memarea/rte_memarea.c +++ b/lib/memarea/rte_memarea.c @@ -361,3 +361,103 @@ rte_memarea_free(struct rte_memarea *ma, void *ptr) memarea_unlock(ma); } + +static const char * +memarea_source_name(enum rte_memarea_source source) +{ + if (source == RTE_MEMAREA_SOURCE_HEAP) + return "heap"; + else if (source == RTE_MEMAREA_SOURCE_LIBC) + return "libc"; + else if (source == RTE_MEMAREA_SOURCE_MEMAREA) + return "memarea"; + else + return "unknown"; +} + +static const char * +memarea_alg_name(enum rte_memarea_algorithm alg) +{ + if (alg == RTE_MEMAREA_ALGORITHM_NEXTFIT) + return "nextfit"; + else + return "unknown"; +} + +static void +memarea_dump_objects_brief(struct rte_memarea *ma, FILE *f) +{ + uint32_t total_objs = 0, total_avail_objs = 0; + struct memarea_objhdr *hdr; + size_t total_avail_sz = 0; + + TAILQ_FOREACH(hdr, &ma->obj_list, obj_next) { + if (hdr == ma->guard_hdr) + break; + memarea_check_cookie(ma, hdr, COOKIE_EXPECT_STATUS_VALID); + total_objs++; + if (!MEMAREA_OBJECT_IS_ALLOCATED(hdr)) { + total_avail_objs++; + total_avail_sz += MEMAREA_OBJECT_GET_SIZE(hdr); + } + } + fprintf(f, " total-objects: %u\n", total_objs); + fprintf(f, " total-avail-objects: %u\n", total_avail_objs); + fprintf(f, " total-avail-objects-size: 0x%zx\n", total_avail_sz); +} + +static void +memarea_dump_objects_detail(struct rte_memarea *ma, FILE *f) +{ + struct memarea_objhdr *hdr; + size_t offset; + void *ptr; + + fprintf(f, " objects:\n"); + TAILQ_FOREACH(hdr, &ma->obj_list, obj_next) { + if (hdr == ma->guard_hdr) + break; + memarea_check_cookie(ma, hdr, COOKIE_EXPECT_STATUS_VALID); + ptr = RTE_PTR_ADD(hdr, sizeof(struct memarea_objhdr)); + offset = RTE_PTR_DIFF(ptr, ma->area_base); +#ifdef RTE_LIBRTE_MEMAREA_DEBUG + fprintf(f, " %p off: 0x%zx size: 0x%zx %s\n", + ptr, offset, MEMAREA_OBJECT_GET_SIZE(hdr), + MEMAREA_OBJECT_IS_ALLOCATED(hdr) ? "allocated" : ""); +#else + fprintf(f, " off: 0x%zx size: 0x%zx %s\n", + offset, MEMAREA_OBJECT_GET_SIZE(hdr), + MEMAREA_OBJECT_IS_ALLOCATED(hdr) ? "allocated" : ""); +#endif + } +} + +int +rte_memarea_dump(struct rte_memarea *ma, FILE *f, bool dump_all) +{ + if (ma == NULL || f == NULL) { + rte_errno = EINVAL; + return -1; + } + + memarea_lock(ma); + fprintf(f, "memarea name: %s\n", ma->init.name); + fprintf(f, " source: %s\n", memarea_source_name(ma->init.source)); + if (ma->init.source == RTE_MEMAREA_SOURCE_HEAP) + fprintf(f, " heap-numa-socket: %d\n", ma->init.heap.socket_id); + else if (ma->init.source == RTE_MEMAREA_SOURCE_MEMAREA) + fprintf(f, " source-memarea: %s\n", ma->init.ma.src->init.name); + fprintf(f, " algorithm: %s\n", memarea_alg_name(ma->init.alg)); + fprintf(f, " total-size: 0x%zx\n", ma->init.total_sz); + fprintf(f, " mt-safe: %s\n", ma->init.mt_safe ? 
"yes" : "no"); +#ifdef RTE_LIBRTE_MEMAREA_DEBUG + fprintf(f, " area-base: %p\n", ma->area_base); + fprintf(f, " guard-header: %p\n", ma->guard_hdr); +#endif + memarea_dump_objects_brief(ma, f); + if (dump_all) + memarea_dump_objects_detail(ma, f); + memarea_unlock(ma); + + return 0; +} diff --git a/lib/memarea/rte_memarea.h b/lib/memarea/rte_memarea.h index bb1bd5bae5..fa57f6c455 100644 --- a/lib/memarea/rte_memarea.h +++ b/lib/memarea/rte_memarea.h @@ -180,6 +180,27 @@ void *rte_memarea_alloc(struct rte_memarea *ma, size_t size); __rte_experimental void rte_memarea_free(struct rte_memarea *ma, void *ptr); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Dump memarea. + * + * Dump one memarea. + * + * @param ma + * The pointer of memarea. + * @param f + * The file to write the output to. + * @param dump_all + * Indicate whether to dump the allocated and free memory objects information. + * + * @return + * 0 on success. Otherwise negative value is returned (the rte_errno is set). + */ +__rte_experimental +int rte_memarea_dump(struct rte_memarea *ma, FILE *f, bool dump_all); + #ifdef __cplusplus } #endif diff --git a/lib/memarea/version.map b/lib/memarea/version.map index effbd0b488..9513d91e0b 100644 --- a/lib/memarea/version.map +++ b/lib/memarea/version.map @@ -4,6 +4,7 @@ EXPERIMENTAL { rte_memarea_alloc; rte_memarea_create; rte_memarea_destroy; + rte_memarea_dump; rte_memarea_free; local: *; From patchwork Thu Jul 20 09:22:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: fengchengwen X-Patchwork-Id: 129659 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7297442EBF; Thu, 20 Jul 2023 11:31:35 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AD73442B8E; Thu, 20 Jul 2023 11:31:15 +0200 (CEST) Received: from szxga08-in.huawei.com (szxga08-in.huawei.com [45.249.212.255]) by mails.dpdk.org (Postfix) with ESMTP id AEDD840DF5 for ; Thu, 20 Jul 2023 11:31:10 +0200 (CEST) Received: from dggpeml100024.china.huawei.com (unknown [172.30.72.53]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4R66qN0KHdz18Lk7; Thu, 20 Jul 2023 17:30:24 +0800 (CST) Received: from localhost.localdomain (10.50.163.32) by dggpeml100024.china.huawei.com (7.185.36.115) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.27; Thu, 20 Jul 2023 17:31:08 +0800 From: Chengwen Feng To: , CC: , , , , , , Subject: [PATCH v19 6/6] test/memarea: support dump API test Date: Thu, 20 Jul 2023 09:22:54 +0000 Message-ID: <20230720092254.54157-7-fengchengwen@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230720092254.54157-1-fengchengwen@huawei.com> References: <20220721044648.6817-1-fengchengwen@huawei.com> <20230720092254.54157-1-fengchengwen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.50.163.32] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpeml100024.china.huawei.com (7.185.36.115) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch supports rte_memarea_dump() API test. 
Signed-off-by: Chengwen Feng Reviewed-by: Dongdong Liu Acked-by: Morten Brørup --- app/test/test_memarea.c | 52 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 52 insertions(+) diff --git a/app/test/test_memarea.c b/app/test/test_memarea.c index 805fb82d08..e793022aa2 100644 --- a/app/test/test_memarea.c +++ b/app/test/test_memarea.c @@ -353,6 +353,57 @@ test_memarea_alloc_free(void) TEST_ASSERT(rte_errno == 0, "Expected Zero"); + fprintf(stderr, "There should have no allocated object.\n"); + rte_memarea_dump(ma, stderr, true); + + rte_memarea_destroy(ma); + + return TEST_SUCCESS; +} + +static int +test_memarea_dump(void) +{ + struct rte_memarea_param init; + uint32_t alloced_num = 0; + struct rte_memarea *ma; + void *ptr; + int ret; + + test_memarea_init_param(&init); + init.source = RTE_MEMAREA_SOURCE_LIBC; + init.total_sz = MEMAREA_TEST_DEFAULT_SIZE; + ma = rte_memarea_create(&init); + TEST_ASSERT(ma != NULL, "Memarea creation failed"); + + /* test for invalid parameters */ + rte_errno = 0; + ret = rte_memarea_dump(NULL, stderr, false); + TEST_ASSERT(ret == -1, "Expected -1"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + rte_errno = 0; + ret = rte_memarea_dump(ma, NULL, false); + TEST_ASSERT(ret == -1, "Expected -1"); + TEST_ASSERT(rte_errno == EINVAL, "Expected EINVAL"); + + /* test for dump */ + ptr = rte_memarea_alloc(ma, 1); + TEST_ASSERT(ptr != NULL, "Memarea allocation failed"); + alloced_num++; + ptr = rte_memarea_alloc(ma, 1); + TEST_ASSERT(ptr != NULL, "Memarea allocation failed"); + alloced_num++; + ptr = rte_memarea_alloc(ma, 1); + TEST_ASSERT(ptr != NULL, "Memarea allocation failed"); + alloced_num++; + ptr = rte_memarea_alloc(ma, MEMAREA_TEST_DEFAULT_SIZE); + TEST_ASSERT(ptr == NULL, "Memarea allocation expect fail"); + ptr = rte_memarea_alloc(ma, MEMAREA_TEST_DEFAULT_SIZE); + TEST_ASSERT(ptr == NULL, "Memarea allocation expect fail"); + fprintf(stderr, "There should have %u allocated object.\n", alloced_num); + ret = rte_memarea_dump(ma, stderr, true); + TEST_ASSERT(ret == 0, "Memarea dump failed"); + rte_memarea_destroy(ma); return TEST_SUCCESS; @@ -370,6 +421,7 @@ static struct unit_test_suite memarea_test_suite = { TEST_CASE(test_memarea_alloc_fail), TEST_CASE(test_memarea_free_fail), TEST_CASE(test_memarea_alloc_free), + TEST_CASE(test_memarea_dump), TEST_CASES_END() /**< NULL terminate unit test array */ }
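
To close out the series notes: the nested-source path covered by test_memarea_create_destroy() above can be summarized with a sketch like the following, where one memarea hands out the region backing another; the name and sizes are illustrative:

    #include <stdio.h>
    #include <string.h>
    #include <rte_memarea.h>

    static struct rte_memarea *memarea_example_nested(struct rte_memarea *parent)
    {
        struct rte_memarea_param init;

        memset(&init, 0, sizeof(init));
        snprintf(init.name, sizeof(init.name), "child");
        init.source = RTE_MEMAREA_SOURCE_MEMAREA;   /* carve the region out of another memarea */
        init.alg = RTE_MEMAREA_ALGORITHM_NEXTFIT;
        init.total_sz = 2048;                       /* still at least 1024 bytes */
        init.ma.src = parent;                       /* parent must outlive the child */
        return rte_memarea_create(&init);           /* NULL with rte_errno set on failure */
    }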