[v2,1/6] eal: add static per-lcore memory allocation facility
Commit Message
Introduce DPDK per-lcore id variables, or lcore variables for short.
An lcore variable has one value for every current and future lcore
id-equipped thread.
The primary <rte_lcore_var.h> use case is for statically allocating
small, frequently-accessed data structures, for which one instance
should exist for each lcore.
Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but decouple the values' lifetime from that of the
threads.
Lcore variables are also similar in functionality to the FreeBSD
kernel's DPCPU_*() family of macros and the associated build-time
machinery. DPCPU uses linker scripts, which effectively prevents the
reuse of its otherwise seemingly viable approach.
The currently prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data as an RTE_MAX_LCORE-sized
array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
lcore variables over this approach is that data related to the same
lcore is now close (spatially, in memory), rather than data used by
the same module, which in turn avoids excessive use of padding and
polluting caches with unused data.
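To illustrate the difference, here is a rough sketch only, borrowing the
foo_lcore_state example from the <rte_lcore_var.h> documentation; the "foo"
module, its fields and function names are made up for the example:

/* Prevailing pattern: a per-module RTE_MAX_LCORE-sized array of
 * cache-aligned, RTE_CACHE_GUARDed structs.
 */
struct __rte_cache_aligned foo_lcore_state {
	int a;
	long b;
	RTE_CACHE_GUARD;
};

static struct foo_lcore_state foo_states[RTE_MAX_LCORE];

/* With an lcore variable: no per-entry padding is needed, and the values
 * belonging to the same lcore id end up spatially close to other modules'
 * values for that lcore id.
 */
static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, foo_lcore_states);

RTE_INIT(foo_init)
{
	RTE_LCORE_VAR_ALLOC(foo_lcore_states);
}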
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
--
PATCH v2:
* Add Windows support. (Morten Brørup)
* Fix lcore variables API index reference. (Morten Brørup)
* Various improvements of the API documentation. (Morten Brørup)
* Elimination of unused symbol in version.map. (Morten Brørup)
PATCH:
* Update MAINTAINERS and release notes.
* Stop covering included files in extern "C" {}.
RFC v6:
* Include <stdlib.h> to get aligned_alloc().
* Tweak documentation (grammar).
* Provide API-level guarantees that lcore variable values take on an
initial value of zero.
* Fix misplaced __rte_cache_aligned in the API doc example.
RFC v5:
* In Doxygen, consistently use @<cmd> (and not \<cmd>).
* The RTE_LCORE_VAR_GET() and SET() convenience access macros
covered an uncommon use case, where the lcore value is of a
primitive type rather than a struct, and have thus been eliminated
from the API. (Morten Brørup)
* In the wake of the GET()/SET() removal, rename RTE_LCORE_VAR_PTR()
to RTE_LCORE_VAR_VALUE().
* The underscores are removed from __rte_lcore_var_lcore_ptr() to
signal that this function is a part of the public API.
* Macro arguments are documented.
RFC v4:
* Replace large static array with libc heap-allocated memory. One
implication of this change is that there no longer exists a fixed upper
bound for the total amount of memory used by lcore variables.
RTE_MAX_LCORE_VAR has changed meaning, and now represents the
maximum size of any individual lcore variable value.
* Fix issues in example. (Morten Brørup)
* Improve access macro type checking. (Morten Brørup)
* Refer to the lcore variable handle as "handle" and not "name" in
various macros.
* Document lack of thread safety in rte_lcore_var_alloc().
* Provide API-level assurance that the lcore variable handle is
always non-NULL, to allow applications to use NULL to mean
"not yet allocated".
* Note zero-sized allocations are not allowed.
* Give API-level guarantee the lcore variable values are zeroed.
RFC v3:
* Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
* Update example to reflect FOREACH macro name change (in RFC v2).
RFC v2:
* Use alignof to derive alignment requirements. (Morten Brørup)
* Change name of FOREACH to make it distinct from <rte_lcore.h>'s
*per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
* Allow user-specified alignment, but limit max to cache line size.
---
MAINTAINERS | 6 +
config/rte_config.h | 1 +
doc/api/doxy-api-index.md | 1 +
doc/guides/rel_notes/release_24_11.rst | 14 +
lib/eal/common/eal_common_lcore_var.c | 78 +++++
lib/eal/common/meson.build | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_lcore_var.h | 385 +++++++++++++++++++++++++
lib/eal/version.map | 2 +
9 files changed, 489 insertions(+)
create mode 100644 lib/eal/common/eal_common_lcore_var.c
create mode 100644 lib/eal/include/rte_lcore_var.h
Comments
On 2024/9/12 1:04, Mattias Rönnblom wrote:
> Introduce DPDK per-lcore id variables, or lcore variables for short.
>
> An lcore variable has one value for every current and future lcore
> id-equipped thread.
>
> The primary <rte_lcore_var.h> use case is for statically allocating
> small, frequently-accessed data structures, for which one instance
> should exist for each lcore.
>
> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> _Thread_local), but decoupling the values' life time with that of the
> threads.
>
> Lcore variables are also similar in terms of functionality provided by
> FreeBSD kernel's DPCPU_*() family of macros and the associated
> build-time machinery. DPCPU uses linker scripts, which effectively
> prevents the reuse of its, otherwise seemingly viable, approach.
>
> The currently-prevailing way to solve the same problem as lcore
> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> lcore variables over this approach is that data related to the same
> lcore now is close (spatially, in memory), rather than data used by
> the same module, which in turn avoid excessive use of padding,
> polluting caches with unused data.
>
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>
> --
>
> PATCH v2:
> * Add Windows support. (Morten Brørup)
> * Fix lcore variables API index reference. (Morten Brørup)
> * Various improvements of the API documentation. (Morten Brørup)
> * Elimination of unused symbol in version.map. (Morten Brørup)
these history could move to the cover letter.
>
> PATCH:
> * Update MAINTAINERS and release notes.
> * Stop covering included files in extern "C" {}.
>
> RFC v6:
> * Include <stdlib.h> to get aligned_alloc().
> * Tweak documentation (grammar).
> * Provide API-level guarantees that lcore variable values take on an
> initial value of zero.
> * Fix misplaced __rte_cache_aligned in the API doc example.
>
> RFC v5:
> * In Doxygen, consistenly use @<cmd> (and not \<cmd>).
> * The RTE_LCORE_VAR_GET() and SET() convience access macros
> covered an uncommon use case, where the lcore value is of a
> primitive type, rather than a struct, and is thus eliminated
> from the API. (Morten Brørup)
> * In the wake up GET()/SET() removeal, rename RTE_LCORE_VAR_PTR()
> RTE_LCORE_VAR_VALUE().
> * The underscores are removed from __rte_lcore_var_lcore_ptr() to
> signal that this function is a part of the public API.
> * Macro arguments are documented.
>
> RFV v4:
> * Replace large static array with libc heap-allocated memory. One
> implication of this change is there no longer exists a fixed upper
> bound for the total amount of memory used by lcore variables.
> RTE_MAX_LCORE_VAR has changed meaning, and now represent the
> maximum size of any individual lcore variable value.
> * Fix issues in example. (Morten Brørup)
> * Improve access macro type checking. (Morten Brørup)
> * Refer to the lcore variable handle as "handle" and not "name" in
> various macros.
> * Document lack of thread safety in rte_lcore_var_alloc().
> * Provide API-level assurance the lcore variable handle is
> always non-NULL, to all applications to use NULL to mean
> "not yet allocated".
> * Note zero-sized allocations are not allowed.
> * Give API-level guarantee the lcore variable values are zeroed.
>
> RFC v3:
> * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
> * Update example to reflect FOREACH macro name change (in RFC v2).
>
> RFC v2:
> * Use alignof to derive alignment requirements. (Morten Brørup)
> * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
> *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
> * Allow user-specified alignment, but limit max to cache line size.
> ---
> MAINTAINERS | 6 +
> config/rte_config.h | 1 +
> doc/api/doxy-api-index.md | 1 +
> doc/guides/rel_notes/release_24_11.rst | 14 +
> lib/eal/common/eal_common_lcore_var.c | 78 +++++
> lib/eal/common/meson.build | 1 +
> lib/eal/include/meson.build | 1 +
> lib/eal/include/rte_lcore_var.h | 385 +++++++++++++++++++++++++
> lib/eal/version.map | 2 +
> 9 files changed, 489 insertions(+)
> create mode 100644 lib/eal/common/eal_common_lcore_var.c
> create mode 100644 lib/eal/include/rte_lcore_var.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index c5a703b5c0..362d9a3f28 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
> F: lib/eal/common/rte_random.c
> F: app/test/test_rand_perf.c
>
> +Lcore Variables
> +M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> +F: lib/eal/include/rte_lcore_var.h
> +F: lib/eal/common/eal_common_lcore_var.c
> +F: app/test/test_lcore_var.c
> +
> ARM v7
> M: Wathsala Vithanage <wathsala.vithanage@arm.com>
> F: config/arm/
> diff --git a/config/rte_config.h b/config/rte_config.h
> index dd7bb0d35b..311692e498 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -41,6 +41,7 @@
> /* EAL defines */
> #define RTE_CACHE_GUARD_LINES 1
> #define RTE_MAX_HEAPS 32
> +#define RTE_MAX_LCORE_VAR 1048576
> #define RTE_MAX_MEMSEG_LISTS 128
> #define RTE_MAX_MEMSEG_PER_LIST 8192
> #define RTE_MAX_MEM_MB_PER_LIST 32768
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index f9f0300126..ed577f14ee 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -99,6 +99,7 @@ The public API headers are grouped by topics:
> [interrupts](@ref rte_interrupts.h),
> [launch](@ref rte_launch.h),
> [lcore](@ref rte_lcore.h),
> + [lcore variables](@ref rte_lcore_var.h),
> [per-lcore](@ref rte_per_lcore.h),
> [service cores](@ref rte_service.h),
> [keepalive](@ref rte_keepalive.h),
> diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
> index 0ff70d9057..a3884f7491 100644
> --- a/doc/guides/rel_notes/release_24_11.rst
> +++ b/doc/guides/rel_notes/release_24_11.rst
> @@ -55,6 +55,20 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* **Added EAL per-lcore static memory allocation facility.**
> +
> + Added EAL API <rte_lcore_var.h> for statically allocating small,
> + frequently-accessed data structures, for which one instance should
> + exist for each EAL thread and registered non-EAL thread.
> +
> + With lcore variables, data is organized spatially on a per-lcore id
> + basis, rather than per library or PMD, avoiding the need for cache
> + aligning (or RTE_CACHE_GUARDing) data structures, which in turn
> + reduces CPU cache internal fragmentation, improving performance.
> +
> + Lcore variables are similar to thread-local storage (TLS, e.g.,
> + C11 _Thread_local), but decoupling the values' life time from that
> + of the threads.
>
> Removed Items
> -------------
> diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
> new file mode 100644
> index 0000000000..309822039b
> --- /dev/null
> +++ b/lib/eal/common/eal_common_lcore_var.c
> @@ -0,0 +1,78 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2024 Ericsson AB
> + */
> +
> +#include <inttypes.h>
> +#include <stdlib.h>
> +
> +#ifdef RTE_EXEC_ENV_WINDOWS
> +#include <malloc.h>
> +#endif
> +
> +#include <rte_common.h>
> +#include <rte_debug.h>
> +#include <rte_log.h>
> +
> +#include <rte_lcore_var.h>
> +
> +#include "eal_private.h"
> +
> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> +
> +static void *lcore_buffer;
> +static size_t offset = RTE_MAX_LCORE_VAR;
> +
> +static void *
> +lcore_var_alloc(size_t size, size_t align)
> +{
> + void *handle;
> + void *value;
> +
> + offset = RTE_ALIGN_CEIL(offset, align);
> +
> + if (offset + size > RTE_MAX_LCORE_VAR) {
> +#ifdef RTE_EXEC_ENV_WINDOWS
> + lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
> + RTE_CACHE_LINE_SIZE);
> +#else
> + lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> + LCORE_BUFFER_SIZE);
> +#endif
> + RTE_VERIFY(lcore_buffer != NULL);
> +
> + offset = 0;
> + }
> +
> + handle = RTE_PTR_ADD(lcore_buffer, offset);
> +
> + offset += size;
> +
> + RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> + memset(value, 0, size);
> +
> + EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
> + "%"PRIuPTR"-byte alignment", size, align);
Currently the data is allocated with a libc function; I think that's mainly to support the INIT macros, which run before main().
But it will introduce the following problems:
1\ It can't benefit from huge pages. This patch may reserve many 1 MB regions, one per lcore; if we could place them in huge pages, it would reduce the TLB miss rate, especially for frequently accessed data.
2\ It can't be shared across multiple processes. Much of the current lcore data also doesn't support multi-process, but I think it's worth doing, and it would help with service recovery when a secondary process fails and reboots.
...
On 2024-09-12 04:33, fengchengwen wrote:
> On 2024/9/12 1:04, Mattias Rönnblom wrote:
>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>
>> An lcore variable has one value for every current and future lcore
>> id-equipped thread.
>>
>> The primary <rte_lcore_var.h> use case is for statically allocating
>> small, frequently-accessed data structures, for which one instance
>> should exist for each lcore.
>>
>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>> _Thread_local), but decoupling the values' life time with that of the
>> threads.
>>
>> Lcore variables are also similar in terms of functionality provided by
>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>> build-time machinery. DPCPU uses linker scripts, which effectively
>> prevents the reuse of its, otherwise seemingly viable, approach.
>>
>> The currently-prevailing way to solve the same problem as lcore
>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>> lcore variables over this approach is that data related to the same
>> lcore now is close (spatially, in memory), rather than data used by
>> the same module, which in turn avoid excessive use of padding,
>> polluting caches with unused data.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>>
>> --
>>
>> PATCH v2:
>> * Add Windows support. (Morten Brørup)
>> * Fix lcore variables API index reference. (Morten Brørup)
>> * Various improvements of the API documentation. (Morten Brørup)
>> * Elimination of unused symbol in version.map. (Morten Brørup)
>
> these history could move to the cover letter.
>
>>
>> PATCH:
>> * Update MAINTAINERS and release notes.
>> * Stop covering included files in extern "C" {}.
>>
>> RFC v6:
>> * Include <stdlib.h> to get aligned_alloc().
>> * Tweak documentation (grammar).
>> * Provide API-level guarantees that lcore variable values take on an
>> initial value of zero.
>> * Fix misplaced __rte_cache_aligned in the API doc example.
>>
>> RFC v5:
>> * In Doxygen, consistenly use @<cmd> (and not \<cmd>).
>> * The RTE_LCORE_VAR_GET() and SET() convience access macros
>> covered an uncommon use case, where the lcore value is of a
>> primitive type, rather than a struct, and is thus eliminated
>> from the API. (Morten Brørup)
>> * In the wake up GET()/SET() removeal, rename RTE_LCORE_VAR_PTR()
>> RTE_LCORE_VAR_VALUE().
>> * The underscores are removed from __rte_lcore_var_lcore_ptr() to
>> signal that this function is a part of the public API.
>> * Macro arguments are documented.
>>
>> RFV v4:
>> * Replace large static array with libc heap-allocated memory. One
>> implication of this change is there no longer exists a fixed upper
>> bound for the total amount of memory used by lcore variables.
>> RTE_MAX_LCORE_VAR has changed meaning, and now represent the
>> maximum size of any individual lcore variable value.
>> * Fix issues in example. (Morten Brørup)
>> * Improve access macro type checking. (Morten Brørup)
>> * Refer to the lcore variable handle as "handle" and not "name" in
>> various macros.
>> * Document lack of thread safety in rte_lcore_var_alloc().
>> * Provide API-level assurance the lcore variable handle is
>> always non-NULL, to all applications to use NULL to mean
>> "not yet allocated".
>> * Note zero-sized allocations are not allowed.
>> * Give API-level guarantee the lcore variable values are zeroed.
>>
>> RFC v3:
>> * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
>> * Update example to reflect FOREACH macro name change (in RFC v2).
>>
>> RFC v2:
>> * Use alignof to derive alignment requirements. (Morten Brørup)
>> * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
>> *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
>> * Allow user-specified alignment, but limit max to cache line size.
>> ---
>> MAINTAINERS | 6 +
>> config/rte_config.h | 1 +
>> doc/api/doxy-api-index.md | 1 +
>> doc/guides/rel_notes/release_24_11.rst | 14 +
>> lib/eal/common/eal_common_lcore_var.c | 78 +++++
>> lib/eal/common/meson.build | 1 +
>> lib/eal/include/meson.build | 1 +
>> lib/eal/include/rte_lcore_var.h | 385 +++++++++++++++++++++++++
>> lib/eal/version.map | 2 +
>> 9 files changed, 489 insertions(+)
>> create mode 100644 lib/eal/common/eal_common_lcore_var.c
>> create mode 100644 lib/eal/include/rte_lcore_var.h
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index c5a703b5c0..362d9a3f28 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
>> F: lib/eal/common/rte_random.c
>> F: app/test/test_rand_perf.c
>>
>> +Lcore Variables
>> +M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> +F: lib/eal/include/rte_lcore_var.h
>> +F: lib/eal/common/eal_common_lcore_var.c
>> +F: app/test/test_lcore_var.c
>> +
>> ARM v7
>> M: Wathsala Vithanage <wathsala.vithanage@arm.com>
>> F: config/arm/
>> diff --git a/config/rte_config.h b/config/rte_config.h
>> index dd7bb0d35b..311692e498 100644
>> --- a/config/rte_config.h
>> +++ b/config/rte_config.h
>> @@ -41,6 +41,7 @@
>> /* EAL defines */
>> #define RTE_CACHE_GUARD_LINES 1
>> #define RTE_MAX_HEAPS 32
>> +#define RTE_MAX_LCORE_VAR 1048576
>> #define RTE_MAX_MEMSEG_LISTS 128
>> #define RTE_MAX_MEMSEG_PER_LIST 8192
>> #define RTE_MAX_MEM_MB_PER_LIST 32768
>> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
>> index f9f0300126..ed577f14ee 100644
>> --- a/doc/api/doxy-api-index.md
>> +++ b/doc/api/doxy-api-index.md
>> @@ -99,6 +99,7 @@ The public API headers are grouped by topics:
>> [interrupts](@ref rte_interrupts.h),
>> [launch](@ref rte_launch.h),
>> [lcore](@ref rte_lcore.h),
>> + [lcore variables](@ref rte_lcore_var.h),
>> [per-lcore](@ref rte_per_lcore.h),
>> [service cores](@ref rte_service.h),
>> [keepalive](@ref rte_keepalive.h),
>> diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
>> index 0ff70d9057..a3884f7491 100644
>> --- a/doc/guides/rel_notes/release_24_11.rst
>> +++ b/doc/guides/rel_notes/release_24_11.rst
>> @@ -55,6 +55,20 @@ New Features
>> Also, make sure to start the actual text at the margin.
>> =======================================================
>>
>> +* **Added EAL per-lcore static memory allocation facility.**
>> +
>> + Added EAL API <rte_lcore_var.h> for statically allocating small,
>> + frequently-accessed data structures, for which one instance should
>> + exist for each EAL thread and registered non-EAL thread.
>> +
>> + With lcore variables, data is organized spatially on a per-lcore id
>> + basis, rather than per library or PMD, avoiding the need for cache
>> + aligning (or RTE_CACHE_GUARDing) data structures, which in turn
>> + reduces CPU cache internal fragmentation, improving performance.
>> +
>> + Lcore variables are similar to thread-local storage (TLS, e.g.,
>> + C11 _Thread_local), but decoupling the values' life time from that
>> + of the threads.
>>
>> Removed Items
>> -------------
>> diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
>> new file mode 100644
>> index 0000000000..309822039b
>> --- /dev/null
>> +++ b/lib/eal/common/eal_common_lcore_var.c
>> @@ -0,0 +1,78 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) 2024 Ericsson AB
>> + */
>> +
>> +#include <inttypes.h>
>> +#include <stdlib.h>
>> +
>> +#ifdef RTE_EXEC_ENV_WINDOWS
>> +#include <malloc.h>
>> +#endif
>> +
>> +#include <rte_common.h>
>> +#include <rte_debug.h>
>> +#include <rte_log.h>
>> +
>> +#include <rte_lcore_var.h>
>> +
>> +#include "eal_private.h"
>> +
>> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
>> +
>> +static void *lcore_buffer;
>> +static size_t offset = RTE_MAX_LCORE_VAR;
>> +
>> +static void *
>> +lcore_var_alloc(size_t size, size_t align)
>> +{
>> + void *handle;
>> + void *value;
>> +
>> + offset = RTE_ALIGN_CEIL(offset, align);
>> +
>> + if (offset + size > RTE_MAX_LCORE_VAR) {
>> +#ifdef RTE_EXEC_ENV_WINDOWS
>> + lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
>> + RTE_CACHE_LINE_SIZE);
>> +#else
>> + lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
>> + LCORE_BUFFER_SIZE);
>> +#endif
>> + RTE_VERIFY(lcore_buffer != NULL);
>> +
>> + offset = 0;
>> + }
>> +
>> + handle = RTE_PTR_ADD(lcore_buffer, offset);
>> +
>> + offset += size;
>> +
>> + RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
>> + memset(value, 0, size);
>> +
>> + EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
>> + "%"PRIuPTR"-byte alignment", size, align);
>
> Currrent the data was malloc by libc function, I think it's mainly for such INIT macro which will be init before main.
> But it will introduce following problem:
> 1\ it can't benefit from huge-pages. this patch may reserved many 1MBs for each lcore, if we could place it in huge-pages it will reduce the TLB miss rate, especially it freq access data.
This mechanism is for small allocations, the sum of which is also
expected to be small (although the system won't break if they aren't).
If you have large allocations, you are better off using lazy huge page
allocations further down the initialization process. Otherwise, you will
end up using memory for RTE_MAX_LCORE instances, rather than the actual
lcore count, which could be substantially smaller.
But sure, everything else being equal, you could have used huge pages
for these lcore variable values. But everything isn't equal.
> 2\ it can't across multi-process. many of current lcore-data also don't support multi-process, but I think it worth do that, and it will help us to some service recovery when sub-process failed and reboot.
>
> ...
>
Not sure I think that's a downside. Further cementing that anti-pattern
into DPDK seems to be a bad idea to me.
Lcore variables don't *introduce* any of these issues, since the
mechanisms they're replacing also have these shortcomings (if you think
of them as such - I'm not sure I do).
On 2024/9/12 13:35, Mattias Rönnblom wrote:
> On 2024-09-12 04:33, fengchengwen wrote:
>> On 2024/9/12 1:04, Mattias Rönnblom wrote:
>>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>>
>>> An lcore variable has one value for every current and future lcore
>>> id-equipped thread.
>>>
>>> The primary <rte_lcore_var.h> use case is for statically allocating
>>> small, frequently-accessed data structures, for which one instance
>>> should exist for each lcore.
>>>
>>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>>> _Thread_local), but decoupling the values' life time with that of the
>>> threads.
>>>
>>> Lcore variables are also similar in terms of functionality provided by
>>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>>> build-time machinery. DPCPU uses linker scripts, which effectively
>>> prevents the reuse of its, otherwise seemingly viable, approach.
>>>
>>> The currently-prevailing way to solve the same problem as lcore
>>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
>>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>>> lcore variables over this approach is that data related to the same
>>> lcore now is close (spatially, in memory), rather than data used by
>>> the same module, which in turn avoid excessive use of padding,
>>> polluting caches with unused data.
>>>
>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>>>
>>> --
>>>
>>> PATCH v2:
>>> * Add Windows support. (Morten Brørup)
>>> * Fix lcore variables API index reference. (Morten Brørup)
>>> * Various improvements of the API documentation. (Morten Brørup)
>>> * Elimination of unused symbol in version.map. (Morten Brørup)
>>
>> these history could move to the cover letter.
>>
>>>
>>> PATCH:
>>> * Update MAINTAINERS and release notes.
>>> * Stop covering included files in extern "C" {}.
>>>
>>> RFC v6:
>>> * Include <stdlib.h> to get aligned_alloc().
>>> * Tweak documentation (grammar).
>>> * Provide API-level guarantees that lcore variable values take on an
>>> initial value of zero.
>>> * Fix misplaced __rte_cache_aligned in the API doc example.
>>>
>>> RFC v5:
>>> * In Doxygen, consistenly use @<cmd> (and not \<cmd>).
>>> * The RTE_LCORE_VAR_GET() and SET() convience access macros
>>> covered an uncommon use case, where the lcore value is of a
>>> primitive type, rather than a struct, and is thus eliminated
>>> from the API. (Morten Brørup)
>>> * In the wake up GET()/SET() removeal, rename RTE_LCORE_VAR_PTR()
>>> RTE_LCORE_VAR_VALUE().
>>> * The underscores are removed from __rte_lcore_var_lcore_ptr() to
>>> signal that this function is a part of the public API.
>>> * Macro arguments are documented.
>>>
>>> RFV v4:
>>> * Replace large static array with libc heap-allocated memory. One
>>> implication of this change is there no longer exists a fixed upper
>>> bound for the total amount of memory used by lcore variables.
>>> RTE_MAX_LCORE_VAR has changed meaning, and now represent the
>>> maximum size of any individual lcore variable value.
>>> * Fix issues in example. (Morten Brørup)
>>> * Improve access macro type checking. (Morten Brørup)
>>> * Refer to the lcore variable handle as "handle" and not "name" in
>>> various macros.
>>> * Document lack of thread safety in rte_lcore_var_alloc().
>>> * Provide API-level assurance the lcore variable handle is
>>> always non-NULL, to all applications to use NULL to mean
>>> "not yet allocated".
>>> * Note zero-sized allocations are not allowed.
>>> * Give API-level guarantee the lcore variable values are zeroed.
>>>
>>> RFC v3:
>>> * Replace use of GCC-specific alignof(<expression>) with alignof(<type>).
>>> * Update example to reflect FOREACH macro name change (in RFC v2).
>>>
>>> RFC v2:
>>> * Use alignof to derive alignment requirements. (Morten Brørup)
>>> * Change name of FOREACH to make it distinct from <rte_lcore.h>'s
>>> *per-EAL-thread* RTE_LCORE_FOREACH(). (Morten Brørup)
>>> * Allow user-specified alignment, but limit max to cache line size.
>>> ---
>>> MAINTAINERS | 6 +
>>> config/rte_config.h | 1 +
>>> doc/api/doxy-api-index.md | 1 +
>>> doc/guides/rel_notes/release_24_11.rst | 14 +
>>> lib/eal/common/eal_common_lcore_var.c | 78 +++++
>>> lib/eal/common/meson.build | 1 +
>>> lib/eal/include/meson.build | 1 +
>>> lib/eal/include/rte_lcore_var.h | 385 +++++++++++++++++++++++++
>>> lib/eal/version.map | 2 +
>>> 9 files changed, 489 insertions(+)
>>> create mode 100644 lib/eal/common/eal_common_lcore_var.c
>>> create mode 100644 lib/eal/include/rte_lcore_var.h
>>>
>>> diff --git a/MAINTAINERS b/MAINTAINERS
>>> index c5a703b5c0..362d9a3f28 100644
>>> --- a/MAINTAINERS
>>> +++ b/MAINTAINERS
>>> @@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
>>> F: lib/eal/common/rte_random.c
>>> F: app/test/test_rand_perf.c
>>> +Lcore Variables
>>> +M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>> +F: lib/eal/include/rte_lcore_var.h
>>> +F: lib/eal/common/eal_common_lcore_var.c
>>> +F: app/test/test_lcore_var.c
>>> +
>>> ARM v7
>>> M: Wathsala Vithanage <wathsala.vithanage@arm.com>
>>> F: config/arm/
>>> diff --git a/config/rte_config.h b/config/rte_config.h
>>> index dd7bb0d35b..311692e498 100644
>>> --- a/config/rte_config.h
>>> +++ b/config/rte_config.h
>>> @@ -41,6 +41,7 @@
>>> /* EAL defines */
>>> #define RTE_CACHE_GUARD_LINES 1
>>> #define RTE_MAX_HEAPS 32
>>> +#define RTE_MAX_LCORE_VAR 1048576
>>> #define RTE_MAX_MEMSEG_LISTS 128
>>> #define RTE_MAX_MEMSEG_PER_LIST 8192
>>> #define RTE_MAX_MEM_MB_PER_LIST 32768
>>> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
>>> index f9f0300126..ed577f14ee 100644
>>> --- a/doc/api/doxy-api-index.md
>>> +++ b/doc/api/doxy-api-index.md
>>> @@ -99,6 +99,7 @@ The public API headers are grouped by topics:
>>> [interrupts](@ref rte_interrupts.h),
>>> [launch](@ref rte_launch.h),
>>> [lcore](@ref rte_lcore.h),
>>> + [lcore variables](@ref rte_lcore_var.h),
>>> [per-lcore](@ref rte_per_lcore.h),
>>> [service cores](@ref rte_service.h),
>>> [keepalive](@ref rte_keepalive.h),
>>> diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
>>> index 0ff70d9057..a3884f7491 100644
>>> --- a/doc/guides/rel_notes/release_24_11.rst
>>> +++ b/doc/guides/rel_notes/release_24_11.rst
>>> @@ -55,6 +55,20 @@ New Features
>>> Also, make sure to start the actual text at the margin.
>>> =======================================================
>>> +* **Added EAL per-lcore static memory allocation facility.**
>>> +
>>> + Added EAL API <rte_lcore_var.h> for statically allocating small,
>>> + frequently-accessed data structures, for which one instance should
>>> + exist for each EAL thread and registered non-EAL thread.
>>> +
>>> + With lcore variables, data is organized spatially on a per-lcore id
>>> + basis, rather than per library or PMD, avoiding the need for cache
>>> + aligning (or RTE_CACHE_GUARDing) data structures, which in turn
>>> + reduces CPU cache internal fragmentation, improving performance.
>>> +
>>> + Lcore variables are similar to thread-local storage (TLS, e.g.,
>>> + C11 _Thread_local), but decoupling the values' life time from that
>>> + of the threads.
>>> Removed Items
>>> -------------
>>> diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
>>> new file mode 100644
>>> index 0000000000..309822039b
>>> --- /dev/null
>>> +++ b/lib/eal/common/eal_common_lcore_var.c
>>> @@ -0,0 +1,78 @@
>>> +/* SPDX-License-Identifier: BSD-3-Clause
>>> + * Copyright(c) 2024 Ericsson AB
>>> + */
>>> +
>>> +#include <inttypes.h>
>>> +#include <stdlib.h>
>>> +
>>> +#ifdef RTE_EXEC_ENV_WINDOWS
>>> +#include <malloc.h>
>>> +#endif
>>> +
>>> +#include <rte_common.h>
>>> +#include <rte_debug.h>
>>> +#include <rte_log.h>
>>> +
>>> +#include <rte_lcore_var.h>
>>> +
>>> +#include "eal_private.h"
>>> +
>>> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
>>> +
>>> +static void *lcore_buffer;
>>> +static size_t offset = RTE_MAX_LCORE_VAR;
>>> +
>>> +static void *
>>> +lcore_var_alloc(size_t size, size_t align)
>>> +{
>>> + void *handle;
>>> + void *value;
>>> +
>>> + offset = RTE_ALIGN_CEIL(offset, align);
>>> +
>>> + if (offset + size > RTE_MAX_LCORE_VAR) {
>>> +#ifdef RTE_EXEC_ENV_WINDOWS
>>> + lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
>>> + RTE_CACHE_LINE_SIZE);
>>> +#else
>>> + lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
>>> + LCORE_BUFFER_SIZE);
>>> +#endif
>>> + RTE_VERIFY(lcore_buffer != NULL);
>>> +
>>> + offset = 0;
>>> + }
>>> +
>>> + handle = RTE_PTR_ADD(lcore_buffer, offset);
>>> +
>>> + offset += size;
>>> +
>>> + RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
>>> + memset(value, 0, size);
>>> +
>>> + EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
>>> + "%"PRIuPTR"-byte alignment", size, align);
>>
>> Currrent the data was malloc by libc function, I think it's mainly for such INIT macro which will be init before main.
>> But it will introduce following problem:
>> 1\ it can't benefit from huge-pages. this patch may reserved many 1MBs for each lcore, if we could place it in huge-pages it will reduce the TLB miss rate, especially it freq access data.
>
> This mechanism is for small allocations, which the sum of is also expected to be small (although the system won't break if they aren't).
>
> If you have large allocations, you are better off using lazy huge page allocations further down the initialization process. Otherwise, you will end up using memory for RTE_MAX_LCORE instances, rather than the actual lcore count, which could be substantially smaller.
Yes, it may cost too much memory if allocated from hugepage memory.
>
> But sure, everything else being equal, you could have used huge pages for these lcore variable values. But everything isn't equal.
>
>> 2\ it can't across multi-process. many of current lcore-data also don't support multi-process, but I think it worth do that, and it will help us to some service recovery when sub-process failed and reboot.
>>
>> ...
>>
>
> Not sure I think that's a downside. Further cementing that anti-pattern into DPDK seems to be a bad idea to me.
>
> lcore variables doesn't *introduce* any of these issues, since the mechanisms it's replacing also have these shortcomings (if you think about them as such - I'm not sure I do).
Got it.
This feature is an enhancement of the current per-lcore data handling, bringing together scattered data from the point of view of a single core,
and currently it seems hard to extend it to support hugepage memory.
On Thu, Sep 12, 2024 at 11:05 AM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>
> On 2024-09-12 04:33, fengchengwen wrote:
> > On 2024/9/12 1:04, Mattias Rönnblom wrote:
> >> Introduce DPDK per-lcore id variables, or lcore variables for short.
> >>
> >> An lcore variable has one value for every current and future lcore
> >> id-equipped thread.
> >>
> >> The primary <rte_lcore_var.h> use case is for statically allocating
> >> small, frequently-accessed data structures, for which one instance
> >> should exist for each lcore.
> >>
> >> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> >> _Thread_local), but decoupling the values' life time with that of the
> >> threads.
> >>
> >> Lcore variables are also similar in terms of functionality provided by
> >> FreeBSD kernel's DPCPU_*() family of macros and the associated
> >> build-time machinery. DPCPU uses linker scripts, which effectively
> >> prevents the reuse of its, otherwise seemingly viable, approach.
> >>
> >> The currently-prevailing way to solve the same problem as lcore
> >> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> >> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> >> lcore variables over this approach is that data related to the same
> >> lcore now is close (spatially, in memory), rather than data used by
> >> the same module, which in turn avoid excessive use of padding,
> >> polluting caches with unused data.
> >>
> >> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> >>
> >> --
> >>
> >> +
> >> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> >> +
> >> +static void *lcore_buffer;
> >> +static size_t offset = RTE_MAX_LCORE_VAR;
> >> +
> >> +static void *
> >> +lcore_var_alloc(size_t size, size_t align)
> >> +{
> >> + void *handle;
> >> + void *value;
> >> +
> >> + offset = RTE_ALIGN_CEIL(offset, align);
> >> +
> >> + if (offset + size > RTE_MAX_LCORE_VAR) {
> >> +#ifdef RTE_EXEC_ENV_WINDOWS
> >> + lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
> >> + RTE_CACHE_LINE_SIZE);
> >> +#else
> >> + lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> >> + LCORE_BUFFER_SIZE);
> >> +#endif
> >> + RTE_VERIFY(lcore_buffer != NULL);
> >> +
> >> + offset = 0;
> >> + }
> >> +
> >> + handle = RTE_PTR_ADD(lcore_buffer, offset);
> >> +
> >> + offset += size;
> >> +
> >> + RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> >> + memset(value, 0, size);
> >> +
> >> + EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
> >> + "%"PRIuPTR"-byte alignment", size, align);
> >
> > Currrent the data was malloc by libc function, I think it's mainly for such INIT macro which will be init before main.
> > But it will introduce following problem:
> > 1\ it can't benefit from huge-pages. this patch may reserved many 1MBs for each lcore, if we could place it in huge-pages it will reduce the TLB miss rate, especially it freq access data.
>
> This mechanism is for small allocations, which the sum of is also
> expected to be small (although the system won't break if they aren't).
>
> If you have large allocations, you are better off using lazy huge page
> allocations further down the initialization process. Otherwise, you will
> end up using memory for RTE_MAX_LCORE instances, rather than the actual
> lcore count, which could be substantially smaller.
+ @Anatoly Burakov
If I am not wrong, DPDK's huge page memory allocator (rte_malloc()) may
have overhead similar to the glibc one. Meaning, hugepages are only allocated
when needed and existing space runs out.
If so, why not use rte_malloc() if it is available?
>
> But sure, everything else being equal, you could have used huge pages
> for these lcore variable values. But everything isn't equal.
>
> > 2\ it can't across multi-process. many of current lcore-data also don't support multi-process, but I think it worth do that, and it will help us to some service recovery when sub-process failed and reboot.
> >
> > ...
> >
>
> Not sure I think that's a downside. Further cementing that anti-pattern
> into DPDK seems to be a bad idea to me.
>
> lcore variables doesn't *introduce* any of these issues, since the
> mechanisms it's replacing also have these shortcomings (if you think
> about them as such - I'm not sure I do).
> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
Considering hugepages...
Lcore variables may be allocated before DPDK's memory allocator (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.
And lcore variables are not usable (shared) for DPDK multi-process, so the lcore_buffer could be allocated through the O/S APIs as anonymous hugepages, instead of using rte_malloc().
The alternative, using rte_malloc(), would disallow allocating lcore variables before DPDK's memory allocator has been initialized, which I think is too late.
Anyway, hugepage backing is not a "must have" here, it is a "nice to have". It can be added to the lcore variables subsystem at a later time.
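For reference, a Linux-only sketch (hypothetical, not part of the patch, which uses aligned_alloc()/_aligned_malloc()) of what allocating lcore_buffer as anonymous hugepages through the O/S API could look like:

#include <sys/mman.h>
#include <stdlib.h>

#include <rte_common.h>

/* Try an anonymous hugepage mapping first; fall back to ordinary libc
 * memory if no hugepages are available. The size is assumed to be a
 * multiple of the hugepage size (LCORE_BUFFER_SIZE is, with the default
 * 1 MB RTE_MAX_LCORE_VAR and 2 MB hugepages).
 */
static void *
lcore_buffer_alloc_hugepage(size_t size)
{
	void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (buf == MAP_FAILED)
		buf = aligned_alloc(RTE_CACHE_LINE_SIZE, size);

	return buf;
}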
Here are some thoughts about optimizing for TLB entry usage...
If lcore variables use hugepages, and LCORE_BUFFER_SIZE matches the hugepage size (2 MB), all the lcore variables will only consume 1 hugepage TLB entry.
However, this may limit the max size of an lcore variable (RTE_MAX_LCORE_VAR) too much, if the system supports many lcores (RTE_MAX_LCORE).
E.g. with 1024 lcores, the max size of an lcore variable would be 2048 bytes.
And with 128 lcores, the max size of an lcore variable would be 16 KB.
So if we want to optimize for hugepage TLB entry usage, the question becomes: What is a reasonable max size of an lcore variable?
And although hugepage backing is only a "nice to have", the max size of an lcore variable (RTE_MAX_LCORE_VAR) is part of the API/ABI, so we should consider it now, if we want to optimize for hugepage TLB entry usage in the future.
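Spelled out as a small sketch (the 2 MB hugepage size is an assumption; other systems use different sizes):

/* If LCORE_BUFFER_SIZE were pinned to a single 2 MB hugepage, the cap on
 * an individual lcore variable would follow directly from RTE_MAX_LCORE.
 */
#define ASSUMED_HUGEPAGE_SIZE (2 * 1024 * 1024)
#define MAX_LCORE_VAR_SINGLE_HUGEPAGE (ASSUMED_HUGEPAGE_SIZE / RTE_MAX_LCORE)
/* RTE_MAX_LCORE == 1024  ->  2048 bytes per lcore variable */
/* RTE_MAX_LCORE == 128   ->  16384 bytes per lcore variable */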
A few more comments below, not related to hugepages.
> +
> +static void *lcore_buffer;
> +static size_t offset = RTE_MAX_LCORE_VAR;
> +
> +static void *
> +lcore_var_alloc(size_t size, size_t align)
> +{
> + void *handle;
> + void *value;
> +
> + offset = RTE_ALIGN_CEIL(offset, align);
> +
> + if (offset + size > RTE_MAX_LCORE_VAR) {
> +#ifdef RTE_EXEC_ENV_WINDOWS
> + lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
> + RTE_CACHE_LINE_SIZE);
> +#else
> + lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
> + LCORE_BUFFER_SIZE);
> +#endif
> + RTE_VERIFY(lcore_buffer != NULL);
> +
> + offset = 0;
> + }
> +
> + handle = RTE_PTR_ADD(lcore_buffer, offset);
> +
> + offset += size;
> +
> + RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
> + memset(value, 0, size);
> +
> + EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with
> a "
> + "%"PRIuPTR"-byte alignment", size, align);
> +
> + return handle;
> +}
> +
> +void *
> +rte_lcore_var_alloc(size_t size, size_t align)
> +{
> + /* Having the per-lcore buffer size aligned on cache lines
> + * assures as well as having the base pointer aligned on cache
> + * size assures that aligned offsets also translate to alipgned
> + * pointers across all values.
> + */
> + RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
> + RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
> + RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
This specific RTE_ASSERT() should be upgraded to RTE_VERIFY(), so it is checked in non-debug builds too.
The code is in the slow path and not inlined, and if this check doesn't pass, accessing the lcore variable will cause a buffer overrun. Prefer failing early.
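In other words, something along these lines (a sketch of the suggested change, not the actual follow-up patch):

/* Checked in release builds too, so an oversized request fails at
 * allocation time instead of surfacing later as a buffer overrun.
 */
RTE_VERIFY(size <= RTE_MAX_LCORE_VAR);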
> +
> + /* '0' means asking for worst-case alignment requirements */
> + if (align == 0)
> + align = alignof(max_align_t);
> +
> + RTE_ASSERT(rte_is_power_of_2(align));
> +
> + return lcore_var_alloc(size, align);
> +}
> +/**
> + * Allocate space in the per-lcore id buffers for an lcore variable.
> + *
> + * The pointer returned is only an opaque identifer of the variable. To
> + * get an actual pointer to a particular instance of the variable use
> + * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
> + *
> + * The lcore variable values' memory is set to zero.
> + *
> + * The allocation is always successful, barring a fatal exhaustion of
> + * the per-lcore id buffer space.
> + *
> + * rte_lcore_var_alloc() is not multi-thread safe.
> + *
> + * @param size
> + * The size (in bytes) of the variable's per-lcore id value. Must be
> > 0.
> + * @param align
> + * If 0, the values will be suitably aligned for any kind of type
> + * (i.e., alignof(max_align_t)). Otherwise, the values will be
> aligned
> + * on a multiple of *align*, which must be a power of 2 and equal or
> + * less than @c RTE_CACHE_LINE_SIZE.
> + * @return
> + * The variable's handle, stored in a void pointer value. The value
> + * is always non-NULL.
> + */
> +__rte_experimental
I don't know how useful these are, but consider adding:
#ifndef RTE_TOOLCHAIN_MSVC
__attribute__((malloc))
__attribute__((alloc_size(1)))
__attribute__((alloc_align(2)))
__attribute__((returns_nonnull))
#endif
> +void *
> +rte_lcore_var_alloc(size_t size, size_t align);
On Thu, Sep 12, 2024 at 2:40 PM Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
>
> Considering hugepages...
>
> Lcore variables may be allocated before DPDK's memory allocator (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.
>
> And lcore variables are not usable (shared) for DPDK multi-process, so the lcore_buffer could be allocated through the O/S APIs as anonymous hugepages, instead of using rte_malloc().
>
> The alternative, using rte_malloc(), would disallow allocating lcore variables before DPDK's memory allocator has been initialized, which I think is too late.
I thought it is not too late. A lot of the subsystems are initialized after the
memory subsystem is initialized.
See the example [1] given in the documentation. I think RTE_INIT needs to be
replaced if the subsystem is initialized after memory is initialized (which is
the case for most of the libraries).
The trace library had a similar situation. It is managed like [2].
[1]
* struct foo_lcore_state {
* int a;
* long b;
* };
*
* static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
*
* long foo_get_a_plus_b(void)
* {
* struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
*
* return state->a + state->b;
* }
*
* RTE_INIT(rte_foo_init)
* {
* RTE_LCORE_VAR_ALLOC(lcore_states);
*
* struct foo_lcore_state *state;
* RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
* (initialize 'state')
* }
*
* (other initialization)
* }
[2]
/* First attempt from huge page */
header = eal_malloc_no_trace(NULL, trace_mem_sz(trace->buff_len), 8);
if (header) {
trace->lcore_meta[count].area = TRACE_AREA_HUGEPAGE;
goto found;
}
/* Second attempt from heap */
header = malloc(trace_mem_sz(trace->buff_len));
if (header == NULL) {
trace_crit("trace mem malloc attempt failed");
header = NULL;
goto fail;
}
> From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> Sent: Thursday, 12 September 2024 15.17
>
> On Thu, Sep 12, 2024 at 2:40 PM Morten Brørup <mb@smartsharesystems.com>
> wrote:
> >
> > > +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> >
> > Considering hugepages...
> >
> > Lcore variables may be allocated before DPDK's memory allocator
> (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.
> >
> > And lcore variables are not usable (shared) for DPDK multi-process, so the
> lcore_buffer could be allocated through the O/S APIs as anonymous hugepages,
> instead of using rte_malloc().
> >
> > The alternative, using rte_malloc(), would disallow allocating lcore
> variables before DPDK's memory allocator has been initialized, which I think
> is too late.
>
> I thought it is not. A lot of the subsystems are initialized after the
> memory subsystem is initialized.
> [1] example given in documentation. I thought, RTE_INIT needs to
> replaced if the subsystem called after memory initialized (which is
> the case for most of the libraries)
The list of RTE_INIT functions is called before main(). It is not very useful.
Yes, it would be good to replace (or supplement) RTE_INIT_PRIO by something similar, which calls the list of "INIT" functions at the appropriate time during EAL initialization.
DPDK should then use this "INIT" list for all its initialization, so the init function of new features (such as this, and trace) can be inserted at the correct location in the list.
> Trace library had a similar situation. It is managed like [2]
Yes, if we insist on using rte_malloc() for lcore variables, the alternative is to prohibit establishing lcore variables in functions called through RTE_INIT.
Although I don't like this alternative, it might be viable.
>
>
>
> [1]
> * struct foo_lcore_state {
> * int a;
> * long b;
> * };
> *
> * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
> *
> * long foo_get_a_plus_b(void)
> * {
> * struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
> *
> * return state->a + state->b;
> * }
> *
> * RTE_INIT(rte_foo_init)
> * {
> * RTE_LCORE_VAR_ALLOC(lcore_states);
> *
> * struct foo_lcore_state *state;
> * RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
> * (initialize 'state')
> * }
> *
> * (other initialization)
> * }
>
>
> [2]
>
>
> /* First attempt from huge page */
> header = eal_malloc_no_trace(NULL, trace_mem_sz(trace->buff_len), 8);
> if (header) {
> trace->lcore_meta[count].area = TRACE_AREA_HUGEPAGE;
> goto found;
> }
>
> /* Second attempt from heap */
> header = malloc(trace_mem_sz(trace->buff_len));
> if (header == NULL) {
> trace_crit("trace mem malloc attempt failed");
> header = NULL;
> goto fail;
>
> }
On Thu, Sep 12, 2024 at 7:11 PM Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> > Sent: Thursday, 12 September 2024 15.17
> >
> > On Thu, Sep 12, 2024 at 2:40 PM Morten Brørup <mb@smartsharesystems.com>
> > wrote:
> > >
> > > > +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> > >
> > > Considering hugepages...
> > >
> > > Lcore variables may be allocated before DPDK's memory allocator
> > (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.
> > >
> > > And lcore variables are not usable (shared) for DPDK multi-process, so the
> > lcore_buffer could be allocated through the O/S APIs as anonymous hugepages,
> > instead of using rte_malloc().
> > >
> > > The alternative, using rte_malloc(), would disallow allocating lcore
> > variables before DPDK's memory allocator has been initialized, which I think
> > is too late.
> >
> > I thought it is not. A lot of the subsystems are initialized after the
> > memory subsystem is initialized.
> > [1] example given in documentation. I thought, RTE_INIT needs to
> > replaced if the subsystem called after memory initialized (which is
> > the case for most of the libraries)
>
> The list of RTE_INIT functions are called before main(). It is not very useful.
>
> Yes, it would be good to replace (or supplement) RTE_INIT_PRIO by something similar, which calls the list of "INIT" functions at the appropriate time during EAL initialization.
>
> DPDK should then use this "INIT" list for all its initialization, so the init function of new features (such as this, and trace) can be inserted at the correct location in the list.
>
> > Trace library had a similar situation. It is managed like [2]
>
> Yes, if we insist on using rte_malloc() for lcore variables, the alternative is to prohibit establishing lcore variables in functions called through RTE_INIT.
I was not insisting on using ONLY rte_malloc(). rte_malloc() can
be called before rte_eal_init() (it will return NULL). The alloc routine can
first check whether rte_malloc() is available, and if not, switch over to glibc.
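A hypothetical sketch of that fallback (not part of the patch; rte_malloc() returning NULL before the DPDK heaps are ready is assumed, per the comment above):

#include <stdlib.h>

#include <rte_common.h>
#include <rte_malloc.h>

/* Prefer the DPDK (hugepage-backed) heap when it is ready; otherwise
 * fall back to libc memory, as the patch does today.
 */
static void *
lcore_buffer_alloc_fallback(void)
{
	void *buf = rte_malloc(NULL, LCORE_BUFFER_SIZE, RTE_CACHE_LINE_SIZE);

	if (buf == NULL)
		buf = aligned_alloc(RTE_CACHE_LINE_SIZE, LCORE_BUFFER_SIZE);

	return buf;
}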
On Thu, Sep 12, 2024 at 8:52 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Thu, Sep 12, 2024 at 7:11 PM Morten Brørup <mb@smartsharesystems.com> wrote:
> >
> > > From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> > > Sent: Thursday, 12 September 2024 15.17
> > >
> > > On Thu, Sep 12, 2024 at 2:40 PM Morten Brørup <mb@smartsharesystems.com>
> > > wrote:
> > > >
> > > > > +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
> > > >
> > > > Considering hugepages...
> > > >
> > > > Lcore variables may be allocated before DPDK's memory allocator
> > > (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.
> > > >
> > > > And lcore variables are not usable (shared) for DPDK multi-process, so the
> > > lcore_buffer could be allocated through the O/S APIs as anonymous hugepages,
> > > instead of using rte_malloc().
> > > >
> > > > The alternative, using rte_malloc(), would disallow allocating lcore
> > > variables before DPDK's memory allocator has been initialized, which I think
> > > is too late.
> > >
> > > I thought it is not. A lot of the subsystems are initialized after the
> > > memory subsystem is initialized.
> > > [1] example given in documentation. I thought, RTE_INIT needs to
> > > replaced if the subsystem called after memory initialized (which is
> > > the case for most of the libraries)
> >
> > The list of RTE_INIT functions are called before main(). It is not very useful.
> >
> > Yes, it would be good to replace (or supplement) RTE_INIT_PRIO by something similar, which calls the list of "INIT" functions at the appropriate time during EAL initialization.
> >
> > DPDK should then use this "INIT" list for all its initialization, so the init function of new features (such as this, and trace) can be inserted at the correct location in the list.
> >
> > > Trace library had a similar situation. It is managed like [2]
> >
> > Yes, if we insist on using rte_malloc() for lcore variables, the alternative is to prohibit establishing lcore variables in functions called through RTE_INIT.
>
> I was not insisting on using ONLY rte_malloc(). Since rte_malloc() can
> be called before rte_eal_init)(it will return NULL). Alloc routine can
> check first rte_malloc() is available if not switch over glibc.
@Mattias Rönnblom This comment is not addressed in v7. Could you check?
On 2024-09-18 12:11, Jerin Jacob wrote:
> On Thu, Sep 12, 2024 at 8:52 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>>
>> On Thu, Sep 12, 2024 at 7:11 PM Morten Brørup <mb@smartsharesystems.com> wrote:
>>>
>>>> From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
>>>> Sent: Thursday, 12 September 2024 15.17
>>>>
>>>> On Thu, Sep 12, 2024 at 2:40 PM Morten Brørup <mb@smartsharesystems.com>
>>>> wrote:
>>>>>
>>>>>> +#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
>>>>>
>>>>> Considering hugepages...
>>>>>
>>>>> Lcore variables may be allocated before DPDK's memory allocator
>>>> (rte_malloc()) is ready, so rte_malloc() cannot be used for lcore variables.
>>>>>
>>>>> And lcore variables are not usable (shared) for DPDK multi-process, so the
>>>> lcore_buffer could be allocated through the O/S APIs as anonymous hugepages,
>>>> instead of using rte_malloc().
>>>>>
>>>>> The alternative, using rte_malloc(), would disallow allocating lcore
>>>> variables before DPDK's memory allocator has been initialized, which I think
>>>> is too late.
>>>>
>>>> I thought it is not. A lot of the subsystems are initialized after the
>>>> memory subsystem is initialized.
>>>> [1] example given in documentation. I thought, RTE_INIT needs to
>>>> replaced if the subsystem called after memory initialized (which is
>>>> the case for most of the libraries)
>>>
>>> The list of RTE_INIT functions are called before main(). It is not very useful.
>>>
>>> Yes, it would be good to replace (or supplement) RTE_INIT_PRIO by something similar, which calls the list of "INIT" functions at the appropriate time during EAL initialization.
>>>
>>> DPDK should then use this "INIT" list for all its initialization, so the init function of new features (such as this, and trace) can be inserted at the correct location in the list.
>>>
>>>> Trace library had a similar situation. It is managed like [2]
>>>
>>> Yes, if we insist on using rte_malloc() for lcore variables, the alternative is to prohibit establishing lcore variables in functions called through RTE_INIT.
>>
>> I was not insisting on using ONLY rte_malloc(). Since rte_malloc() can
>> be called before rte_eal_init)(it will return NULL). Alloc routine can
>> check first rte_malloc() is available if not switch over glibc.
>
>
> @Mattias Rönnblom This comment is not addressed in v7. Could you check?
Calling rte_malloc() and depending on it returning NULL if it's too
early in the initialization process sounds a little fragile, but maybe
it's fine.
One issue with lcore-variables-in-huge-pages that I've failed to mention
this time around, while this is being discussed, is that it would increase
memory usage by something like RTE_MAX_LCORE * 0.5 MB (or, more probably,
a little more).
In the huge pages case, you can't rely on demand paging to avoid
bringing in unused pages.
That said, I suspect some very latency-sensitive apps lock all pages in
memory, and thus lose out on this OS feature.
I suggest we just leave the first incarnation of lcore variables in
normal pages.
Thanks for the reminder.
@@ -282,6 +282,12 @@ F: lib/eal/include/rte_random.h
F: lib/eal/common/rte_random.c
F: app/test/test_rand_perf.c
+Lcore Variables
+M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+F: lib/eal/include/rte_lcore_var.h
+F: lib/eal/common/eal_common_lcore_var.c
+F: app/test/test_lcore_var.c
+
ARM v7
M: Wathsala Vithanage <wathsala.vithanage@arm.com>
F: config/arm/
@@ -41,6 +41,7 @@
/* EAL defines */
#define RTE_CACHE_GUARD_LINES 1
#define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
#define RTE_MAX_MEMSEG_LISTS 128
#define RTE_MAX_MEMSEG_PER_LIST 8192
#define RTE_MAX_MEM_MB_PER_LIST 32768
@@ -99,6 +99,7 @@ The public API headers are grouped by topics:
[interrupts](@ref rte_interrupts.h),
[launch](@ref rte_launch.h),
[lcore](@ref rte_lcore.h),
+ [lcore variables](@ref rte_lcore_var.h),
[per-lcore](@ref rte_per_lcore.h),
[service cores](@ref rte_service.h),
[keepalive](@ref rte_keepalive.h),
@@ -55,6 +55,20 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added EAL per-lcore static memory allocation facility.**
+
+ Added EAL API <rte_lcore_var.h> for statically allocating small,
+ frequently-accessed data structures, for which one instance should
+ exist for each EAL thread and registered non-EAL thread.
+
+ With lcore variables, data is organized spatially on a per-lcore id
+ basis, rather than per library or PMD, avoiding the need for cache
+ aligning (or RTE_CACHE_GUARDing) data structures, which in turn
+ reduces CPU cache internal fragmentation, improving performance.
+
+ Lcore variables are similar to thread-local storage (TLS, e.g.,
+ C11 _Thread_local), but decoupling the values' life time from that
+ of the threads.
Removed Items
-------------
new file mode 100644
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#include <inttypes.h>
+#include <stdlib.h>
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+#include <malloc.h>
+#endif
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define LCORE_BUFFER_SIZE (RTE_MAX_LCORE_VAR * RTE_MAX_LCORE)
+
+static void *lcore_buffer;
+static size_t offset = RTE_MAX_LCORE_VAR;
+
+static void *
+lcore_var_alloc(size_t size, size_t align)
+{
+ void *handle;
+ void *value;
+
+ offset = RTE_ALIGN_CEIL(offset, align);
+
+ if (offset + size > RTE_MAX_LCORE_VAR) {
+#ifdef RTE_EXEC_ENV_WINDOWS
+ lcore_buffer = _aligned_malloc(LCORE_BUFFER_SIZE,
+ RTE_CACHE_LINE_SIZE);
+#else
+ lcore_buffer = aligned_alloc(RTE_CACHE_LINE_SIZE,
+ LCORE_BUFFER_SIZE);
+#endif
+ RTE_VERIFY(lcore_buffer != NULL);
+
+ offset = 0;
+ }
+
+ handle = RTE_PTR_ADD(lcore_buffer, offset);
+
+ offset += size;
+
+ RTE_LCORE_VAR_FOREACH_VALUE(value, handle)
+ memset(value, 0, size);
+
+ EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+ "%"PRIuPTR"-byte alignment", size, align);
+
+ return handle;
+}
+
+void *
+rte_lcore_var_alloc(size_t size, size_t align)
+{
+	/* Having the per-lcore buffer size aligned on the cache line
+	 * size, as well as having the base pointer cache-line aligned,
+	 * assures that aligned offsets also translate to aligned
+	 * pointers across all values.
+	 */
+ RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+ RTE_ASSERT(align <= RTE_CACHE_LINE_SIZE);
+ RTE_ASSERT(size <= RTE_MAX_LCORE_VAR);
+
+ /* '0' means asking for worst-case alignment requirements */
+ if (align == 0)
+ align = alignof(max_align_t);
+
+ RTE_ASSERT(rte_is_power_of_2(align));
+
+ return lcore_var_alloc(size, align);
+}
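To make the address arithmetic above concrete, here is a sketch (not
part of the patch) of how a handle maps to per-lcore values: each
buffer is split into RTE_MAX_LCORE slices of RTE_MAX_LCORE_VAR bytes,
a handle is the address of a variable's lcore 0 copy, and the accessor
in the header below steps one slice per lcore id:

/*
 * lcore_buffer:
 *   | lcore 0 slice | lcore 1 slice | ... | lcore (RTE_MAX_LCORE - 1) slice |
 *     <----------- RTE_MAX_LCORE_VAR bytes per slice ----------->
 *
 * 'handle' points 'offset' bytes into the lcore 0 slice, so:
 */
static inline void *
value_for_lcore(void *handle, unsigned int lcore_id) /* mirrors rte_lcore_var_lcore_ptr() */
{
	return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
}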
@@ -18,6 +18,7 @@ sources += files(
'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
+ 'eal_common_lcore_var.c',
'eal_common_mcfg.c',
'eal_common_memalloc.c',
'eal_common_memory.c',
@@ -27,6 +27,7 @@ headers += files(
'rte_keepalive.h',
'rte_launch.h',
'rte_lcore.h',
+ 'rte_lcore_var.h',
'rte_lock_annotations.h',
'rte_malloc.h',
'rte_mcslock.h',
new file mode 100644
@@ -0,0 +1,385 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Lcore variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. There is one
+ * instance for each current and future lcore id-equipped thread, with
+ * a total of RTE_MAX_LCORE instances. The value of an lcore variable
+ * for a particular lcore id is independent from other values (for
+ * other lcore ids) within the same lcore variable.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for an @c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). The handle type is used to inform the
+ * access macros of the type of the values. A handle may be passed
+ * between modules and threads just like any pointer, but its value
+ * must be treated as an opaque identifier. An allocated handle
+ * never has the value NULL.
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ * 1. Define an lcore variable handle by using @ref RTE_LCORE_VAR_HANDLE.
+ * 2. Allocate lcore variable storage and initialize the handle with
+ * a unique identifier by @ref RTE_LCORE_VAR_ALLOC or
+ *    @ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the time
+ *    of module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but it should
+ * only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids may be frequently read or written by the owners without risking
+ * false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomic loads and
+ * stores) should be employed to assure there are no data races between
+ * the owning thread and any non-owner threads accessing the same
+ * lcore variable instance.
+ *
+ * The value of the lcore variable for a particular lcore id is
+ * accessed using @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * A common pattern is for an EAL thread or a registered non-EAL
+ * thread to access its own lcore variable value. For this purpose, a
+ * short-hand exists in the form of @ref RTE_LCORE_VAR_VALUE.
+ *
+ * Although the handle (as defined by @ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier.
+ *
+ * Lcore variable handles and value pointers may be freely passed
+ * between different threads.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like @c int,
+ * but would more typically be a @c struct.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of @c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * An application may choose to define an lcore variable handle, which
+ * it then never allocates.
+ *
+ * The size of an lcore variable's value must be less than the DPDK
+ * build-time constant @c RTE_MAX_LCORE_VAR.
+ *
+ * The lcore variable values are stored in a series of lcore buffers, which
+ * are allocated from the libc heap. Heap allocation failures are
+ * treated as fatal.
+ *
+ * Lcore variables should generally *not* be @ref __rte_cache_aligned
+ * and need *not* include a @ref RTE_CACHE_GUARD field, since these
+ * constructs are designed to avoid false sharing. In the case of an
+ * lcore variable instance, the thread most recently accessing nearby
+ * data structures should almost always be the lcore variable's
+ * owner. Adding padding will increase the effective memory
+ * working set size, potentially reducing performance.
+ *
+ * Lcore variable values take on an initial value of zero.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * @code{.c}
+ * struct foo_lcore_state {
+ * int a;
+ * long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ * struct foo_lcore_state *state = RTE_LCORE_VAR_VALUE(lcore_states);
+ *
+ * return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ * RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ * struct foo_lcore_state *state;
+ * RTE_LCORE_VAR_FOREACH_VALUE(state, lcore_states) {
+ * (initialize 'state')
+ * }
+ *
+ * (other initialization)
+ * }
+ * @endcode
+ *
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * @code{.c}
+ * struct __rte_cache_aligned foo_lcore_state {
+ * int a;
+ * long b;
+ * RTE_CACHE_GUARD;
+ * };
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * @endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this requires
+ * sizing data structures (e.g., using `__rte_cache_aligned`) to an
+ * even number of cache lines to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly are getting more eager faster
+ * than they are getting more intelligent), one or more "guard" cache
+ * lines may be required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions, and, for example, next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not the
+ * detriment, of system performance).
+ *
+ * Another alternative to @ref rte_lcore_var.h is the @ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between using the various forms of TLS (e.g., @ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and using lcore
+ * variables are:
+ *
+ * * The existence and non-existence of a thread-local variable
+ *   instance follow that of the owning thread. The data cannot be
+ * accessed before the thread has been created, nor after it has
+ * exited. As a result, thread-local variables must be initialized in
+ * a "lazy" manner (e.g., at the point of thread creation). Lcore
+ * variables may be accessed immediately after having been
+ *   allocated (which may be prior to any thread beyond the main
+ *   thread running).
+ * * A thread-local variable is duplicated across all threads in the
+ * process, including unregistered non-EAL threads (i.e.,
+ * "regular" threads). For DPDK applications heavily relying on
+ *   multi-threading (in conjunction with DPDK's "one thread per core"
+ * pattern), either by having many concurrent threads or
+ * creating/destroying threads at a high rate, an excessive use of
+ * thread-local variables may cause inefficiencies (e.g.,
+ * increased thread creation overhead due to thread-local storage
+ *   initialization, or an increased total RAM footprint). Lcore
+ * variables *only* exist for threads with an lcore id.
+ *  * Whether data in thread-local storage may be shared between threads
+ *   (i.e., whether a pointer to a thread-local variable can be passed to
+ *   and successfully dereferenced by a non-owning thread) depends on
+ * the details of the TLS implementation. With GCC __thread and
+ * GCC _Thread_local, such data sharing is supported. In the C11
+ * standard, the result of accessing another thread's
+ * _Thread_local object is implementation-defined. Lcore variable
+ * instances may be accessed reliably by any thread.
+ */
+
+#include <stddef.h>
+#include <stdalign.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type) \
+ type *
+
+/**
+ * Define an lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name) \
+ RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, align) \
+ handle = rte_lcore_var_alloc(size, align)
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle,
+ * with values aligned for any type of object.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC_SIZE(handle, size) \
+ RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, size, 0)
+
+/**
+ * Allocate space for an lcore variable of the size and alignment requirements
+ * suggested by the handle pointer type, and initialize its handle.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_ALLOC(handle) \
+ RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(handle, sizeof(*(handle)), \
+ alignof(typeof(*(handle))))
+
+/**
+ * Allocate an explicitly-sized, explicitly-aligned lcore variable by
+ * means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, align) \
+ RTE_INIT(rte_lcore_var_init_ ## name) \
+ { \
+ RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, align); \
+ }
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a @ref
+ * RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT_SIZE(name, size) \
+ RTE_LCORE_VAR_INIT_SIZE_ALIGN(name, size, 0)
+
+/**
+ * Allocate an lcore variable by means of a @ref RTE_INIT constructor.
+ *
+ * The values of the lcore variable are initialized to zero.
+ */
+#define RTE_LCORE_VAR_INIT(name) \
+ RTE_INIT(rte_lcore_var_init_ ## name) \
+ { \
+ RTE_LCORE_VAR_ALLOC(name); \
+ }
+
+/**
+ * Get void pointer to lcore variable instance with the specified
+ * lcore id.
+ *
+ * @param lcore_id
+ * The lcore id specifying which of the @c RTE_MAX_LCORE value
+ * instances should be accessed. The lcore id need not be valid
+ * (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ * is also not valid (and thus should not be dereferenced).
+ * @param handle
+ * The lcore variable handle.
+ */
+static inline void *
+rte_lcore_var_lcore_ptr(unsigned int lcore_id, void *handle)
+{
+ return RTE_PTR_ADD(handle, lcore_id * RTE_MAX_LCORE_VAR);
+}
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ *
+ * @param lcore_id
+ * The lcore id specifying which of the @c RTE_MAX_LCORE value
+ * instances should be accessed. The lcore id need not be valid
+ * (e.g., may be @ref LCORE_ID_ANY), but in such a case, the pointer
+ * is also not valid (and thus should not be dereferenced).
+ * @param handle
+ * The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle) \
+ ((typeof(handle))rte_lcore_var_lcore_ptr(lcore_id, handle))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_VALUE(handle) \
+ RTE_LCORE_VAR_LCORE_VALUE(rte_lcore_id(), handle)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
+ *
+ * @param value
+ *   A pointer successively set to point to the lcore variable value
+ * corresponding to every lcore id (up to @c RTE_MAX_LCORE).
+ * @param handle
+ * The lcore variable handle.
+ */
+#define RTE_LCORE_VAR_FOREACH_VALUE(value, handle) \
+ for (unsigned int lcore_id = \
+ (((value) = RTE_LCORE_VAR_LCORE_VALUE(0, handle)), 0); \
+ lcore_id < RTE_MAX_LCORE; \
+ lcore_id++, (value) = RTE_LCORE_VAR_LCORE_VALUE(lcore_id, handle))
+
+/**
+ * Allocate space in the per-lcore id buffers for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable. To
+ * get an actual pointer to a particular instance of the variable use
+ * @ref RTE_LCORE_VAR_VALUE or @ref RTE_LCORE_VAR_LCORE_VALUE.
+ *
+ * The lcore variable values' memory is set to zero.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * rte_lcore_var_alloc() is not multi-thread safe.
+ *
+ * @param size
+ * The size (in bytes) of the variable's per-lcore id value. Must be > 0.
+ * @param align
+ * If 0, the values will be suitably aligned for any kind of type
+ * (i.e., alignof(max_align_t)). Otherwise, the values will be aligned
+ *   on a multiple of *align*, which must be a power of 2 and equal to or
+ * less than @c RTE_CACHE_LINE_SIZE.
+ * @return
+ * The variable's handle, stored in a void pointer value. The value
+ * is always non-NULL.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size, size_t align);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
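To complement the in-header example, below is a sketch (not part of the
patch; the 'foo' names are illustrative, and the relaxed-atomics choice
is just one reasonable option) of the documented cross-thread access
pattern: the owning lcore updates its own value through
RTE_LCORE_VAR_VALUE, while any other thread (e.g., the main lcore)
aggregates all values using RTE_LCORE_VAR_FOREACH_VALUE, with atomic
loads and stores to avoid data races:

#include <stdint.h>

#include <rte_lcore_var.h>
#include <rte_stdatomic.h>

struct foo_stats {
	RTE_ATOMIC(uint64_t) pkts;
};

static RTE_LCORE_VAR_HANDLE(struct foo_stats, foo_stats);

RTE_LCORE_VAR_INIT(foo_stats);

/* Owner fast path: EAL thread (or registered non-EAL thread) only. */
static inline void
foo_count_pkt(void)
{
	struct foo_stats *stats = RTE_LCORE_VAR_VALUE(foo_stats);
	uint64_t pkts = rte_atomic_load_explicit(&stats->pkts,
						 rte_memory_order_relaxed);

	rte_atomic_store_explicit(&stats->pkts, pkts + 1,
				  rte_memory_order_relaxed);
}

/* Any thread: sum the per-lcore counters (e.g., for a stats dump). */
static uint64_t
foo_total_pkts(void)
{
	uint64_t total = 0;
	struct foo_stats *stats;

	RTE_LCORE_VAR_FOREACH_VALUE(stats, foo_stats)
		total += rte_atomic_load_explicit(&stats->pkts,
						  rte_memory_order_relaxed);

	return total;
}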
@@ -396,6 +396,8 @@ EXPERIMENTAL {
# added in 24.03
rte_vfio_get_device_info; # WINDOWS_NO_EXPORT
+
+ rte_lcore_var_alloc;
};
INTERNAL {