[RFC,1/5] eal: add static per-lcore memory allocation facility

Message ID 20240208181644.455233-2-mattias.ronnblom@ericsson.com (mailing list archive)
State New
Delegated to: Thomas Monjalon
Series: Lcore variables

Checks

Context: ci/checkpatch
Check: warning
Description: coding style issues

Commit Message

Mattias Rönnblom Feb. 8, 2024, 6:16 p.m. UTC
  Introduce DPDK per-lcore id variables, or lcore variables for short.

An lcore variable has one value for every current and future lcore
id-equipped thread.

The primary <rte_lcore_var.h> use case is the static allocation of
small chunks of often-used data, which is logically related, but where
there are performance benefits to reap from keeping updates local to
an lcore.

Lcore variables are similar to thread-local storage (TLS, e.g., C11
_Thread_local), but with the values' lifetime decoupled from that of
the threads.

Lcore variables are also similar in functionality to the FreeBSD
kernel's DPCPU_*() family of macros and their associated build-time
machinery. DPCPU uses linker scripts, which effectively prevents the
reuse of its otherwise seemingly viable approach.

The currently-prevailing way to solve the same problem as lcore
variables is to keep a module's per-lcore data in an RTE_MAX_LCORE-sized
array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
lcore variables over this approach is that data related to the same
lcore is now spatially close in memory, rather than data used by the
same module, which in turn avoids excessive padding and polluting
caches with unused data.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 config/rte_config.h                   |   1 +
 doc/api/doxy-api-index.md             |   1 +
 lib/eal/common/eal_common_lcore_var.c |  80 ++++++
 lib/eal/common/meson.build            |   1 +
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_lcore_var.h       | 352 ++++++++++++++++++++++++++
 lib/eal/version.map                   |   4 +
 7 files changed, 440 insertions(+)
 create mode 100644 lib/eal/common/eal_common_lcore_var.c
 create mode 100644 lib/eal/include/rte_lcore_var.h
  

Comments

Morten Brørup Feb. 9, 2024, 8:25 a.m. UTC | #1
> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Thursday, 8 February 2024 19.17
> 
> Introduce DPDK per-lcore id variables, or lcore variables for short.
> 
> An lcore variable has one value for every current and future lcore
> id-equipped thread.
> 
> The primary <rte_lcore_var.h> use case is for statically allocating
> small chunks of often-used data, which is related logically, but where
> there are performance benefits to reap from having updates being local
> to an lcore.
> 
> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> _Thread_local), but decoupling the values' life time with that of the
> threads.
> 
> Lcore variables are also similar in terms of functionality provided by
> FreeBSD kernel's DPCPU_*() family of macros and the associated
> build-time machinery. DPCPU uses linker scripts, which effectively
> prevents the reuse of its, otherwise seemingly viable, approach.
> 
> The currently-prevailing way to solve the same problem as lcore
> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> lcore variables over this approach is that data related to the same
> lcore now is close (spatially, in memory), rather than data used by
> the same module, which in turn avoid excessive use of padding,
> polluting caches with unused data.
> 
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> ---

This looks very promising. :-)

Here's a bunch of comments, questions and suggestions.


* Question: Performance.
What is the cost of accessing an lcore variable vs a variable in TLS?
I suppose the relative cost diminishes if the variable is a larger struct, compared to a simple uint64_t.

Some of my suggestions below might also affect performance.


* Advantage: Provides direct access to worker thread variables.
With the current alternative (thread-local storage), the main thread cannot access the TLS variables of the worker threads,
unless the worker threads publish global access pointers.
Lcore variables of any lcore thread can be directly accessed by any thread, which simplifies code.


* Advantage: Roadmap towards hugemem.
It would be nice if the lcore variable memory was allocated in hugemem, to reduce TLB misses.
The current alternative (thread-local storage) is also not using hugemem, so this is not a degradation.

Lcore variables are available very early at startup, so I guess the RTE memory allocator is not yet available.
Hugemem could be allocated using O/S allocation, so there is a possible road towards using hugemem.

Either way, using hugemem would require one more indirection (the pointer to the allocated hugemem).
I don't know which has better performance, using hugemem or avoiding the additional pointer dereferencing.


* Suggestion: Consider adding an entry for unregistered non-EAL threads.
Please consider making room for one more entry, shared by all unregistered non-EAL threads, i.e.
making the array size RTE_MAX_LCORE + 1 and indexing by (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).

It would be convenient for the use cases where a variable shared by the unregistered non-EAL threads doesn't need special treatment.

Obviously, this might affect performance.
If the performance cost is not negligible, the additional entry (and indexing branch) could be disabled at build time.


* Suggestion: Do not fix the alignment at 16 bytes.
Pass an alignment parameter to rte_lcore_var_alloc() and use alignof() when calling it:

+#include <stdalign.h>
+
+#define RTE_LCORE_VAR_ALLOC(name)			\
+	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
+
+#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)	\
+	name = rte_lcore_var_alloc(size, alignment)
+
+#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
+	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
+
+ +++ /config/rte_config.h
+#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16


* Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(), but behaves differently.

> +/**
> + * Iterate over each lcore id's value for a lcore variable.
> + */
> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
> +	for (unsigned int lcore_id =					\
> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
> +	     lcore_id < RTE_MAX_LCORE;					\
> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
> +

The macro name RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(i), which only iterates on running cores.
You might want to give it a name that differs more.

If it wasn't for API breakage, I would suggest renaming RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)

Small detail: "var" is a pointer, so consider renaming it to "ptr" and adding _PTR to the macro name.
  
Mattias Rönnblom Feb. 9, 2024, 11:46 a.m. UTC | #2
On 2024-02-09 09:25, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>> Sent: Thursday, 8 February 2024 19.17
>>
>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>
>> An lcore variable has one value for every current and future lcore
>> id-equipped thread.
>>
>> The primary <rte_lcore_var.h> use case is for statically allocating
>> small chunks of often-used data, which is related logically, but where
>> there are performance benefits to reap from having updates being local
>> to an lcore.
>>
>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>> _Thread_local), but decoupling the values' life time with that of the
>> threads.
>>
>> Lcore variables are also similar in terms of functionality provided by
>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>> build-time machinery. DPCPU uses linker scripts, which effectively
>> prevents the reuse of its, otherwise seemingly viable, approach.
>>
>> The currently-prevailing way to solve the same problem as lcore
>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-sized
>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>> lcore variables over this approach is that data related to the same
>> lcore now is close (spatially, in memory), rather than data used by
>> the same module, which in turn avoid excessive use of padding,
>> polluting caches with unused data.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> ---
> 
> This looks very promising. :-)
> 
> Here's a bunch of comments, questions and suggestions.
> 
> 
> * Question: Performance.
> What is the cost of accessing an lcore variable vs a variable in TLS?
> I suppose the relative cost diminishes if the variable is a larger struct, compared to a simple uint64_t.
> 

In case all the relevant data is available in a cache close to the core, 
both options carry quite low overhead.

Accessing a lcore variable will always require a TLS lookup, in the form 
of retrieving the lcore_id of the current thread. In that sense, there 
will likely be a number of extra instructions required to do the lcore 
variable address lookup (i.e., doing the load from rte_lcore_var table 
based on the lcore_id you just looked up, and adding the variable's offset).

A TLS lookup will incur an extra overhead of less than a clock cycle, 
compared to accessing a non-TLS static variable, in case static linking 
is used. For shared objects, TLS is much more expensive (something often 
visible in dynamically linked DPDK app flame graphs, in the form of the 
__tls_addr symbol). Then you need to add ~3 cc/access. This is on a
micro benchmark running on an x86_64 Raptor Lake P-core.

(To visualize the difference between shared object and not, one can use
Compiler Explorer and -fPIC versus -fPIE.)

Things get more complicated if you access the same variable in the same
section of code, since then it can be left on the stack/in a register by
the compiler, especially if LTO is used. In other words, if you do 
rte_lcore_id() several times in a row, only the first one will cost you 
anything. This happens fairly often in DPDK, with rte_lcore_id().

Finally, if you do something like

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index af9fffd81b..a65c30d27e 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -125,14 +125,7 @@ __rte_rand_lfsr258(struct rte_rand_state *state)
  static __rte_always_inline
  struct rte_rand_state *__rte_rand_get_state(void)
  {
-       unsigned int idx;
-
-       idx = rte_lcore_id();
-
-       if (unlikely(idx == LCORE_ID_ANY))
-               return &unregistered_rand_state;
-
-       return RTE_LCORE_VAR_PTR(rand_state);
+       return &unregistered_rand_state;
  }

  uint64_t

...and re-run the rand_perf_autotest, at least I see no difference at
all (in a statically linked build). Both result in rte_rand() using ~11
cc/call. What that suggests is that TLS overhead is very small, and that
any extra instructions required by lcore variables don't add much, if
anything at all, at least in this particular case.

> Some of my suggestions below might also affect performance.
> 
> 
> * Advantage: Provides direct access to worker thread variables.
> With the current alternative (thread-local storage), the main thread cannot access the TLS variables of the worker threads,
> unless worker threads publish global access pointers.
> Lcore variables of any lcore thread can be direcctly accessed by any thread, which simplifies code.
> 
> 
> * Advantage: Roadmap towards hugemem.
> It would be nice if the lcore variable memory was allocated in hugemem, to reduce TLB misses.
> The current alternative (thread-local storage) is also not using hugement, so not a degradation.
> 

I agree, but the thing is it's hard to figure out how much memory is
required for these kinds of variables, given how DPDK is built and
linked. In an OS kernel, you can just take all the symbols, put them in
a special section, and size that section. Such a thing can't easily be
done with DPDK, since shared object builds are supported, plus this
facility should be available not only to DPDK modules, but also to the
application, so relying on linker scripts isn't really feasible
(probably not even for DPDK itself).

In that scenario, you want to size the per-lcore buffer so large that
you don't have to worry about overruns. That will waste memory. 
If you use huge page memory, paging can't help you to avoid 
pre-allocating actual physical memory.

That said, even large (by static per-lcore data standards) buffers are 
potentially small enough not to grow the amount of memory used by a DPDK 
process too much. You need to provision for RTE_MAX_LCORE of them though.

The value of lcore variables should be small, and thus incur few TLB 
misses, so you may not gain much from huge pages. In my world, it's more 
about "fitting often-used per-lcore data into L1 or L2 CPU caches", 
rather than the easier "fitting often-used per-lcore data into a working 
set size reasonably expected to be covered by hardware TLB/caches".

> Lcore variables are available very early at startup, so I guess the RTE memory allocator is not yet available.
> Hugemem could be allocated using O/S allocation, so there is a possible road towards using hugemem.
> 

With the current design, that's true. I'm not sure it's a strict 
requirement though, but it does make things simpler.

> Either way, using hugement would require one more indirection (the pointer to the allocated hugemem).
> I don't know which has better performance, using hugemem or avoiding the additional pointer dereferencing.
> 
> 
> * Suggestion: Consider adding an entry for unregistered non-EAL threads.
> Please consider making room for one more entry, shared by all unregistered non-EAL threads, i.e.
> making the array size RTE_MAX_LCORE + 1 and indexing by (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).
> 
> It would be convenient for the use cases where a variable shared by the unregistered non-EAL threads doesn't need special treatment.
> 

I thought about this, but it would require a conditional in the lookup 
macro, as you show. More importantly, it would make the whole 
<rte_lcore_var.h> thing less elegant and harder to understand. It's bad 
enough that "per-lcore" is actually "per-lcore id" (or the equivalent 
"per-EAL thread and registered non-EAL thread"). Adding a "btw it's <what 
I said before> + 1" is not an improvement.

But useful? Sure.

I think you may still need other data for dealing with unregistered 
threads, for example a mutex or spin lock to deal with concurrency 
issues that arise with shared data.

There may also be cases where you are best off simply disallowing 
unregistered threads from calling into that API.

> Obviously, this might affect performance.
> If the performance cost is not negligible, the additional entry (and indexing branch) could be disabled at build time.
> 
> 
> * Suggestion: Do not fix the alignment at 16 byte.
> Pass an alignment parameter to rte_lcore_var_alloc() and use alignof() when calling it:
> 
> +#include <stdalign.h>
> +
> +#define RTE_LCORE_VAR_ALLOC(name)			\
> +	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
> +
> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)	\
> +	name = rte_lcore_var_alloc(size, alignment)
> +
> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
> +	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
> +
> + +++ /config/rte_config.h
> +#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16
> 
> 

That seems like a very good idea. I'll look into it.

> * Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(), but behaves differently.
> 
>> +/**
>> + * Iterate over each lcore id's value for a lcore variable.
>> + */
>> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
>> +	for (unsigned int lcore_id =					\
>> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
>> +	     lcore_id < RTE_MAX_LCORE;					\
>> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
>> +
> 
> The macro name RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(i), which only iterates on running cores.
> You might want to give it a name that differs more.
> 

True.

Maybe RTE_LCORE_VAR_FOREACH_VALUE() is better? Still room for confusion, 
for sure.

Being consistent with <rte_lcore.h> is not so easy, since it's not even 
consistent with itself. For example, rte_lcore_count() returns the 
number of lcores (EAL threads) *plus the number of registered non-EAL 
threads*, and RTE_LCORE_FOREACH() gives a different count. :)

> If it wasn't for API breakage, I would suggest renaming RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)
> 
> Small detail: "var" is a pointer, so consider renaming it to "ptr" and adding _PTR to the macro name.

The "var" name comes from how <sys/queue.h> names things. I think I had 
it as "ptr" initially. I'll change it back.

Thanks a lot Morten.
  
Morten Brørup Feb. 9, 2024, 1:04 p.m. UTC | #3
> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Friday, 9 February 2024 12.46
> 
> On 2024-02-09 09:25, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >> Sent: Thursday, 8 February 2024 19.17
> >>
> >> Introduce DPDK per-lcore id variables, or lcore variables for short.
> >>
> >> An lcore variable has one value for every current and future lcore
> >> id-equipped thread.
> >>
> >> The primary <rte_lcore_var.h> use case is for statically allocating
> >> small chunks of often-used data, which is related logically, but
> where
> >> there are performance benefits to reap from having updates being
> local
> >> to an lcore.
> >>
> >> Lcore variables are similar to thread-local storage (TLS, e.g., C11
> >> _Thread_local), but decoupling the values' life time with that of
> the
> >> threads.
> >>
> >> Lcore variables are also similar in terms of functionality provided
> by
> >> FreeBSD kernel's DPCPU_*() family of macros and the associated
> >> build-time machinery. DPCPU uses linker scripts, which effectively
> >> prevents the reuse of its, otherwise seemingly viable, approach.
> >>
> >> The currently-prevailing way to solve the same problem as lcore
> >> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-
> sized
> >> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> >> lcore variables over this approach is that data related to the same
> >> lcore now is close (spatially, in memory), rather than data used by
> >> the same module, which in turn avoid excessive use of padding,
> >> polluting caches with unused data.
> >>
> >> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >> ---
> >
> > This looks very promising. :-)
> >
> > Here's a bunch of comments, questions and suggestions.
> >
> >
> > * Question: Performance.
> > What is the cost of accessing an lcore variable vs a variable in TLS?
> > I suppose the relative cost diminishes if the variable is a larger
> struct, compared to a simple uint64_t.
> >
> 
> In case all the relevant data is available in a cache close to the
> core,
> both options carry quite low overhead.
> 
> Accessing a lcore variable will always require a TLS lookup, in the
> form
> of retrieving the lcore_id of the current thread. In that sense, there
> will likely be a number of extra instructions required to do the lcore
> variable address lookup (i.e., doing the load from rte_lcore_var table
> based on the lcore_id you just looked up, and adding the variable's
> offset).
> 
> A TLS lookup will incur an extra overhead of less than a clock cycle,
> compared to accessing a non-TLS static variable, in case static linking
> is used. For shared objects, TLS is much more expensive (something
> often
> visible in dynamically linked DPDK app flame graphs, in the form of the
> __tls_addr symbol). Then you need to add ~3 cc/access. This on a micro
> benchmark running on a x86_64 Raptor Lake P-core.
> 
> (To visualize the difference between shared object and not, one can use
> Compiler Explorer and -fPIC versus -fPIE.)
> 
> Things get more complicated if you access the same variable in the same
> section of code, since then it can be left on the stack/in a register by
> the compiler, especially if LTO is used. In other words, if you do
> rte_lcore_id() several times in a row, only the first one will cost you
> anything. This happens fairly often in DPDK, with rte_lcore_id().
> 
> Finally, if you do something like
> 
> diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
> index af9fffd81b..a65c30d27e 100644
> --- a/lib/eal/common/rte_random.c
> +++ b/lib/eal/common/rte_random.c
> @@ -125,14 +125,7 @@ __rte_rand_lfsr258(struct rte_rand_state *state)
>   static __rte_always_inline
>   struct rte_rand_state *__rte_rand_get_state(void)
>   {
> -       unsigned int idx;
> -
> -       idx = rte_lcore_id();
> -
> -       if (unlikely(idx == LCORE_ID_ANY))
> -               return &unregistered_rand_state;
> -
> -       return RTE_LCORE_VAR_PTR(rand_state);
> +       return &unregistered_rand_state;
>   }
> 
>   uint64_t
> 
> ...and re-run the rand_perf_autotest, at least I see no difference at
> all (in a statically linked build). Both result in rte_rand() using
> ~11
> cc/call. What that suggests is that TLS overhead is very small, and
> that
> any extra instructions required by lcore variables don't add much, if
> anything at all, at least in this particular case.

Excellent. Thank you for a thorough and detailed answer, Mattias.

> 
> > Some of my suggestions below might also affect performance.
> >
> >
> > * Advantage: Provides direct access to worker thread variables.
> > With the current alternative (thread-local storage), the main thread
> cannot access the TLS variables of the worker threads,
> > unless worker threads publish global access pointers.
> > Lcore variables of any lcore thread can be directly accessed by any
> thread, which simplifies code.
> >
> >
> > * Advantage: Roadmap towards hugemem.
> > It would be nice if the lcore variable memory was allocated in
> hugemem, to reduce TLB misses.
> > The current alternative (thread-local storage) is also not using
> hugemem, so not a degradation.
> >
> 
> I agree, but the thing is it's hard to figure out how much memory is
> required for these kind of variables, given how DPDK is built and
> linked. In an OS kernel, you can just take all the symbols, put them in
> a special section, and size that section. Such a thing can't easily be
> done with DPDK, since shared object builds are supported, plus that
> this
> facility should be available not only to DPDK modules, but also the
> application, so relying on linker scripts isn't really feasible
> (probably not even feasible for DPDK itself).
> 
> In that scenario, you want to size up the per-lcore buffer to be so
> large, you don't have to worry about overruns. That will waste memory.
> If you use huge page memory, paging can't help you to avoid
> pre-allocating actual physical memory.

Good point.
I had noticed that RTE_MAX_LCORE_VAR was 1 MB (per RTE_MAX_LCORE), but I hadn't considered how paging helps us use less physical memory than that.

> 
> That said, even large (by static per-lcore data standards) buffers are
> potentially small enough not to grow the amount of memory used by a
> DPDK
> process too much. You need to provision for RTE_MAX_LCORE of them
> though.
> 
> The value of lcore variables should be small, and thus incur few TLB
> misses, so you may not gain much from huge pages. In my world, it's
> more
> about "fitting often-used per-lcore data into L1 or L2 CPU caches",
> rather than the easier "fitting often-used per-lcore data into a
> working
> set size reasonably expected to be covered by hardware TLB/caches".

Yes, I suppose that lcore variables are intended to be small, and large per-lcore structures should keep following the current design patterns for allocation and access.

Perhaps this guideline is worth mentioning in the documentation.

> 
> > Lcore variables are available very early at startup, so I guess the
> RTE memory allocator is not yet available.
> > Hugemem could be allocated using O/S allocation, so there is a
> possible road towards using hugemem.
> >
> 
> With the current design, that's true. I'm not sure it's a strict
> requirement though, but it does make things simpler.
> 
> > Either way, using hugemem would require one more indirection (the
> pointer to the allocated hugemem).
> > I don't know which has better performance, using hugemem or avoiding
> the additional pointer dereferencing.
> >
> >
> > * Suggestion: Consider adding an entry for unregistered non-EAL
> threads.
> > Please consider making room for one more entry, shared by all
> unregistered non-EAL threads, i.e.
> > making the array size RTE_MAX_LCORE + 1 and indexing by
> (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).
> >
> > It would be convenient for the use cases where a variable shared by
> the unregistered non-EAL threads doesn't need special treatment.
> >
> 
> I thought about this, but it would require a conditional in the lookup
> macro, as you show. More importantly, it would make the whole
> <rte_lcore_var.h> thing less elegant and harder to understand. It's bad
> enough that "per-lcore" is actually "per-lcore id" (or the equivalent
> "per-EAL thread and unregistered EAL-thread"). Adding a "btw it's <what
> I said before> + 1" is not an improvement.

We could promote the "one more entry for unregistered non-EAL threads" design pattern (for relevant use cases only!) by extending EAL with one more TLS variable, maintained like _thread_id, but set to RTE_MAX_LCORE when _thread_id is set to -1:

+++ eal_common_thread.c:
  RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
+ RTE_DEFINE_PER_LCORE(int, _thread_idx) = RTE_MAX_LCORE;

and

+++ rte_lcore.h:
static inline unsigned
rte_lcore_id(void)
{
	return RTE_PER_LCORE(_lcore_id);
}
+ static inline unsigned
+ rte_lcore_idx(void)
+ {
+ 	return RTE_PER_LCORE(_thread_idx);
+ }

That would eliminate the (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE) conditional, also where currently used.

> 
> But useful? Sure.
> 
> I think you may still need other data for dealing with unregistered
> threads, for example a mutex or spin lock to deal with concurrency
> issues that arises with shared data.

Adding the extra entry is only for the benefit of use cases where special handling is not required. It will make the code for those use cases much cleaner. I think it is useful.

Use cases requiring special handling should still do the special handling they do today.

> 
> There may also be cases where you are best off by simply disallowing
> unregistered threads from calling into that API.
> 
> > Obviously, this might affect performance.
> > If the performance cost is not negligible, the additional entry (and
> indexing branch) could be disabled at build time.
> >
> >
> > * Suggestion: Do not fix the alignment at 16 byte.
> > Pass an alignment parameter to rte_lcore_var_alloc() and use
> alignof() when calling it:
> >
> > +#include <stdalign.h>
> > +
> > +#define RTE_LCORE_VAR_ALLOC(name)			\
> > +	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
> > +
> > +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)	\
> > +	name = rte_lcore_var_alloc(size, alignment)
> > +
> > +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
> > +	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
> > +
> > + +++ /config/rte_config.h
> > +#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16
> >
> >
> 
> That seems like a very good idea. I'll look into it.
> 
> > * Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(), but
> behaves differently.
> >
> >> +/**
> >> + * Iterate over each lcore id's value for a lcore variable.
> >> + */
> >> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
> >> +	for (unsigned int lcore_id =					\
> >> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
> >> +	     lcore_id < RTE_MAX_LCORE;					\
> >> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
> >> +
> >
> > The macro name RTE_LCORE_VAR_FOREACH() resembles
> RTE_LCORE_FOREACH(i), which only iterates on running cores.
> > You might want to give it a name that differs more.
> >
> 
> True.
> 
> Maybe RTE_LCORE_VAR_FOREACH_VALUE() is better? Still room for
> confusion,
> for sure.
> 
> Being consistent with <rte_lcore.h> is not so easy, since it's not even
> consistent with itself. For example, rte_lcore_count() returns the
> number of lcores (EAL threads) *plus the number of registered non-EAL
> threads*, and RTE_LCORE_FOREACH() gives a different count. :)

Naming is hard. I don't have a good name, and can only offer inspiration...

<rte_lcore.h> has RTE_LCORE_FOREACH() and its RTE_LCORE_FOREACH_WORKER() variant with _WORKER appended.

Perhaps RTE_LCORE_VAR_FOREACH_ALL(), with _ALL appended to indicate a variant.

> 
> > If it wasn't for API breakage, I would suggest renaming
> RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)
> >
> > Small detail: "var" is a pointer, so consider renaming it to "ptr"
> and adding _PTR to the macro name.
> 
> The "var" name comes from how <sys/queue.h> names things. I think I had
> it as "ptr" initially. I'll change it back.

Thanks.

> 
> Thanks a lot Morten.
  
Mattias Rönnblom Feb. 19, 2024, 7:49 a.m. UTC | #4
On 2024-02-09 14:04, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
>> Sent: Friday, 9 February 2024 12.46
>>
>> On 2024-02-09 09:25, Morten Brørup wrote:
>>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>>>> Sent: Thursday, 8 February 2024 19.17
>>>>
>>>> Introduce DPDK per-lcore id variables, or lcore variables for short.
>>>>
>>>> An lcore variable has one value for every current and future lcore
>>>> id-equipped thread.
>>>>
>>>> The primary <rte_lcore_var.h> use case is for statically allocating
>>>> small chunks of often-used data, which is related logically, but
>> where
>>>> there are performance benefits to reap from having updates being
>> local
>>>> to an lcore.
>>>>
>>>> Lcore variables are similar to thread-local storage (TLS, e.g., C11
>>>> _Thread_local), but decoupling the values' life time with that of
>> the
>>>> threads.
>>>>
>>>> Lcore variables are also similar in terms of functionality provided
>> by
>>>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>>>> build-time machinery. DPCPU uses linker scripts, which effectively
>>>> prevents the reuse of its, otherwise seemingly viable, approach.
>>>>
>>>> The currently-prevailing way to solve the same problem as lcore
>>>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-
>> sized
>>>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>>>> lcore variables over this approach is that data related to the same
>>>> lcore now is close (spatially, in memory), rather than data used by
>>>> the same module, which in turn avoid excessive use of padding,
>>>> polluting caches with unused data.
>>>>
>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>> ---
>>>
>>> This looks very promising. :-)
>>>
>>> Here's a bunch of comments, questions and suggestions.
>>>
>>>
>>> * Question: Performance.
>>> What is the cost of accessing an lcore variable vs a variable in TLS?
>>> I suppose the relative cost diminishes if the variable is a larger
>> struct, compared to a simple uint64_t.
>>>
>>
>> In case all the relevant data is available in a cache close to the
>> core,
>> both options carry quite low overhead.
>>
>> Accessing a lcore variable will always require a TLS lookup, in the
>> form
>> of retrieving the lcore_id of the current thread. In that sense, there
>> will likely be a number of extra instructions required to do the lcore
>> variable address lookup (i.e., doing the load from rte_lcore_var table
>> based on the lcore_id you just looked up, and adding the variable's
>> offset).
>>
>> A TLS lookup will incur an extra overhead of less than a clock cycle,
>> compared to accessing a non-TLS static variable, in case static linking
>> is used. For shared objects, TLS is much more expensive (something
>> often
>> visible in dynamically linked DPDK app flame graphs, in the form of the
>> __tls_addr symbol). Then you need to add ~3 cc/access. This was on a
>> micro-benchmark running on an x86_64 Raptor Lake P-core.
>>
>> (To visualize the difference between shared object and not, one can use
>> Compiler Explorer and -fPIC versus -fPIE.)
>>
>> Things get more complicated if you access the same variable in the same
>> section of code, since then it can be left on the stack/in a register by
>> the compiler, especially if LTO is used. In other words, if you do
>> rte_lcore_id() several times in a row, only the first one will cost you
>> anything. This happens fairly often in DPDK, with rte_lcore_id().
>>
>> Finally, if you do something like
>>
>> diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
>> index af9fffd81b..a65c30d27e 100644
>> --- a/lib/eal/common/rte_random.c
>> +++ b/lib/eal/common/rte_random.c
>> @@ -125,14 +125,7 @@ __rte_rand_lfsr258(struct rte_rand_state *state)
>>    static __rte_always_inline
>>    struct rte_rand_state *__rte_rand_get_state(void)
>>    {
>> -       unsigned int idx;
>> -
>> -       idx = rte_lcore_id();
>> -
>> -       if (unlikely(idx == LCORE_ID_ANY))
>> -               return &unregistered_rand_state;
>> -
>> -       return RTE_LCORE_VAR_PTR(rand_state);
>> +       return &unregistered_rand_state;
>>    }
>>
>>    uint64_t
>>
>> ...and re-run the rand_perf_autotest, at least I see no difference at
>> all (in a statically linked build). Both result in rte_rand() using ~11
>> cc/call. What that suggests is that TLS overhead is very small, and
>> that any extra instructions required by lcore variables don't add much,
>> if anything at all, at least in this particular case.
> 
> Excellent. Thank you for a thorough and detailed answer, Mattias.
> 
>>
>>> Some of my suggestions below might also affect performance.
>>>
>>>
>>> * Advantage: Provides direct access to worker thread variables.
>>> With the current alternative (thread-local storage), the main thread
>> cannot access the TLS variables of the worker threads,
>>> unless worker threads publish global access pointers.
>>> Lcore variables of any lcore thread can be directly accessed by any
>> thread, which simplifies code.
>>>
>>>
>>> * Advantage: Roadmap towards hugemem.
>>> It would be nice if the lcore variable memory was allocated in
>> hugemem, to reduce TLB misses.
>>> The current alternative (thread-local storage) is also not using
>> hugemem, so not a degradation.
>>>
>>
>> I agree, but the thing is it's hard to figure out how much memory is
>> required for these kind of variables, given how DPDK is built and
>> linked. In an OS kernel, you can just take all the symbols, put them in
>> a special section, and size that section. Such a thing can't easily be
>> done with DPDK, since shared object builds are supported, plus that
>> this
>> facility should be available not only to DPDK modules, but also the
>> application, so relying on linker scripts isn't really feasible
>> (probably not even feasible for DPDK itself).
>>
>> In that scenario, you want to size up the per-lcore buffer to be so
>> large, you don't have to worry about overruns. That will waste memory.
>> If you use huge page memory, paging can't help you to avoid
>> pre-allocating actual physical memory.
> 
> Good point.
> I had noticed that RTE_MAX_LCORE_VAR was 1 MB (per RTE_MAX_LCORE), but I hadn't considered how paging helps us use less physical memory than that.
> 
>>
>> That said, even large (by static per-lcore data standards) buffers are
>> potentially small enough not to grow the amount of memory used by a
>> DPDK
>> process too much. You need to provision for RTE_MAX_LCORE of them
>> though.
>>
>> The value of lcore variables should be small, and thus incur few TLB
>> misses, so you may not gain much from huge pages. In my world, it's
>> more
>> about "fitting often-used per-lcore data into L1 or L2 CPU caches",
>> rather than the easier "fitting often-used per-lcore data into a
>> working
>> set size reasonably expected to be covered by hardware TLB/caches".
> 
> Yes, I suppose that lcore variables are intended to be small, and large per-lcore structures should keep following the current design patterns for allocation and access.
> 

It seems to me that support for per-lcore heaps should be the solution 
for supporting use cases requiring many, larger and/or dynamic objects 
on a per-lcore basis.

Ideally, you would design both that mechanism and lcore variables 
together, but if you couple enough improvements together you will never 
get anywhere. An instance of where perfect is the enemy of good, perhaps.

> Perhaps this guideline is worth mentioning in the documentation.
> 

What is missing, more specifically? The size limitation and the static 
nature of lcore variables are described, and which current design patterns 
they are expected to (partly) replace is also covered.

>>
>>> Lcore variables are available very early at startup, so I guess the
>> RTE memory allocator is not yet available.
>>> Hugemem could be allocated using O/S allocation, so there is a
>> possible road towards using hugemem.
>>>
>>
>> With the current design, that's true. I'm not sure it's a strict
>> requirement though, but it does make things simpler.
>>
>>> Either way, using hugemem would require one more indirection (the
>> pointer to the allocated hugemem).
>>> I don't know which has better performance, using hugemem or avoiding
>> the additional pointer dereferencing.
>>>
>>>
>>> * Suggestion: Consider adding an entry for unregistered non-EAL
>> threads.
>>> Please consider making room for one more entry, shared by all
>> unregistered non-EAL threads, i.e.
>>> making the array size RTE_MAX_LCORE + 1 and indexing by
>> (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).
>>>
>>> It would be convenient for the use cases where a variable shared by
>> the unregistered non-EAL threads doesn't need special treatment.
>>>
>>
>> I thought about this, but it would require a conditional in the lookup
>> macro, as you show. More importantly, it would make the whole
>> <rte_lcore_var.h> thing less elegant and harder to understand. It's bad
>> enough that "per-lcore" is actually "per-lcore id" (or the equivalent
>> "per-EAL thread and registered non-EAL thread"). Adding a "btw it's <what
>> I said before> + 1" is not an improvement.
> 
> We could promote "one more entry for unregistered non-EAL threads" design pattern (for relevant use cases only!) by extending EAL with one more TLS variable, maintained like _thread_id, but set to RTE_MAX_LCORE when _thread_id is set to -1:
> 
> +++ eal_common_thread.c:
>    RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
> + RTE_DEFINE_PER_LCORE(int, _thread_idx) = RTE_MAX_LCORE;
> 
> and
> 
> +++ rte_lcore.h:
> static inline unsigned
> rte_lcore_id(void)
> {
> 	return RTE_PER_LCORE(_lcore_id);
> }
> + static inline unsigned
> + rte_lcore_idx(void)
> + {
> + 	return RTE_PER_LCORE(_lcore_idx);
> + }
> 
> That would eliminate the (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE) conditional, also where currently used.
> 

Wouldn't that effectively give a shared lcore id to all unregistered 
threads?

We definitely shouldn't further complicate anything related to the DPDK 
threading model, in my opinion.

If a module needs one or more variable instances that aren't per lcore, 
use regular static allocation instead. I would favor clarity over 
convenience here, at least until we know better (see below as well).

>>
>> But useful? Sure.
>>
>> I think you may still need other data for dealing with unregistered
>> threads, for example a mutex or spin lock to deal with concurrency
>> issues that arise with shared data.
> 
> Adding the extra entry is only for the benefit of use cases where special handling is not required. It will make the code for those use cases much cleaner. I think it is useful.
> 

It will make it shorter, but not less clean, I would argue.

> Use cases requiring special handling should still do the special handling they do today.
> 

For DPDK modules using lcore variables and which treat unregistered 
threads as "full citizens", I expect special handling of unregistered 
threads to be the norm. Take rte_random.h as an example. The current API 
does not guarantee MT safety for concurrent calls of unregistered 
threads. It probably should, and it should probably be by means of a 
mutex (not spinlock).

The reason I'm not running off to make an rte_random.c patch is that 
it's unclear to me what the role of unregistered threads in DPDK is. I'm 
reasonably comfortable with a model where there are many threads that 
basically don't interact with the DPDK APIs (except maybe some very 
narrow exposure, like the preemption-safe ring variant). One example of 
such a design would be big slow control plane which uses multi-threading 
and the Linux process scheduler for work scheduling, hosted in the same 
process as a DPDK data plane app.

What I find more strange is a scenario where there are unregistered 
threads which interact with a wide variety of DPDK APIs, do so 
at-high-rates/with-high-performance-requirements and are expected to be 
preemption-safe. So they are basically EAL threads without an lcore id.

Support for that latter scenario has also been voiced, in previous 
discussions, from what I recall.

I think it's hard to answer the question of a "unregistered thread 
spare" for lcore variables without first knowing what the future should 
look like for unregistered threads in DPDK, in terms of being able to 
call into DPDK APIs, preemption-safety guarantees, etc.

It seems that until you have a clearer picture of how generally to treat 
unregistered threads, you are best off with just a per-lcore id instance 
of lcore variables.

>>
>> There may also be cases where you are best off by simply disallowing
>> unregistered threads from calling into that API.
>>
>>> Obviously, this might affect performance.
>>> If the performance cost is not negligible, the additional entry (and
>> indexing branch) could be disabled at build time.
>>>
>>>
>>> * Suggestion: Do not fix the alignment at 16 byte.
>>> Pass an alignment parameter to rte_lcore_var_alloc() and use
>> alignof() when calling it:
>>>
>>> +#include <stdalign.h>
>>> +
>>> +#define RTE_LCORE_VAR_ALLOC(name)			\
>>> +	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
>>> +
>>> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)	\
>>> +	name = rte_lcore_var_alloc(size, alignment)
>>> +
>>> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
>>> +	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
>>> +
>>> + +++ /config/rte_config.h
>>> +#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16
>>>
>>>
>>
>> That seems like a very good idea. I'll look into it.
>>
>>> * Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(), but
>> behaves differently.
>>>
>>>> +/**
>>>> + * Iterate over each lcore id's value for a lcore variable.
>>>> + */
>>>> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
>>>> +	for (unsigned int lcore_id =					\
>>>> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
>>>> +	     lcore_id < RTE_MAX_LCORE;					\
>>>> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
>>>> +
>>>
>>> The macro name RTE_LCORE_VAR_FOREACH() resembles
>> RTE_LCORE_FOREACH(i), which only iterates on running cores.
>>> You might want to give it a name that differs more.
>>>
>>
>> True.
>>
>> Maybe RTE_LCORE_VAR_FOREACH_VALUE() is better? Still room for
>> confusion,
>> for sure.
>>
>> Being consistent with <rte_lcore.h> is not so easy, since it's not even
>> consistent with itself. For example, rte_lcore_count() returns the
>> number of lcores (EAL threads) *plus the number of registered non-EAL
>> threads*, and RTE_LCORE_FOREACH() gives a different count. :)
> 
> Naming is hard. I don't have a good name, and can only offer inspiration...
> 
> <rte_lcore.h> has RTE_LCORE_FOREACH() and its RTE_LCORE_FOREACH_WORKER() variant with _WORKER appended.
> 
> Perhaps RTE_LCORE_VAR_FOREACH_ALL(), with _ALL appended to indicate a variant.
> 
>>
>>> If it wasn't for API breakage, I would suggest renaming
>> RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)
>>>
>>> Small detail: "var" is a pointer, so consider renaming it to "ptr"
>> and adding _PTR to the macro name.
>>
>> The "var" name comes from how <sys/queue.h> names things. I think I had
>> it as "ptr" initially. I'll change it back.
> 
> Thanks.
> 
>>
>> Thanks a lot Morten.
  
Morten Brørup Feb. 19, 2024, 11:10 a.m. UTC | #5
> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Monday, 19 February 2024 08.49
> 
> On 2024-02-09 14:04, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> >> Sent: Friday, 9 February 2024 12.46
> >>
> >> On 2024-02-09 09:25, Morten Brørup wrote:
> >>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >>>> Sent: Thursday, 8 February 2024 19.17
> >>>>
> >>>> Introduce DPDK per-lcore id variables, or lcore variables for
> short.
> >>>>
> >>>> An lcore variable has one value for every current and future lcore
> >>>> id-equipped thread.
> >>>>
> >>>> The primary <rte_lcore_var.h> use case is for statically
> allocating
> >>>> small chunks of often-used data, which is related logically, but
> >> where
> >>>> there are performance benefits to reap from having updates being
> >> local
> >>>> to an lcore.
> >>>>
> >>>> Lcore variables are similar to thread-local storage (TLS, e.g.,
> C11
> >>>> _Thread_local), but decouples the values' lifetime from that of
> >> the
> >>>> threads.
> >>>>
> >>>> Lcore variables are also similar in terms of functionality
> provided
> >> by
> >>>> FreeBSD kernel's DPCPU_*() family of macros and the associated
> >>>> build-time machinery. DPCPU uses linker scripts, which effectively
> >>>> prevents the reuse of its, otherwise seemingly viable, approach.
> >>>>
> >>>> The currently-prevailing way to solve the same problem as lcore
> >>>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-
> >> sized
> >>>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
> >>>> lcore variables over this approach is that data related to the
> same
> >>>> lcore now is close (spatially, in memory), rather than data used
> by
> >>>> the same module, which in turn avoids excessive use of padding,
> >>>> polluting caches with unused data.
> >>>>
> >>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >>>> ---
> >>>
> >>> This looks very promising. :-)
> >>>
> >>> Here's a bunch of comments, questions and suggestions.
> >>>
> >>>
> >>> * Question: Performance.
> >>> What is the cost of accessing an lcore variable vs a variable in
> TLS?
> >>> I suppose the relative cost diminishes if the variable is a larger
> >> struct, compared to a simple uint64_t.
> >>>
> >>
> >> In case all the relevant data is available in a cache close to the
> >> core,
> >> both options carry quite low overhead.
> >>
> >> Accessing an lcore variable will always require a TLS lookup, in the
> >> form
> >> of retrieving the lcore_id of the current thread. In that sense,
> there
> >> will likely be a number of extra instructions required to do the
> lcore
> >> variable address lookup (i.e., doing the load from rte_lcore_var
> table
> >> based on the lcore_id you just looked up, and adding the variable's
> >> offset).
> >>
> >> A TLS lookup will incur an extra overhead of less than a clock
> cycle,
> >> compared to accessing a non-TLS static variable, in case static
> linking
> >> is used. For shared objects, TLS is much more expensive (something
> >> often
> >> visible in dynamically linked DPDK app flame graphs, in the form of
> the
> >> __tls_addr symbol). Then you need to add ~3 cc/access. This was on
> >> a micro-benchmark running on an x86_64 Raptor Lake P-core.
> >>
> >> (To visualize the difference between shared object and not, one can
> use
> >> Compiler Explorer and -fPIC versus -fPIE.)
> >>
> >> Things get more complicated if you access the same variable in the
> >> same section of code, since then it can be left on the stack/in a
> >> register by the compiler, especially if LTO is used. In other words,
> >> if you do
> >> rte_lcore_id() several times in a row, only the first one will cost
> you
> >> anything. This happens fairly often in DPDK, with rte_lcore_id().
> >>
> >> Finally, if you do something like
> >>
> >> diff --git a/lib/eal/common/rte_random.c
> b/lib/eal/common/rte_random.c
> >> index af9fffd81b..a65c30d27e 100644
> >> --- a/lib/eal/common/rte_random.c
> >> +++ b/lib/eal/common/rte_random.c
> >> @@ -125,14 +125,7 @@ __rte_rand_lfsr258(struct rte_rand_state
> *state)
> >>    static __rte_always_inline
> >>    struct rte_rand_state *__rte_rand_get_state(void)
> >>    {
> >> -       unsigned int idx;
> >> -
> >> -       idx = rte_lcore_id();
> >> -
> >> -       if (unlikely(idx == LCORE_ID_ANY))
> >> -               return &unregistered_rand_state;
> >> -
> >> -       return RTE_LCORE_VAR_PTR(rand_state);
> >> +       return &unregistered_rand_state;
> >>    }
> >>
> >>    uint64_t
> >>
> >> ...and re-run the rand_perf_autotest, at least I see no difference
> >> at all (in a statically linked build). Both result in rte_rand()
> >> using ~11 cc/call. What that suggests is that TLS overhead is very
> >> small, and that any extra instructions required by lcore variables
> >> don't add much, if anything at all, at least in this particular case.
> >
> > Excellent. Thank you for a thorough and detailed answer, Mattias.
> >
> >>
> >>> Some of my suggestions below might also affect performance.
> >>>
> >>>
> >>> * Advantage: Provides direct access to worker thread variables.
> >>> With the current alternative (thread-local storage), the main
> thread
> >> cannot access the TLS variables of the worker threads,
> >>> unless worker threads publish global access pointers.
> >>> Lcore variables of any lcore thread can be directly accessed by
> any
> >> thread, which simplifies code.
> >>>
> >>>
> >>> * Advantage: Roadmap towards hugemem.
> >>> It would be nice if the lcore variable memory was allocated in
> >> hugemem, to reduce TLB misses.
> >>> The current alternative (thread-local storage) is also not using
> >> hugemem, so not a degradation.
> >>>
> >>
> >> I agree, but the thing is it's hard to figure out how much memory is
> >> required for these kind of variables, given how DPDK is built and
> >> linked. In an OS kernel, you can just take all the symbols, put them
> in
> >> a special section, and size that section. Such a thing can't easily
> be
> >> done with DPDK, since shared object builds are supported, plus that
> >> this
> >> facility should be available not only to DPDK modules, but also the
> >> application, so relying on linker scripts isn't really feasible
> >> (probably not even feasible for DPDK itself).
> >>
> >> In that scenario, you want to size up the per-lcore buffer to be so
> >> large, you don't have to worry about overruns. That will waste
> memory.
> >> If you use huge page memory, paging can't help you to avoid
> >> pre-allocating actual physical memory.
> >
> > Good point.
> > I had noticed that RTE_MAX_LCORE_VAR was 1 MB (per RTE_MAX_LCORE),
> but I hadn't considered how paging helps us use less physical memory
> than that.
> >
> >>
> >> That said, even large (by static per-lcore data standards) buffers
> are
> >> potentially small enough not to grow the amount of memory used by a
> >> DPDK
> >> process too much. You need to provision for RTE_MAX_LCORE of them
> >> though.
> >>
> >> The value of lcore variables should be small, and thus incur few TLB
> >> misses, so you may not gain much from huge pages. In my world, it's
> >> more
> >> about "fitting often-used per-lcore data into L1 or L2 CPU caches",
> >> rather than the easier "fitting often-used per-lcore data into a
> >> working
> >> set size reasonably expected to be covered by hardware TLB/caches".
> >
> > Yes, I suppose that lcore variables are intended to be small, and
> large per-lcore structures should keep following the current design
> patterns for allocation and access.
> >
> 
> It seems to me that support for per-lcore heaps should be the solution
> for supporting use cases requiring many, larger and/or dynamic objects
> on a per-lcore basis.
> 
> Ideally, you would design both that mechanism and lcore variables
> together, but if you couple enough improvements together
> you will never get anywhere. An instance of where perfect is the enemy
> of good, perhaps.

So true. :-)

> 
> > Perhaps this guideline is worth mentioning in the documentation.
> >
> 
> What is missing, more specifically? The size limitation and the static
> nature of lcore variables are described, and which current design
> patterns
> they are expected to (partly) replace is also covered.

Your documentation is fine, and nothing specific is missing here.
I was thinking out loud that the high-level DPDK documentation should describe common design patterns.

> 
> >>
> >>> Lcore variables are available very early at startup, so I guess the
> >> RTE memory allocator is not yet available.
> >>> Hugemem could be allocated using O/S allocation, so there is a
> >> possible road towards using hugemem.
> >>>
> >>
> >> With the current design, that's true. I'm not sure it's a strict
> >> requirement though, but it does make things simpler.
> >>
> >>> Either way, using hugemem would require one more indirection (the
> >> pointer to the allocated hugemem).
> >>> I don't know which has better performance, using hugemem or
> avoiding
> >> the additional pointer dereferencing.
> >>>
> >>>
> >>> * Suggestion: Consider adding an entry for unregistered non-EAL
> >> threads.
> >>> Please consider making room for one more entry, shared by all
> >> unregistered non-EAL threads, i.e.
> >>> making the array size RTE_MAX_LCORE + 1 and indexing by
> >> (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).
> >>>
> >>> It would be convenient for the use cases where a variable shared by
> >> the unregistered non-EAL threads doesn't need special treatment.
> >>>
> >>
> >> I thought about this, but it would require a conditional in the
> lookup
> >> macro, as you show. More importantly, it would make the whole
> >> <rte_lcore_var.h> thing less elegant and harder to understand. It's
> bad
> >> enough that "per-lcore" is actually "per-lcore id" (or the
> equivalent
> >> "per-EAL thread and registered non-EAL thread"). Adding a "btw it's
> <what
> >> I said before> + 1" is not an improvement.
> >
> > We could promote "one more entry for unregistered non-EAL threads"
> design pattern (for relevant use cases only!) by extending EAL with one
> more TLS variable, maintained like _thread_id, but set to RTE_MAX_LCORE
> when _thread_id is set to -1:
> >
> > +++ eal_common_thread.c:
> >    RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
> > + RTE_DEFINE_PER_LCORE(int, _thread_idx) = RTE_MAX_LCORE;

Ups... wrong reference! I meant to refer to _lcore_id, not _thread_id. Correction:

We could promote "one more entry for unregistered non-EAL threads" design pattern (for relevant use cases only!) by extending EAL with one more TLS variable, maintained like _lcore_id, but set to RTE_MAX_LCORE when _lcore_id is set to LCORE_ID_ANY:

+++ eal_common_thread.c:
  RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
+ RTE_DEFINE_PER_LCORE(unsigned int, _lcore_idx) = RTE_MAX_LCORE;

> >
> > and
> >
> > +++ rte_lcore.h:
> > static inline unsigned
> > rte_lcore_id(void)
> > {
> > 	return RTE_PER_LCORE(_lcore_id);
> > }
> > + static inline unsigned
> > + rte_lcore_idx(void)
> > + {
> > + 	return RTE_PER_LCORE(_lcore_idx);
> > + }
> >
> > That would eliminate the (rte_lcore_id() < RTE_MAX_LCORE ?
> rte_lcore_id() : RTE_MAX_LCORE) conditional, also where currently used.
> >
> 
> Wouldn't that effectively give a shared lcore id to all unregistered
> threads?

Yes, just like rte_lcore_id() is LCORE_ID_ANY (i.e. UINT32_MAX) for all unregistered threads; but it will be usable for array indexing, behaving as a shadow variable of RTE_PER_LCORE(_lcore_id) that optimizes away the "rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE" conditional when indexing.

> 
> We definitely shouldn't further complicate anything related to the DPDK
> threading model, in my opinion.
> 
> If a module needs one or more variable instances that aren't per lcore,
> use regular static allocation instead. I would favor clarity over
> convenience here, at least until we know better (see below as well).
> 
> >>
> >> But useful? Sure.
> >>
> >> I think you may still need other data for dealing with unregistered
> >> threads, for example a mutex or spin lock to deal with concurrency
> >> issues that arise with shared data.
> >
> > Adding the extra entry is only for the benefit of use cases where
> special handling is not required. It will make the code for those use
> cases much cleaner. I think it is useful.
> >
> 
> It will make it shorter, but not less clean, I would argue.
> 
> > Use cases requiring special handling should still do the special
> handling they do today.
> >
> 
> For DPDK modules using lcore variables and which treat unregistered
> threads as "full citizens", I expect special handling of unregistered
> threads to be the norm. Take rte_random.h as an example. The current API
> does not guarantee MT safety for concurrent calls of unregistered
> threads. It probably should, and it should probably be by means of a
> mutex (not spinlock).
> 
> The reason I'm not running off to make an rte_random.c patch is that
> it's unclear to me what the role of unregistered threads in DPDK is.
> I'm
> reasonably comfortable with a model where there are many threads that
> basically don't interact with the DPDK APIs (except maybe some very
> narrow exposure, like the preemption-safe ring variant). One example of
> such a design would be big slow control plane which uses multi-
> threading
> and the Linux process scheduler for work scheduling, hosted in the same
> process as a DPDK data plane app.
> 
> What I find more strange is a scenario where there are unregistered
> threads which interact with a wide variety of DPDK APIs, do so
> at-high-rates/with-high-performance-requirements and are expected to be
> preemption-safe. So they are basically EAL threads without an lcore id.

Yes, this is happening in the wild.
E.g. our application has a mode where it uses fewer EAL threads and processes more in non-EAL threads. That is, the same work is processed either by an EAL thread or a non-EAL thread, depending on the application's mode.
The extra array entry would be useful for such use cases.

> 
> Support for that latter scenario has also been voiced, in previous
> discussions, from what I recall.
> 
> I think it's hard to answer the question of a "unregistered thread
> spare" for lcore variables without first knowing what the future should
> look like for unregistered threads in DPDK, in terms of being able to
> call into DPDK APIs, preemption-safety guarantees, etc.
> 
> It seems that until you have a clearer picture of how generally to
> treat
> unregistered threads, you are best off with just a per-lcore id
> instance
> of lcore variables.

I get your point. It also reduces the risk of bugs caused by incorrect use of the additional entry.

I am arguing for a different angle: Providing the extra entry will help uncover relevant use cases.

> 
> >>
> >> There may also be cases where you are best off by simply disallowing
> >> unregistered threads from calling into that API.
> >>
> >>> Obviously, this might affect performance.
> >>> If the performance cost is not negligible, the additional entry (and
> >> indexing branch) could be disabled at build time.
> >>>
> >>>
> >>> * Suggestion: Do not fix the alignment at 16 byte.
> >>> Pass an alignment parameter to rte_lcore_var_alloc() and use
> >> alignof() when calling it:
> >>>
> >>> +#include <stdalign.h>
> >>> +
> >>> +#define RTE_LCORE_VAR_ALLOC(name)			\
> >>> +	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
> >>> +
> >>> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)
> 	\
> >>> +	name = rte_lcore_var_alloc(size, alignment)
> >>> +
> >>> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
> >>> +	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
> >>> +
> >>> + +++ /config/rte_config.h
> >>> +#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16
> >>>
> >>>
> >>
> >> That seems like a very good idea. I'll look into it.
> >>
> >>> * Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(),
> but
> >> behaves differently.
> >>>
> >>>> +/**
> >>>> + * Iterate over each lcore id's value for a lcore variable.
> >>>> + */
> >>>> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
> >>>> +	for (unsigned int lcore_id =					\
> >>>> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);
> 	\
> >>>> +	     lcore_id < RTE_MAX_LCORE;
> 	\
> >>>> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id,
> name))
> >>>> +
> >>>
> >>> The macro name RTE_LCORE_VAR_FOREACH() resembles
> >> RTE_LCORE_FOREACH(i), which only iterates on running cores.
> >>> You might want to give it a name that differs more.
> >>>
> >>
> >> True.
> >>
> >> Maybe RTE_LCORE_VAR_FOREACH_VALUE() is better? Still room for
> >> confusion,
> >> for sure.
> >>
> >> Being consistent with <rte_lcore.h> is not so easy, since it's not
> even
> >> consistent with itself. For example, rte_lcore_count() returns the
> >> number of lcores (EAL threads) *plus the number of registered non-
> EAL
> >> threads*, and RTE_LCORE_FOREACH() gives a different count. :)
> >
> > Naming is hard. I don't have a good name, and can only offer
> inspiration...
> >
> > <rte_lcore.h> has RTE_LCORE_FOREACH() and its
> RTE_LCORE_FOREACH_WORKER() variant with _WORKER appended.
> >
> > Perhaps RTE_LCORE_VAR_FOREACH_ALL(), with _ALL appended to indicate a
> variant.
> >
> >>
> >>> If it wasn't for API breakage, I would suggest renaming
> >> RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)
> >>>
> >>> Small detail: "var" is a pointer, so consider renaming it to "ptr"
> >> and adding _PTR to the macro name.
> >>
> >> The "var" name comes from how <sys/queue.h> names things. I think I
> had
> >> it as "ptr" initially. I'll change it back.
> >
> > Thanks.
> >
> >>
> >> Thanks a lot Morten.
  
Mattias Rönnblom Feb. 19, 2024, 2:31 p.m. UTC | #6
On 2024-02-19 12:10, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
>> Sent: Monday, 19 February 2024 08.49
>>
>> On 2024-02-09 14:04, Morten Brørup wrote:
>>>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
>>>> Sent: Friday, 9 February 2024 12.46
>>>>
>>>> On 2024-02-09 09:25, Morten Brørup wrote:
>>>>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>>>>>> Sent: Thursday, 8 February 2024 19.17
>>>>>>
>>>>>> Introduce DPDK per-lcore id variables, or lcore variables for
>> short.
>>>>>>
>>>>>> An lcore variable has one value for every current and future lcore
>>>>>> id-equipped thread.
>>>>>>
>>>>>> The primary <rte_lcore_var.h> use case is for statically
>> allocating
>>>>>> small chunks of often-used data, which is related logically, but
>>>> where
>>>>>> there are performance benefits to reap from having updates being
>>>> local
>>>>>> to an lcore.
>>>>>>
>>>>>> Lcore variables are similar to thread-local storage (TLS, e.g.,
>> C11
>>>>>> _Thread_local), but decouples the values' lifetime from that of
>>>> the
>>>>>> threads.
>>>>>>
>>>>>> Lcore variables are also similar in terms of functionality
>> provided
>>>> by
>>>>>> FreeBSD kernel's DPCPU_*() family of macros and the associated
>>>>>> build-time machinery. DPCPU uses linker scripts, which effectively
>>>>>> prevents the reuse of its, otherwise seemingly viable, approach.
>>>>>>
>>>>>> The currently-prevailing way to solve the same problem as lcore
>>>>>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-
>>>> sized
>>>>>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit of
>>>>>> lcore variables over this approach is that data related to the
>> same
>>>>>> lcore now is close (spatially, in memory), rather than data used
>> by
>>>>>> the same module, which in turn avoid excessive use of padding,
>>>>>> polluting caches with unused data.
>>>>>>
>>>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>>>> ---
>>>>>
>>>>> This looks very promising. :-)
>>>>>
>>>>> Here's a bunch of comments, questions and suggestions.
>>>>>
>>>>>
>>>>> * Question: Performance.
>>>>> What is the cost of accessing an lcore variable vs a variable in
>> TLS?
>>>>> I suppose the relative cost diminishes if the variable is a larger
>>>> struct, compared to a simple uint64_t.
>>>>>
>>>>
>>>> In case all the relevant data is available in a cache close to the
>>>> core,
>>>> both options carry quite low overhead.
>>>>
>>>> Accessing a lcore variable will always require a TLS lookup, in the
>>>> form
>>>> of retrieving the lcore_id of the current thread. In that sense,
>> there
>>>> will likely be a number of extra instructions required to do the
>> lcore
>>>> variable address lookup (i.e., doing the load from rte_lcore_var
>> table
>>>> based on the lcore_id you just looked up, and adding the variable's
>>>> offset).
>>>>
>>>> A TLS lookup will incur an extra overhead of less than a clock
>> cycle,
>>>> compared to accessing a non-TLS static variable, in case static
>> linking
>>>> is used. For shared objects, TLS is much more expensive (something
>>>> often
>>>> visible in dynamically linked DPDK app flame graphs, in the form of
>> the
>>>> __tls_addr symbol). Then you need to add ~3 cc/access. This on a
>> micro
>>>> benchmark running on a x86_64 Raptor Lake P-core.
>>>>
>>>> (To visualize the difference between shared object and not, one can
>> use
>>>> Compiler Explorer and -fPIC versus -fPIE.)
>>>>
>>>> Things get more complicated if you access the same variable in the
>> same
>>>> section code, since then it can be left on the stack/in a register
>> by
>>>> the compiler, especially if LTO is used. In other words, if you do
>>>> rte_lcore_id() several times in a row, only the first one will cost
>> you
>>>> anything. This happens fairly often in DPDK, with rte_lcore_id().
>>>>
>>>> Finally, if you do something like
>>>>
>>>> diff --git a/lib/eal/common/rte_random.c
>> b/lib/eal/common/rte_random.c
>>>> index af9fffd81b..a65c30d27e 100644
>>>> --- a/lib/eal/common/rte_random.c
>>>> +++ b/lib/eal/common/rte_random.c
>>>> @@ -125,14 +125,7 @@ __rte_rand_lfsr258(struct rte_rand_state
>> *state)
>>>>     static __rte_always_inline
>>>>     struct rte_rand_state *__rte_rand_get_state(void)
>>>>     {
>>>> -       unsigned int idx;
>>>> -
>>>> -       idx = rte_lcore_id();
>>>> -
>>>> -       if (unlikely(idx == LCORE_ID_ANY))
>>>> -               return &unregistered_rand_state;
>>>> -
>>>> -       return RTE_LCORE_VAR_PTR(rand_state);
>>>> +       return &unregistered_rand_state;
>>>>     }
>>>>
>>>>     uint64_t
>>>>
>>>> ...and re-run the rand_perf_autotest, at least I see no difference
>> at
>>>> all (in a statically linked build). Both results in rte_rand() using
>>>> ~11
>>>> cc/call. What that suggests is that TLS overhead is very small, and
>>>> that
>>>> any extra instructions required by lcore variables doesn't add much,
>> if
>>>> anything at all, at least in this particular case.
>>>
>>> Excellent. Thank you for a thorough and detailed answer, Mattias.
>>>
>>>>
>>>>> Some of my suggestions below might also affect performance.
>>>>>
>>>>>
>>>>> * Advantage: Provides direct access to worker thread variables.
>>>>> With the current alternative (thread-local storage), the main
>> thread
>>>> cannot access the TLS variables of the worker threads,
>>>>> unless worker threads publish global access pointers.
>>>>> Lcore variables of any lcore thread can be directly accessed by
>> any
>>>> thread, which simplifies code.
>>>>>
>>>>>
>>>>> * Advantage: Roadmap towards hugemem.
>>>>> It would be nice if the lcore variable memory was allocated in
>>>> hugemem, to reduce TLB misses.
>>>>> The current alternative (thread-local storage) is also not using
>>>> hugemem, so not a degradation.
>>>>>
>>>>
>>>> I agree, but the thing is it's hard to figure out how much memory is
>>>> required for these kind of variables, given how DPDK is built and
>>>> linked. In an OS kernel, you can just take all the symbols, put them
>> in
>>>> a special section, and size that section. Such a thing can't easily
>> be
>>>> done with DPDK, since shared object builds are supported, plus that
>>>> this
>>>> facility should be available not only to DPDK modules, but also the
>>>> application, so relying on linker scripts isn't really feasible (not
>>>> probably not even feasible for DPDK itself).
>>>>
>>>> In that scenario, you want to size up the per-lcore buffer to be so
>>>> large, you don't have to worry about overruns. That will waste
>> memory.
>>>> If you use huge page memory, paging can't help you to avoid
>>>> pre-allocating actual physical memory.
>>>
>>> Good point.
>>> I had noticed that RTE_MAX_LCORE_VAR was 1 MB (per RTE_MAX_LCORE),
>> but I hadn't considered how paging helps us use less physical memory
>> than that.
>>>
>>>>
>>>> That said, even large (by static per-lcore data standards) buffers
>> are
>>>> potentially small enough not to grow the amount of memory used by a
>>>> DPDK
>>>> process too much. You need to provision for RTE_MAX_LCORE of them
>>>> though.
>>>>
>>>> The value of lcore variables should be small, and thus incur few TLB
>>>> misses, so you may not gain much from huge pages. In my world, it's
>>>> more
>>>> about "fitting often-used per-lcore data into L1 or L2 CPU caches",
>>>> rather than the easier "fitting often-used per-lcore data into a
>>>> working
>>>> set size reasonably expected to be covered by hardware TLB/caches".
>>>
>>> Yes, I suppose that lcore variables are intended to be small, and
>> large per-lcore structures should keep following the current design
>> patterns for allocation and access.
>>>
>>
>> It seems to me that support for per-lcore heaps should be the solution
>> for supporting use cases requiring many, larger and/or dynamic objects
>> on a per-lcore basis.
>>
>> Ideally, you would design both that mechanism and lcore variables
>> together, but then if you couple enough amount of improvements together
>> you will never get anywhere. An instance of where perfect is the enemy
>> of good, perhaps.
> 
> So true. :-)
> 
>>
>>> Perhaps this guideline is worth mentioning in the documentation.
>>>
>>
>> What is missing, more specifically? The size limitation and the static
>> nature of lcore variables is described, and what current design
>> patterns
>> they expected to (partly) replace is also covered.
> 
> Your documentation is fine, and nothing specific is missing here.
> I was thinking out loud that the high level DPDK documentation should describe common design patterns.
> 
>>
>>>>
>>>>> Lcore variables are available very early at startup, so I guess the
>>>> RTE memory allocator is not yet available.
>>>>> Hugemem could be allocated using O/S allocation, so there is a
>>>> possible road towards using hugemem.
>>>>>
>>>>
>>>> With the current design, that true. I'm not sure it's a strict
>>>> requirement though, but it does makes things simpler.
>>>>
>>>>> Either way, using hugemem would require one more indirection (the
>>>> pointer to the allocated hugemem).
>>>>> I don't know which has better performance, using hugemem or
>> avoiding
>>>> the additional pointer dereferencing.
>>>>>
>>>>>
>>>>> * Suggestion: Consider adding an entry for unregistered non-EAL
>>>> threads.
>>>>> Please consider making room for one more entry, shared by all
>>>> unregistered non-EAL threads, i.e.
>>>>> making the array size RTE_MAX_LCORE + 1 and indexing by
>>>> (rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE).
>>>>>
>>>>> It would be convenient for the use cases where a variable shared by
>>>> the unregistered non-EAL threads don't need special treatment.
>>>>>
>>>>
>>>> I thought about this, but it would require a conditional in the
>> lookup
>>>> macro, as you show. More importantly, it would make the whole
>>>> <rte_lcore_var.h> thing less elegant and harder to understand. It's
>> bad
>>>> enough that "per-lcore" is actually "per-lcore id" (or the
>> equivalent
>>>> "per-EAL thread and unregistered EAL-thread"). Adding a "btw it's
>> <what
>>>> I said before> + 1" is not an improvement.
>>>
>>> We could promote "one more entry for unregistered non-EAL threads"
>> design pattern (for relevant use cases only!) by extending EAL with one
>> more TLS variable, maintained like _thread_id, but set to RTE_MAX_LCORE
>> when _tread_id is set to -1:
>>>
>>> +++ eal_common_thread.c:
>>>     RTE_DEFINE_PER_LCORE(int, _thread_id) = -1;
>>> + RTE_DEFINE_PER_LCORE(int, _thread_idx) = RTE_MAX_LCORE;
> 
> Ups... wrong reference! I meant to refer to _lcore_id, not _thread_id. Correction:
> 

OK. I subconsciously ignored this mistake, and read it as "_lcore_id".

> We could promote "one more entry for unregistered non-EAL threads" design pattern (for relevant use cases only!) by extending EAL with one more TLS variable, maintained like _lcore_id, but set to RTE_MAX_LCORE when _lcore_id is set to LCORE_ID_ANY:
> 
> +++ eal_common_thread.c:
>    RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
> + RTE_DEFINE_PER_LCORE(unsigned int, _lcore_idx) = RTE_MAX_LCORE;
> 
>>>
>>> and
>>>
>>> +++ rte_lcore.h:
>>> static inline unsigned
>>> rte_lcore_id(void)
>>> {
>>> 	return RTE_PER_LCORE(_lcore_id);
>>> }
>>> + static inline unsigned
>>> + rte_lcore_idx(void)
>>> + {
>>> + 	return RTE_PER_LCORE(_lcore_idx);
>>> + }
>>>
>>> That would eliminate the (rte_lcore_id() < RTE_MAX_LCORE ?
>> rte_lcore_id() : RTE_MAX_LCORE) conditional, also where currently used.
>>>
>>
>> Wouldn't that effectively give a shared lcore id to all unregistered
>> threads?
> 
> Yes, just like the rte_lcore_id() is LCORE_ID_ANY (i.e. UINT32_MAX) for all unregistered threads; but it will be usable for array indexing, behaving as a shadow variable of RTE_PER_LCORE(_lcore_id) for optimizing away the "rte_lcore_id() < RTE_MAX_LCORE ? rte_lcore_id() : RTE_MAX_LCORE" when indexing.
> 
>>
>> We definitely shouldn't further complicate anything related to the DPDK
>> threading model, in my opinion.
>>
>> If a module needs one or more variable instances that aren't per lcore,
>> use regular static allocation instead. I would favor clarity over
>> convenience here, at least until we know better (see below as well).
>>
>>>>
>>>> But useful? Sure.
>>>>
>>>> I think you may still need other data for dealing with unregistered
>>>> threads, for example a mutex or spin lock to deal with concurrency
>>>> issues that arises with shared data.
>>>
>>> Adding the extra entry is only for the benefit of use cases where
>> special handling is not required. It will make the code for those use
>> cases much cleaner. I think it is useful.
>>>
>>
>> It will make it shorter, but not less clean, I would argue.
>>
>>> Use cases requiring special handling should still do the special
>> handling they do today.
>>>
>>
>> For DPDK modules using lcore variables and which treat unregistered
>> threads as "full citizens", I expect special handling of unregistered
>> threads to be the norm. Take rte_random.h as an example. Current API
>> does not guarantee MT safety for concurrent calls of unregistered
>> threads. It probably should, and it should probably be by means of a
>> mutex (not spinlock).
>>
>> The reason I'm not running off to make a rte_random.c patch is that's
>> it's unclear to me what is the role of unregistered threads in DPDK.
>> I'm
>> reasonably comfortable with a model where there are many threads that
>> basically don't interact with the DPDK APIs (except maybe some very
>> narrow exposure, like the preemption-safe ring variant). One example of
>> such a design would be big slow control plane which uses multi-
>> threading
>> and the Linux process scheduler for work scheduling, hosted in the same
>> process as a DPDK data plane app.
>>
>> What I find more strange is a scenario where there are unregistered
>> threads which interacts with a wide variety of DPDK APIs, does so
>> at-high-rates/with-high-performance-requirements and are expected to be
>> preemption-safe. So they are basically EAL threads without a lcore id.
> 
> Yes, this is happening in the wild.
> E.g. our application has a mode where it uses fewer EAL threads, and processes more in non-EAL threads. So to say, the same work is processed either by an EAL thread or a non-EAL thread, depending on the application's mode.
> The extra array entry would be useful for such use cases.
> 

Is there some particular reason you can't register those non-EAL threads?

>>
>> Support for that latter scenario has also been voiced, in previous
>> discussions, from what I recall.
>>
>> I think it's hard to answer the question of a "unregistered thread
>> spare" for lcore variables without first knowing what the future should
>> look like for unregistered threads in DPDK, in terms of being able to
>> call into DPDK APIs, preemption-safety guarantees, etc.
>>
>> It seems that until you have a clearer picture of how generally to
>> treat
>> unregistered threads, you are best off with just a per-lcore id
>> instance
>> of lcore variables.
> 
> I get your point. It also reduces the risk of bugs caused by incorrect use of the additional entry.
> 
> I am arguing for a different angle: Providing the extra entry will help uncovering relevant use cases.
> 

Maybe have two "spares" in case you find two new use cases? :)

No, adding spares doesn't work, unless you rework the API and rename it 
to fit the new purpose of not only providing per-lcore id variables, but 
per-something-else.

>>
>>>>
>>>> There may also be cases were you are best off by simply disallowing
>>>> unregistered threads from calling into that API.
>>>>
>>>>> Obviously, this might affect performance.
>>>>> If the performance cost is not negligble, the addtional entry (and
>>>> indexing branch) could be disabled at build time.
>>>>>
>>>>>
>>>>> * Suggestion: Do not fix the alignment at 16 byte.
>>>>> Pass an alignment parameter to rte_lcore_var_alloc() and use
>>>> alignof() when calling it:
>>>>>
>>>>> +#include <stdalign.h>
>>>>> +
>>>>> +#define RTE_LCORE_VAR_ALLOC(name)			\
>>>>> +	name = rte_lcore_var_alloc(sizeof(*(name)), alignof(*(name)))
>>>>> +
>>>>> +#define RTE_LCORE_VAR_ALLOC_SIZE_ALIGN(name, size, alignment)
>> 	\
>>>>> +	name = rte_lcore_var_alloc(size, alignment)
>>>>> +
>>>>> +#define RTE_LCORE_VAR_ALLOC_SIZE(name, size)	\
>>>>> +	name = rte_lcore_var_alloc(size, RTE_LCORE_VAR_ALIGNMENT_DEFAULT)
>>>>> +
>>>>> + +++ config/rte_config.h
>>>>> +#define RTE_LCORE_VAR_ALIGNMENT_DEFAULT 16
>>>>>
>>>>>
>>>>
>>>> That seems like a very good idea. I'll look into it.
>>>>
>>>>> * Concern: RTE_LCORE_VAR_FOREACH() resembles RTE_LCORE_FOREACH(),
>> but
>>>> behaves differently.
>>>>>
>>>>>> +/**
>>>>>> + * Iterate over each lcore id's value for a lcore variable.
>>>>>> + */
>>>>>> +#define RTE_LCORE_VAR_FOREACH(var, name)				\
>>>>>> +	for (unsigned int lcore_id =					\
>>>>>> +		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);
>> 	\
>>>>>> +	     lcore_id < RTE_MAX_LCORE;
>> 	\
>>>>>> +	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id,
>> name))
>>>>>> +
>>>>>
>>>>> The macro name RTE_LCORE_VAR_FOREACH() resembles
>>>> RTE_LCORE_FOREACH(i), which only iterates on running cores.
>>>>> You might want to give it a name that differs more.
>>>>>
>>>>
>>>> True.
>>>>
>>>> Maybe RTE_LCORE_VAR_FOREACH_VALUE() is better? Still room for
>>>> confusion,
>>>> for sure.
>>>>
>>>> Being consistent with <rte_lcore.h> is not so easy, since it's not
>> even
>>>> consistent with itself. For example, rte_lcore_count() returns the
>>>> number of lcores (EAL threads) *plus the number of registered non-
>> EAL
>>>> threads*, and RTE_LCORE_FOREACH() give a different count. :)
>>>
>>> Naming is hard. I don't have a good name, and can only offer
>> inspiration...
>>>
>>> <rte_lcore.h> has RTE_LCORE_FOREACH() and its
>> RTE_LCORE_FOREACH_WORKER() variant with _WORKER appended.
>>>
>>> Perhaps RTE_LCORE_VAR_FOREACH_ALL(), with _ALL appended to indicate a
>> variant.
>>>
>>>>
>>>>> If it wasn't for API breakage, I would suggest renaming
>>>> RTE_LCORE_FOREACH() instead, but that's not realistic. ;-)
>>>>>
>>>>> Small detail: "var" is a pointer, so consider renaming it to "ptr"
>>>> and adding _PTR to the macro name.
>>>>
>>>> The "var" name comes from how <sys/queue.h> names things. I think I
>> had
>>>> it as "ptr" initially. I'll change it back.
>>>
>>> Thanks.
>>>
>>>>
>>>> Thanks a lot Morten.
  
Morten Brørup Feb. 19, 2024, 3:04 p.m. UTC | #7
> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> Sent: Monday, 19 February 2024 15.32
> 
> On 2024-02-19 12:10, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> >> Sent: Monday, 19 February 2024 08.49
> >>
> >> On 2024-02-09 14:04, Morten Brørup wrote:
> >>>> From: Mattias Rönnblom [mailto:hofors@lysator.liu.se]
> >>>> Sent: Friday, 9 February 2024 12.46
> >>>>
> >>>> On 2024-02-09 09:25, Morten Brørup wrote:
> >>>>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >>>>>> Sent: Thursday, 8 February 2024 19.17
> >>>>>>
> >>>>>> Introduce DPDK per-lcore id variables, or lcore variables for
> >> short.
> >>>>>>
> >>>>>> An lcore variable has one value for every current and future
> lcore
> >>>>>> id-equipped thread.
> >>>>>>
> >>>>>> The primary <rte_lcore_var.h> use case is for statically
> >> allocating
> >>>>>> small chunks of often-used data, which is related logically, but
> >>>> where
> >>>>>> there are performance benefits to reap from having updates being
> >>>> local
> >>>>>> to an lcore.
> >>>>>>
> >>>>>> Lcore variables are similar to thread-local storage (TLS, e.g.,
> >> C11
> >>>>>> _Thread_local), but decoupling the values' life time with that
> of
> >>>> the
> >>>>>> threads.
> >>>>>>
> >>>>>> Lcore variables are also similar in terms of functionality
> >> provided
> >>>> by
> >>>>>> FreeBSD kernel's DPCPU_*() family of macros and the associated
> >>>>>> build-time machinery. DPCPU uses linker scripts, which
> effectively
> >>>>>> prevents the reuse of its, otherwise seemingly viable, approach.
> >>>>>>
> >>>>>> The currently-prevailing way to solve the same problem as lcore
> >>>>>> variables is to keep a module's per-lcore data as RTE_MAX_LCORE-
> >>>> sized
> >>>>>> array of cache-aligned, RTE_CACHE_GUARDed structs. The benefit
> of
> >>>>>> lcore variables over this approach is that data related to the
> >> same
> >>>>>> lcore now is close (spatially, in memory), rather than data used
> >> by
> >>>>>> the same module, which in turn avoid excessive use of padding,
> >>>>>> polluting caches with unused data.
> >>>>>>
> >>>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >>>>>> ---

[...]

> > Ups... wrong reference! I meant to refer to _lcore_id, not
> _thread_id. Correction:
> >
> 
> OK. I subconsciously ignored this mistake, and read it as "_lcore_id".

:-)

[...]

> >> For DPDK modules using lcore variables and which treat unregistered
> >> threads as "full citizens", I expect special handling of
> unregistered
> >> threads to be the norm. Take rte_random.h as an example. Current API
> >> does not guarantee MT safety for concurrent calls of unregistered
> >> threads. It probably should, and it should probably be by means of a
> >> mutex (not spinlock).
> >>
> >> The reason I'm not running off to make a rte_random.c patch is
> that's
> >> it's unclear to me what is the role of unregistered threads in DPDK.
> >> I'm
> >> reasonably comfortable with a model where there are many threads
> that
> >> basically don't interact with the DPDK APIs (except maybe some very
> >> narrow exposure, like the preemption-safe ring variant). One example
> of
> >> such a design would be big slow control plane which uses multi-
> >> threading
> >> and the Linux process scheduler for work scheduling, hosted in the
> same
> >> process as a DPDK data plane app.
> >>
> >> What I find more strange is a scenario where there are unregistered
> >> threads which interacts with a wide variety of DPDK APIs, does so
> >> at-high-rates/with-high-performance-requirements and are expected to
> be
> >> preemption-safe. So they are basically EAL threads without a lcore
> id.
> >
> > Yes, this is happening in the wild.
> > E.g. our application has a mode where it uses fewer EAL threads, and
> processes more in non-EAL threads. So to say, the same work is
> processed either by an EAL thread or a non-EAL thread, depending on the
> application's mode.
> > The extra array entry would be useful for such use cases.
> >
> 
> Is there some particular reason you can't register those non-EAL
> threads?

Legacy. I suppose we could just do that instead.
Thanks for the suggestion!

> 
> >>
> >> Support for that latter scenario has also been voiced, in previous
> >> discussions, from what I recall.
> >>
> >> I think it's hard to answer the question of a "unregistered thread
> >> spare" for lcore variables without first knowing what the future
> should
> >> look like for unregistered threads in DPDK, in terms of being able
> to
> >> call into DPDK APIs, preemption-safety guarantees, etc.
> >>
> >> It seems that until you have a clearer picture of how generally to
> >> treat
> >> unregistered threads, you are best off with just a per-lcore id
> >> instance
> >> of lcore variables.
> >
> > I get your point. It also reduces the risk of bugs caused by
> incorrect use of the additional entry.
> >
> > I am arguing for a different angle: Providing the extra entry will
> help uncovering relevant use cases.
> >
> 
> Maybe have two "spares" in case you find two new uses cases? :)
> 
> No, adding spares doesn't work, unless you rework the API and rename it
> to fit the new purpose of not only providing per-lcore id variables,
> but per-something-else.
> 

OK. I'm convinced.
  

Patch

diff --git a/config/rte_config.h b/config/rte_config.h
index da265d7dd2..884482e473 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -30,6 +30,7 @@ 
 /* EAL defines */
 #define RTE_CACHE_GUARD_LINES 1
 #define RTE_MAX_HEAPS 32
+#define RTE_MAX_LCORE_VAR 1048576
 #define RTE_MAX_MEMSEG_LISTS 128
 #define RTE_MAX_MEMSEG_PER_LIST 8192
 #define RTE_MAX_MEM_MB_PER_LIST 32768
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a6a768bd7c..bb06bb7ca1 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -98,6 +98,7 @@  The public API headers are grouped by topics:
   [interrupts](@ref rte_interrupts.h),
   [launch](@ref rte_launch.h),
   [lcore](@ref rte_lcore.h),
+  [lcore-variable](@ref rte_lcore_var.h),
   [per-lcore](@ref rte_per_lcore.h),
   [service cores](@ref rte_service.h),
   [keepalive](@ref rte_keepalive.h),
diff --git a/lib/eal/common/eal_common_lcore_var.c b/lib/eal/common/eal_common_lcore_var.c
new file mode 100644
index 0000000000..5276fe7192
--- /dev/null
+++ b/lib/eal/common/eal_common_lcore_var.c
@@ -0,0 +1,80 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+/* XXX: should this file be called eal_common_ldata.c or rte_ldata.c? */
+
+#include <inttypes.h>
+
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_log.h>
+
+#include <rte_lcore_var.h>
+
+#include "eal_private.h"
+
+#define WARN_THRESHOLD 75
+#define MAX_AUTO_ALIGNMENT 16U
+
+/*
+ * Avoid using offset zero, since it would result in a NULL-value
+ * "handle" (offset) pointer, which in principle and per the API
+ * definition shouldn't be an issue, but may confuse some tools and
+ * users.
+ */
+#define INITIAL_OFFSET MAX_AUTO_ALIGNMENT
+
+char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR] __rte_cache_aligned;
+
+static uintptr_t allocated = INITIAL_OFFSET;
+
+static void
+verify_allocation(uintptr_t new_allocated)
+{
+	static bool has_warned;
+
+	RTE_VERIFY(new_allocated < RTE_MAX_LCORE_VAR);
+
+	if (new_allocated > (WARN_THRESHOLD * RTE_MAX_LCORE_VAR) / 100 &&
+	    !has_warned) {
+		EAL_LOG(WARNING, "Per-lcore data usage has exceeded %d%% "
+			"of the maximum capacity (%d bytes)", WARN_THRESHOLD,
+			RTE_MAX_LCORE_VAR);
+		has_warned = true;
+	}
+}
+
+static void *
+lcore_var_alloc(size_t size, size_t alignment)
+{
+	uintptr_t new_allocated = RTE_ALIGN_CEIL(allocated, alignment);
+
+	void *offset = (void *)new_allocated;
+
+	new_allocated += size;
+
+	verify_allocation(new_allocated);
+
+	allocated = new_allocated;
+
+	EAL_LOG(DEBUG, "Allocated %"PRIuPTR" bytes of per-lcore data with a "
+		"%"PRIuPTR"-byte alignment", size, alignment);
+
+	return offset;
+}
+
+void *
+rte_lcore_var_alloc(size_t size)
+{
+	RTE_BUILD_BUG_ON(RTE_MAX_LCORE_VAR % RTE_CACHE_LINE_SIZE != 0);
+
+	/* Allocations are naturally aligned (i.e., given the same
+	 * alignment as the object size), up to a maximum of 16 bytes,
+	 * which should satisfy the alignment requirements of any kind
+	 * of object.
+	 */
+	size_t alignment = RTE_MIN(size, MAX_AUTO_ALIGNMENT);
+
+	return lcore_var_alloc(size, alignment);
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 22a626ba6f..d41403680b 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -18,6 +18,7 @@  sources += files(
         'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
+        'eal_common_lcore_var.c',
         'eal_common_mcfg.c',
         'eal_common_memalloc.c',
         'eal_common_memory.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..9449253e23 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -27,6 +27,7 @@  headers += files(
         'rte_keepalive.h',
         'rte_launch.h',
         'rte_lcore.h',
+        'rte_lcore_var.h',
         'rte_lock_annotations.h',
         'rte_malloc.h',
         'rte_mcslock.h',
diff --git a/lib/eal/include/rte_lcore_var.h b/lib/eal/include/rte_lcore_var.h
new file mode 100644
index 0000000000..c1854dc6a4
--- /dev/null
+++ b/lib/eal/include/rte_lcore_var.h
@@ -0,0 +1,352 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Ericsson AB
+ */
+
+#ifndef _RTE_LCORE_VAR_H_
+#define _RTE_LCORE_VAR_H_
+
+/**
+ * @file
+ *
+ * RTE Per-lcore id variables
+ *
+ * This API provides a mechanism to create and access per-lcore id
+ * variables in a space- and cycle-efficient manner.
+ *
+ * A per-lcore id variable (or lcore variable for short) has one value
+ * for each EAL thread and registered non-EAL thread. In other words,
+ * there's one copy of its value for each and every current and future
+ * lcore id-equipped thread, with the total number of copies amounting
+ * to \c RTE_MAX_LCORE.
+ *
+ * In order to access the values of an lcore variable, a handle is
+ * used. The type of the handle is a pointer to the value's type
+ * (e.g., for a \c uint32_t lcore variable, the handle is a
+ * <code>uint32_t *</code>). A handle may be passed between modules and
+ * threads just like any pointer, but its value is not the address of
+ * any particular object, but rather just an opaque identifier, stored
+ * in a typed pointer (to inform the access macros of the values' type).
+ *
+ * @b Creation
+ *
+ * An lcore variable is created in two steps:
+ *  1. Define a lcore variable handle by using \ref RTE_LCORE_VAR_HANDLE.
+ *  2. Allocate lcore variable storage and initialize the handle with
+ *     a unique identifier by \ref RTE_LCORE_VAR_ALLOC or
+ *     \ref RTE_LCORE_VAR_INIT. Allocation generally occurs at the
+ *     time of module initialization, but may be done at any time.
+ *
+ * An lcore variable is not tied to the owning thread's lifetime. It's
+ * available for use by any thread immediately after having been
+ * allocated, and continues to be available throughout the lifetime of
+ * the EAL.
+ *
+ * Lcore variables cannot and need not be freed.
+ *
+ * @b Access
+ *
+ * The value of any lcore variable for any lcore id may be accessed
+ * from any thread (including unregistered threads), but should
+ * generally only be *frequently* read from or written to by the owner.
+ *
+ * Values of the same lcore variable but owned by two different lcore
+ * ids *may* be frequently read or written by the owners without the
+ * risk of false sharing.
+ *
+ * An appropriate synchronization mechanism (e.g., atomics) should be
+ * employed to ensure there are no data races between the owning
+ * thread and any non-owner threads accessing the same lcore variable
+ * instance.
+ *
+ * The value of the lcore variable for a particular lcore id may be
+ * retrieved with \ref RTE_LCORE_VAR_LCORE_GET. To get a pointer to the
+ * same object, use \ref RTE_LCORE_VAR_LCORE_PTR.
+ *
+ * To modify the value of an lcore variable for a particular lcore id,
+ * either access the object through the pointer retrieved by \ref
+ * RTE_LCORE_VAR_LCORE_PTR or, for primitive types, use \ref
+ * RTE_LCORE_VAR_LCORE_SET.
+ *
+ * Each of the access macros has a short-hand which may be used by an
+ * EAL thread or registered non-EAL thread to access the lcore variable
+ * instance of its own lcore id. Those are \ref RTE_LCORE_VAR_GET,
+ * \ref RTE_LCORE_VAR_PTR, and \ref RTE_LCORE_VAR_SET.
+ *
+ * Although the handle (as defined by \ref RTE_LCORE_VAR_HANDLE) is a
+ * pointer with the same type as the value, it may not be directly
+ * dereferenced and must be treated as an opaque identifier. The
+ * *identifier* value is common across all lcore ids.
+ *
+ * @b Storage
+ *
+ * An lcore variable's values may be of a primitive type like \c int,
+ * but would more typically be a \c struct. An application may choose
+ * to define an lcore variable which it then never allocates.
+ *
+ * The lcore variable handle introduces a per-variable (not
+ * per-value/per-lcore id) overhead of \c sizeof(void *) bytes, so
+ * there are some memory footprint gains to be made by organizing all
+ * per-lcore id data for a particular module as one lcore variable
+ * (e.g., as a struct).
+ *
+ * The sum of all lcore variables, plus any padding required, must be
+ * less than the DPDK build-time constant \c RTE_MAX_LCORE_VAR. A
+ * violation of this maximum results in the process being terminated.
+ *
+ * It's reasonable to expect that \c RTE_MAX_LCORE_VAR is on the
+ * same order of magnitude in size as a thread stack.
+ *
+ * The lcore variable storage buffers are kept in the BSS section in
+ * the resulting binary, where data generally isn't mapped in until
+ * it's accessed. This means that unused portions of the lcore
+ * variable storage area will not occupy any physical memory (with a
+ * granularity of the memory page size [usually 4 kB]).
+ *
+ * Lcore variables should generally *not* be \ref __rte_cache_aligned
+ * and need *not* include a \ref RTE_CACHE_GUARD field, since the use
+ * of these constructs is designed to avoid false sharing. In the
+ * case of an lcore variable instance, all nearby data structures
+ * should almost always be written to by a single thread (the lcore
+ * variable owner). Adding padding will increase the effective memory
+ * working set size, potentially reducing performance.
+ *
+ * @b Example
+ *
+ * Below is an example of the use of an lcore variable:
+ *
+ * \code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ * };
+ *
+ * static RTE_LCORE_VAR_HANDLE(struct foo_lcore_state, lcore_states);
+ *
+ * long foo_get_a_plus_b(void)
+ * {
+ *         struct foo_lcore_state *state = RTE_LCORE_VAR_PTR(lcore_states);
+ *
+ *         return state->a + state->b;
+ * }
+ *
+ * RTE_INIT(rte_foo_init)
+ * {
+ *         unsigned int lcore_id;
+ *
+ *         RTE_LCORE_VAR_ALLOC(lcore_states);
+ *
+ *         struct foo_lcore_state *state;
+ *         RTE_LCORE_VAR_FOREACH(state, lcore_states) {
+ *                 (initialize 'state')
+ *         }
+ *
+ *         (other initialization)
+ * }
+ * \endcode
+ *
+ * @b Alternatives
+ *
+ * Lcore variables are designed to replace a pattern exemplified below:
+ * \code{.c}
+ * struct foo_lcore_state {
+ *         int a;
+ *         long b;
+ *         RTE_CACHE_GUARD;
+ * } __rte_cache_aligned;
+ *
+ * static struct foo_lcore_state lcore_states[RTE_MAX_LCORE];
+ * \endcode
+ *
+ * This scheme is simple and effective, but has one drawback: the data
+ * is organized so that objects related to all lcores for a particular
+ * module are kept close in memory. At a bare minimum, this forces the
+ * use of cache-line alignment to avoid false sharing. With CPU
+ * hardware prefetching and memory loads resulting from speculative
+ * execution (functions which seemingly grow more eager faster than
+ * they grow more intelligent), one or more "guard" cache lines may be
+ * required to separate one lcore's data from another's.
+ *
+ * Lcore variables have the upside of working with, not against, the
+ * CPU's assumptions, and for example next-line prefetchers may well
+ * work the way their designers intended (i.e., to the benefit, not
+ * detriment, of system performance).
+ *
+ * Another alternative to \ref rte_lcore_var.h is the \ref
+ * rte_per_lcore.h API, which makes use of thread-local storage (TLS,
+ * e.g., GCC __thread or C11 _Thread_local). The main differences
+ * between using the various forms of TLS (e.g., \ref
+ * RTE_DEFINE_PER_LCORE or _Thread_local) and using lcore variables
+ * are:
+ *
+ *   * The existence and non-existence of a thread-local variable
+ *     instance follow that of its thread: the data cannot be
+ *     accessed before the thread has been created, nor after it has
+ *     exited. One effect of this is that thread-local variables must
+ *     be initialized in a "lazy" manner (e.g., at the point of thread
+ *     creation). Lcore variables may be accessed immediately after
+ *     having been allocated (which usually happens before any thread
+ *     beyond the main thread is running).
+ *   * A thread-local variable is duplicated across all threads in the
+ *     process, including unregistered non-EAL threads (i.e.,
+ *     "regular" threads). For DPDK applications heavily relying on
+ *     multi-threading (in conjunction with DPDK's "one thread per
+ *     core" pattern), either by having many concurrent threads or by
+ *     creating/destroying threads at a high rate, excessive use of
+ *     thread-local variables may cause inefficiencies (e.g.,
+ *     increased thread creation overhead due to thread-local storage
+ *     initialization, or an increased total RAM footprint). Lcore
+ *     variables *only* exist for threads with an lcore id, and thus
+ *     not for such "regular" threads.
+ *   * Whether data in thread-local storage may be shared between
+ *     threads (i.e., whether a pointer to a thread-local variable may
+ *     be passed to, and successfully dereferenced by, a non-owning
+ *     thread) depends on the details of the TLS implementation. With
+ *     GCC __thread, and with _Thread_local as implemented by GCC,
+ *     such data sharing is supported. In the C11 standard, the result
+ *     of accessing another thread's _Thread_local object is
+ *     implementation-defined. Lcore variable instances may be
+ *     accessed reliably by any thread.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stddef.h>
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_lcore.h>
+
+/**
+ * Given the lcore variable type, produces the type of the lcore
+ * variable handle.
+ */
+#define RTE_LCORE_VAR_HANDLE_TYPE(type)		\
+	type *
+
+/**
+ * Define an lcore variable handle.
+ *
+ * This macro defines a variable which is used as a handle to access
+ * the various per-lcore id instances of a per-lcore id variable.
+ *
+ * The aim with this macro is to make clear at the point of
+ * declaration that this is an lcore handle, rather than a regular
+ * pointer.
+ *
+ * Add @b static as a prefix in case the lcore variable is only to be
+ * accessed from a particular translation unit.
+ */
+#define RTE_LCORE_VAR_HANDLE(type, name)	\
+	RTE_LCORE_VAR_HANDLE_TYPE(type) name
+
+/**
+ * Allocate space for an lcore variable, and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC_SZ(name, size)	\
+	name = rte_lcore_var_alloc(size)
+
+/**
+ * Allocate space for an lcore variable of the size suggested by the
+ * handler pointer type and initialize its handle.
+ */
+#define RTE_LCORE_VAR_ALLOC(name)			\
+	RTE_LCORE_VAR_ALLOC_SZ(name, sizeof(*(name)))
+
+/**
+ * Allocate an explicitly-sized lcore variable by means of a \ref
+ * RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT_SZ(name, size)				\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC_SZ(name, size);			\
+	}
+
+/**
+ * Allocate an lcore variable by means of a \ref RTE_INIT constructor.
+ */
+#define RTE_LCORE_VAR_INIT(name)					\
+	RTE_INIT(rte_lcore_var_init_ ## name)				\
+	{								\
+		RTE_LCORE_VAR_ALLOC(name);				\
+	}
+
+#define __RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)		\
+	((void *)(&rte_lcore_var[lcore_id][(uintptr_t)(name)]))
+
+/**
+ * Get pointer to lcore variable instance with the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)				\
+	((typeof(name))__RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
+
+/**
+ * Get the value of an lcore variable instance of the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_GET(lcore_id, name)		\
+	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)))
+
+/**
+ * Set the value of an lcore variable instance of the specified lcore id.
+ */
+#define RTE_LCORE_VAR_LCORE_SET(lcore_id, name, value)		\
+	(*(RTE_LCORE_VAR_LCORE_PTR(lcore_id, name)) = (value))
+
+/**
+ * Get pointer to lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_PTR(name) RTE_LCORE_VAR_LCORE_PTR(rte_lcore_id(), name)
+
+/**
+ * Get value of lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_GET(name) RTE_LCORE_VAR_LCORE_GET(rte_lcore_id(), name)
+
+/**
+ * Set value of lcore variable instance of the current thread.
+ *
+ * May only be used by EAL threads and registered non-EAL threads.
+ */
+#define RTE_LCORE_VAR_SET(name, value) \
+	RTE_LCORE_VAR_LCORE_SET(rte_lcore_id(), name, value)
+
+/**
+ * Iterate over each lcore id's value for an lcore variable.
+ */
+#define RTE_LCORE_VAR_FOREACH(var, name)				\
+	for (unsigned int lcore_id =					\
+		     (((var) = RTE_LCORE_VAR_LCORE_PTR(0, name)), 0);	\
+	     lcore_id < RTE_MAX_LCORE;					\
+	     lcore_id++, (var) = RTE_LCORE_VAR_LCORE_PTR(lcore_id, name))
+
+extern char rte_lcore_var[RTE_MAX_LCORE][RTE_MAX_LCORE_VAR];
+
+/**
+ * Allocate space in the per-lcore id buffer for an lcore variable.
+ *
+ * The pointer returned is only an opaque identifier of the variable.
+ * To get an actual pointer to a particular instance of the variable,
+ * use \ref RTE_LCORE_VAR_PTR or \ref RTE_LCORE_VAR_LCORE_PTR.
+ *
+ * The allocation is always successful, barring a fatal exhaustion of
+ * the per-lcore id buffer space.
+ *
+ * @return
+ *   The id of the variable, stored in a void pointer value.
+ */
+__rte_experimental
+void *
+rte_lcore_var_alloc(size_t size);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_LCORE_VAR_H_ */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 5e0cd47c82..e90b86115a 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -393,6 +393,10 @@  EXPERIMENTAL {
 	# added in 23.07
 	rte_memzone_max_get;
 	rte_memzone_max_set;
+
+	# added in 24.03
+	rte_lcore_var_alloc;
+	rte_lcore_var;
 };
 
 INTERNAL {