mempool: add cache guard to per-lcore debug statistics

Message ID 20230904091020.12481-1-mb@smartsharesystems.com (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon
Series mempool: add cache guard to per-lcore debug statistics

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/github-robot: build success github build: passed
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-sample-apps-testing success Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/intel-Functional success Functional PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS

Commit Message

Morten Brørup Sept. 4, 2023, 9:10 a.m. UTC
  The per-lcore debug statistics, if enabled, are frequently written by
their individual lcores, so add a cache guard to prevent CPU cache
thrashing.

Depends-on: series-29415 ("clarify purpose of empty cache lines")

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/mempool/rte_mempool.h | 1 +
 1 file changed, 1 insertion(+)
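
For readers outside DPDK, the idea behind the patch can be illustrated with a minimal standalone C sketch (not DPDK code; the 64-byte line size, MAX_CORES and all other names are illustrative assumptions): each core owns one cache-aligned slot in a statistics array, and a trailing guard region keeps a speculative prefetch of the next cache line inside the owning core's slot instead of landing on a neighbouring core's frequently written counters.

/* Minimal standalone sketch -- not DPDK code. The 64-byte cache line
 * size and all names are illustrative assumptions. */
#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 64   /* assumed cache line size */
#define MAX_CORES  4

struct per_core_stats {
	alignas(CACHE_LINE) uint64_t puts;   /* written only by the owning core */
	uint64_t gets;
	/*
	 * Guard: one spare cache line after the hot counters, so a CPU that
	 * speculatively prefetches the line following the one being written
	 * stays inside this core's slot instead of pulling in the first
	 * line of the next core's counters.
	 */
	unsigned char guard[CACHE_LINE];
};

static struct per_core_stats stats[MAX_CORES];

int main(void)
{
	printf("slot size: %zu bytes (%zu cache lines)\n",
	       sizeof(stats[0]), sizeof(stats[0]) / CACHE_LINE);
	printf("core 0 -> core 1 counter distance: %td bytes\n",
	       (char *)&stats[1].puts - (char *)&stats[0].puts);
	return 0;
}

Compiled with a C11 compiler, this merely prints the slot size (two cache lines) and the 128-byte spacing between the counters of adjacent cores; the point is the memory layout, not the output.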
  

Comments

Morten Brørup Sept. 29, 2023, 6:32 p.m. UTC | #1
PING for review.

> From: Morten Brørup [mailto:mb@smartsharesystems.com]
> Sent: Monday, 4 September 2023 11.10
> 
> The per-lcore debug statistics, if enabled, are frequently written by
> their individual lcores, so add a cache guard to prevent CPU cache
> thrashing.
> 
> Depends-on: series-29415 ("clarify purpose of empty cache lines")
> 
> Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> ---
>  lib/mempool/rte_mempool.h | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index a05b25d5b9..f70bf36080 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -78,6 +78,7 @@ struct rte_mempool_debug_stats {
>  	uint64_t get_fail_objs;        /**< Objects that failed to be allocated. */
>  	uint64_t get_success_blks;     /**< Successful allocation number of contiguous blocks. */
>  	uint64_t get_fail_blks;        /**< Failed allocation number of contiguous blocks. */
> +	RTE_CACHE_GUARD;
>  } __rte_cache_aligned;
>  #endif
> 
> --
> 2.17.1
  
Andrew Rybchenko Sept. 30, 2023, 6:24 a.m. UTC | #2
On 9/4/23 12:10, Morten Brørup wrote:
> The per-lcore debug statistics, if enabled, are frequently written by
> their individual lcores, so add a cache guard to prevent CPU cache
> thrashing.
> 
> Depends-on: series-29415 ("clarify purpose of empty cache lines")
> 
> Signed-off-by: Morten Brørup <mb@smartsharesystems.com>

Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
  
Thomas Monjalon Oct. 11, 2023, 9:28 p.m. UTC | #3
30/09/2023 08:24, Andrew Rybchenko:
> On 9/4/23 12:10, Morten Brørup wrote:
> > The per-lcore debug statistics, if enabled, are frequently written by
> > their individual lcores, so add a cache guard to prevent CPU cache
> > thrashing.
> > 
> > Depends-on: series-29415 ("clarify purpose of empty cache lines")
> > 
> > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> 
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

Applied, thanks.
  

Patch

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index a05b25d5b9..f70bf36080 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -78,6 +78,7 @@  struct rte_mempool_debug_stats {
 	uint64_t get_fail_objs;        /**< Objects that failed to be allocated. */
 	uint64_t get_success_blks;     /**< Successful allocation number of contiguous blocks. */
 	uint64_t get_fail_blks;        /**< Failed allocation number of contiguous blocks. */
+	RTE_CACHE_GUARD;
 } __rte_cache_aligned;
 #endif
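
A note on why the guard helps even though the struct is already __rte_cache_aligned: the alignment keeps each lcore's counters on their own cache line, but some CPUs speculatively prefetch the line adjacent to the one being written, so writes near the end of one lcore's entry could still drag a neighbouring lcore's hot line between caches. Judging by the title of the series this patch depends on, RTE_CACHE_GUARD reserves such an empty trailing cache line inside each entry, keeping those prefetches within the writing lcore's own statistics.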