From patchwork Fri Nov  4 11:17:38 2022
From: Morten Brørup
To: olivier.matz@6wind.com, andrew.rybchenko@oktetlabs.ru,
	mattias.ronnblom@ericsson.com, stephen@networkplumber.org,
	jerinj@marvell.com, bruce.richardson@intel.com
Cc: hofors@lysator.liu.se, thomas@monjalon.net, dev@dpdk.org,
	Morten Brørup
Subject: [PATCH v3 1/3] mempool: split stats from debug
Date: Fri, 4 Nov 2022 12:17:38 +0100
Message-Id: <20221104111740.330-1-mb@smartsharesystems.com>
In-Reply-To: <20221031112634.18329-1-mb@smartsharesystems.com>
References: <20221031112634.18329-1-mb@smartsharesystems.com>

Split stats from debug, to make mempool statistics available without
the performance cost of continuously validating the debug cookies in
the mempool elements.

mempool_perf_autotest shows the following improvements in rate_persec.

The cost of enabling mempool debug without this patch:
-28.1 % and -74.0 %, respectively without and with cache.

The cost of enabling mempool stats (without debug) after this patch:
-5.8 % and -21.2 %, respectively without and with cache.

v3:
* Update the Programmer's Guide.
* Update the description of the RTE_MEMPOOL_STAT_ADD macro.
v2:
* Fix checkpatch warning:
  Use C style comments in rte_include.h, not C++ style.
* Do not rename the rte_mempool_debug_stats structure.

Signed-off-by: Morten Brørup
---
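For illustration only (not part of this patch): once
``RTE_LIBRTE_MEMPOOL_STATS`` is defined in config/rte_config.h, the
counters can be inspected with the existing rte_mempool_dump() API. A
minimal sketch, where the pool name and sizes are arbitrary example
values:

	/* Build-time: enable stats in config/rte_config.h:
	 * #define RTE_LIBRTE_MEMPOOL_STATS 1
	 */
	#include <stdio.h>
	#include <rte_mempool.h>

	static void
	dump_pool_stats(void)
	{
		struct rte_mempool *mp;

		/* 4096 objects of 2048 bytes with a 256-object per-lcore
		 * cache; sizes chosen only for the example. */
		mp = rte_mempool_create("example_pool", 4096, 2048, 256, 0,
				NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
		if (mp == NULL)
			return;

		/* With stats compiled in, this prints the summed put_bulk,
		 * put_objs, get_success_bulk, etc. counters; without stats,
		 * no counters are collected or printed. */
		rte_mempool_dump(stdout, mp);
	}
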
 config/rte_config.h                   |  2 ++
 doc/guides/prog_guide/mempool_lib.rst |  6 +++++-
 lib/mempool/rte_mempool.c             |  6 +++---
 lib/mempool/rte_mempool.h             | 10 +++++-----
 4 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index ae56a86394..3c4876d434 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -47,6 +47,8 @@
 
 /* mempool defines */
 #define RTE_MEMPOOL_CACHE_MAX_SIZE 512
+/* RTE_LIBRTE_MEMPOOL_STATS is not set */
+/* RTE_LIBRTE_MEMPOOL_DEBUG is not set */
 
 /* mbuf defines */
 #define RTE_MBUF_DEFAULT_MEMPOOL_OPS "ring_mp_mc"
diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index 55838317b9..4f4ee33463 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -20,12 +20,16 @@ Cookies
 In debug mode, cookies are added at the beginning and end of allocated blocks.
 The allocated objects then contain overwrite protection fields to help debugging buffer overflows.
 
+Debug mode is disabled by default, but can be enabled by setting ``RTE_LIBRTE_MEMPOOL_DEBUG`` in ``config/rte_config.h``.
+
 Stats
 -----
 
-In debug mode, statistics about get from/put in the pool are stored in the mempool structure.
+In stats mode, statistics about get from/put in the pool are stored in the mempool structure.
 Statistics are per-lcore to avoid concurrent access to statistics counters.
 
+Stats mode is disabled by default, but can be enabled by setting ``RTE_LIBRTE_MEMPOOL_STATS`` in ``config/rte_config.h``.
+
 Memory Alignment Constraints on x86 architecture
 ------------------------------------------------
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 21c94a2b9f..62d1ce764e 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -818,7 +818,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 		RTE_CACHE_LINE_MASK) != 0);
 	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
 		RTE_CACHE_LINE_MASK) != 0);
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
 	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
 		RTE_CACHE_LINE_MASK) != 0);
 	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
@@ -1221,7 +1221,7 @@ rte_mempool_audit(struct rte_mempool *mp)
 void
 rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 {
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
 	struct rte_mempool_info info;
 	struct rte_mempool_debug_stats sum;
 	unsigned lcore_id;
@@ -1269,7 +1269,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	fprintf(f, "  common_pool_count=%u\n", common_count);
 
 	/* sum and dump statistics */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
 	rte_mempool_ops_get_info(mp, &info);
 	memset(&sum, 0, sizeof(sum));
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 3725a72951..2afe332097 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -56,7 +56,7 @@ extern "C" {
 #define RTE_MEMPOOL_HEADER_COOKIE2  0xf2eef2eedadd2e55ULL /**< Header cookie. */
 #define RTE_MEMPOOL_TRAILER_COOKIE  0xadd2e55badbadbadULL /**< Trailer cookie.*/
 
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
 /**
  * A structure that stores the mempool statistics (per-lcore).
 * Note: Cache stats (put_cache_bulk/objs, get_cache_bulk/objs) are not
@@ -237,7 +237,7 @@ struct rte_mempool {
 	uint32_t nb_mem_chunks;          /**< Number of memory chunks */
 	struct rte_mempool_memhdr_list mem_list; /**< List of memory chunks */
 
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
 	/** Per-lcore statistics. */
 	struct rte_mempool_debug_stats stats[RTE_MAX_LCORE];
 #endif
@@ -293,16 +293,16 @@ struct rte_mempool {
 	| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
 	)
 /**
- * @internal When debug is enabled, store some statistics.
+ * @internal When stats is enabled, store some statistics.
  *
  * @param mp
  *   Pointer to the memory pool.
  * @param name
 *   Name of the statistics field to increment in the memory pool.
 * @param n
- *   Number to add to the object-oriented statistics.
+ *   Number to add to the statistics.
 */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
 #define RTE_MEMPOOL_STAT_ADD(mp, name, n) do { \
 		unsigned __lcore_id = rte_lcore_id(); \
 		if (__lcore_id < RTE_MAX_LCORE) { \

From patchwork Fri Nov  4 11:17:39 2022
From: Morten Brørup
To: olivier.matz@6wind.com, andrew.rybchenko@oktetlabs.ru,
	mattias.ronnblom@ericsson.com, stephen@networkplumber.org,
	jerinj@marvell.com, bruce.richardson@intel.com
Cc: hofors@lysator.liu.se, thomas@monjalon.net, dev@dpdk.org,
	Morten Brørup
Subject: [PATCH v3 2/3] mempool: add stats for unregistered non-EAL threads
Date: Fri, 4 Nov 2022 12:17:39 +0100
Message-Id: <20221104111740.330-2-mb@smartsharesystems.com>
In-Reply-To: <20221104111740.330-1-mb@smartsharesystems.com>
References: <20221031112634.18329-1-mb@smartsharesystems.com>
	<20221104111740.330-1-mb@smartsharesystems.com>

This patch adds statistics for unregistered non-EAL threads, which were
previously not included in the statistics.

Add one more entry to the stats array, and use the last index for
unregistered non-EAL threads.

The unregistered non-EAL thread statistics are incremented atomically.

In theory, the EAL thread counters should also be accessed atomically to
avoid tearing on 32-bit architectures. However, it was decided to avoid
the performance cost of using atomic operations, because:
1. these are debug counters, and
2. statistics counters in DPDK are usually incremented non-atomically.

v3 (feedback from Mattias Rönnblom):
* Use correct terminology: Unregistered non-EAL threads.
* Use atomic counting for the unregistered non-EAL threads.
* Reintroduce the conditional instead of offsetting the index by one.
v2:
* New. No v1 of this patch in the series.

Suggested-by: Stephen Hemminger
Signed-off-by: Morten Brørup
---
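For illustration only (not part of this patch): a simplified sketch of
the dispatch logic described above. rte_lcore_id() returns LCORE_ID_ANY
(UINT32_MAX) in unregistered non-EAL threads, so those threads fall
through to the shared last slot, which is updated atomically because
multiple threads may hit it concurrently; "counters" below stands in for
one field of the enlarged mp->stats array:

	#include <stdint.h>
	#include <rte_lcore.h>
	#include <rte_branch_prediction.h>

	/* One counter per lcore, plus a shared slot at index RTE_MAX_LCORE
	 * for unregistered non-EAL threads. */
	static inline void
	stat_add(uint64_t counters[RTE_MAX_LCORE + 1], uint64_t n)
	{
		unsigned int lcore_id = rte_lcore_id();

		if (likely(lcore_id < RTE_MAX_LCORE))
			counters[lcore_id] += n; /* thread-private slot: plain add */
		else
			/* all unregistered non-EAL threads share this slot */
			__atomic_fetch_add(&counters[RTE_MAX_LCORE], n,
					__ATOMIC_RELAXED);
	}
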
 lib/mempool/rte_mempool.c |  2 +-
 lib/mempool/rte_mempool.h | 19 ++++++++++++-------
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 62d1ce764e..e6208125e0 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1272,7 +1272,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 #ifdef RTE_LIBRTE_MEMPOOL_STATS
 	rte_mempool_ops_get_info(mp, &info);
 	memset(&sum, 0, sizeof(sum));
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE + 1; lcore_id++) {
 		sum.put_bulk += mp->stats[lcore_id].put_bulk;
 		sum.put_objs += mp->stats[lcore_id].put_objs;
 		sum.put_common_pool_bulk += mp->stats[lcore_id].put_common_pool_bulk;
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 2afe332097..de6fceac5e 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -238,8 +238,11 @@ struct rte_mempool {
 	struct rte_mempool_memhdr_list mem_list; /**< List of memory chunks */
 
 #ifdef RTE_LIBRTE_MEMPOOL_STATS
-	/** Per-lcore statistics. */
-	struct rte_mempool_debug_stats stats[RTE_MAX_LCORE];
+	/** Per-lcore statistics.
+	 *
+	 * Plus one, for unregistered non-EAL threads.
+	 */
+	struct rte_mempool_debug_stats stats[RTE_MAX_LCORE + 1];
 #endif
 }  __rte_cache_aligned;
@@ -303,11 +306,13 @@ struct rte_mempool {
 *   Number to add to the statistics.
 */
 #ifdef RTE_LIBRTE_MEMPOOL_STATS
-#define RTE_MEMPOOL_STAT_ADD(mp, name, n) do { \
-	unsigned __lcore_id = rte_lcore_id(); \
-	if (__lcore_id < RTE_MAX_LCORE) { \
-		mp->stats[__lcore_id].name += n; \
-	} \
+#define RTE_MEMPOOL_STAT_ADD(mp, name, n) do { \
+	unsigned int __lcore_id = rte_lcore_id(); \
+	if (likely(__lcore_id < RTE_MAX_LCORE)) \
+		(mp)->stats[__lcore_id].name += n; \
+	else \
+		__atomic_fetch_add(&((mp)->stats[RTE_MAX_LCORE].name), \
+				n, __ATOMIC_RELAXED); \
 } while (0)
 #else
 #define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {} while (0)

From patchwork Fri Nov  4 11:17:40 2022
From: Morten Brørup
To: olivier.matz@6wind.com, andrew.rybchenko@oktetlabs.ru,
	mattias.ronnblom@ericsson.com, stephen@networkplumber.org,
	jerinj@marvell.com, bruce.richardson@intel.com
Cc: hofors@lysator.liu.se, thomas@monjalon.net, dev@dpdk.org,
	Morten Brørup
Subject: [PATCH v3 3/3] mempool: use cache for frequently updated stats
Date: Fri, 4 Nov 2022 12:17:40 +0100
Message-Id: <20221104111740.330-3-mb@smartsharesystems.com>
In-Reply-To: <20221104111740.330-1-mb@smartsharesystems.com>
References: <20221031112634.18329-1-mb@smartsharesystems.com>
	<20221104111740.330-1-mb@smartsharesystems.com>

When built with stats enabled (RTE_LIBRTE_MEMPOOL_STATS defined), the
performance of mempools with caches is improved as follows.

When accessing objects in the mempool, either the put_bulk and put_objs
or the get_success_bulk and get_success_objs statistics counters are
likely to be incremented.

By adding an alternative set of these counters to the mempool cache
structure, accessing the dedicated statistics structure is avoided in
the likely cases where these counters are incremented.

The trick here is that the cache line holding the mempool cache
structure is accessed anyway, in order to access the 'len' or
'flushthresh' fields. Updating some statistics counters in the same
cache line has lower performance cost than accessing the statistics
counters in the dedicated statistics structure, which resides in
another cache line.

mempool_perf_autotest with this patch shows the following improvements
in rate_persec.
The cost of enabling mempool stats (without debug) after this patch:
-6.8 % and -6.7 %, respectively without and with cache.

v3:
* Don't update the description of the RTE_MEMPOOL_STAT_ADD macro.
  This change belongs in the first patch of the series.
v2:
* Move the statistics counters into a stats structure.

Signed-off-by: Morten Brørup
---
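For illustration only (not part of this patch): a simplified sketch of
the cache line argument above. The hot counters are embedded next to a
field the fast path must touch anyway, so incrementing them typically
adds no extra cache line access; the structure below is illustrative,
not the real rte_mempool_cache layout:

	#include <stdint.h>

	struct cache_like {
		uint32_t len;      /* read and written on every put/get anyway */
		uint64_t put_bulk; /* hot counters colocated with 'len' */
		uint64_t put_objs;
	};

	static inline void
	put_fast_path(struct cache_like *c, uint32_t n)
	{
		c->len += n;      /* unavoidable fast-path bookkeeping */
		c->put_bulk += 1; /* same cache line as 'len': nearly free */
		c->put_objs += n; /* vs. a likely miss on a separate stats line */
	}
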
 lib/mempool/rte_mempool.c |  9 ++++++
 lib/mempool/rte_mempool.h | 67 ++++++++++++++++++++++++++++++++-------
 2 files changed, 65 insertions(+), 11 deletions(-)

diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index e6208125e0..a18e39af04 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1286,6 +1286,15 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 		sum.get_success_blks += mp->stats[lcore_id].get_success_blks;
 		sum.get_fail_blks += mp->stats[lcore_id].get_fail_blks;
 	}
+	if (mp->cache_size != 0) {
+		/* Add the statistics stored in the mempool caches. */
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+			sum.put_bulk += mp->local_cache[lcore_id].stats.put_bulk;
+			sum.put_objs += mp->local_cache[lcore_id].stats.put_objs;
+			sum.get_success_bulk += mp->local_cache[lcore_id].stats.get_success_bulk;
+			sum.get_success_objs += mp->local_cache[lcore_id].stats.get_success_objs;
+		}
+	}
 	fprintf(f, "  stats:\n");
 	fprintf(f, "    put_bulk=%"PRIu64"\n", sum.put_bulk);
 	fprintf(f, "    put_objs=%"PRIu64"\n", sum.put_objs);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index de6fceac5e..b7ba2542dd 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -86,6 +86,19 @@ struct rte_mempool_cache {
 	uint32_t size;        /**< Size of the cache */
 	uint32_t flushthresh; /**< Threshold before we flush excess elements */
 	uint32_t len;         /**< Current cache count */
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
+	uint32_t unused;
+	/*
+	 * Alternative location for the most frequently updated mempool statistics (per-lcore),
+	 * providing faster update access when using a mempool cache.
+	 */
+	struct {
+		uint64_t put_bulk;         /**< Number of puts. */
+		uint64_t put_objs;         /**< Number of objects successfully put. */
+		uint64_t get_success_bulk; /**< Successful allocation number. */
+		uint64_t get_success_objs; /**< Objects successfully allocated. */
+	} stats;                           /**< Statistics */
+#endif
 	/**
 	 * Cache objects
 	 *
@@ -318,6 +331,24 @@ struct rte_mempool {
 #define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {} while (0)
 #endif
 
+/**
+ * @internal When stats is enabled, store some statistics.
+ *
+ * @param cache
+ *   Pointer to the memory pool cache.
+ * @param name
+ *   Name of the statistics field to increment in the memory pool cache.
+ * @param n
+ *   Number to add to the statistics.
+ */
+#ifdef RTE_LIBRTE_MEMPOOL_STATS
+#define RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) do { \
+		(cache)->stats.name += n; \
+	} while (0)
+#else
+#define RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) do {} while (0)
+#endif
+
 /**
  * @internal Calculate the size of the mempool header.
  *
@@ -1332,13 +1363,17 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 {
 	void **cache_objs;
 
+	/* No cache provided */
+	if (unlikely(cache == NULL))
+		goto driver_enqueue;
+
 	/* increment stat now, adding in mempool always success */
-	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
-	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
+	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
+	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
 
-	/* No cache provided or the request itself is too big for the cache */
-	if (unlikely(cache == NULL || n > cache->flushthresh))
-		goto driver_enqueue;
+	/* The request itself is too big for the cache */
+	if (unlikely(n > cache->flushthresh))
+		goto driver_enqueue_stats_incremented;
 
 	/*
 	 * The cache follows the following algorithm:
@@ -1363,6 +1398,11 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 
 driver_enqueue:
 
+	/* increment stat now, adding in mempool always success */
+	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
+	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
+
+driver_enqueue_stats_incremented:
 	/* push objects to the backend */
 	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 }
@@ -1469,8 +1509,8 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
 
 		if (remaining == 0) {
 			/* The entire request is satisfied from the cache. */
-			RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-			RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+			RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
+			RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);
 
 			return 0;
 		}
@@ -1499,8 +1539,8 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
 
 	cache->len = cache->size;
 
-	RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-	RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+	RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
+	RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);
 
 	return 0;
 
@@ -1522,8 +1562,13 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
 		RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
 		RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
 	} else {
-		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-		RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+		if (likely(cache != NULL)) {
+			RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
+			RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);
+		} else {
+			RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+			RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+		}
 	}
 
 	return ret;