From patchwork Fri Feb 24 18:10:58 2023
X-Patchwork-Submitter: Kamalakshitha Aligeri
X-Patchwork-Id: 124544
X-Patchwork-Delegate: thomas@monjalon.net
From: Kamalakshitha Aligeri
To: Yuying.Zhang@intel.com, beilei.xing@intel.com, olivier.matz@6wind.com,
 andrew.rybchenko@oktetlabs.ru, bruce.richardson@intel.com,
 mb@smartsharesystems.com, konstantin.ananyev@huawei.com,
 Honnappa.Nagarahalli@arm.com, ruifeng.wang@arm.com, feifei.wang2@arm.com
Cc: dev@dpdk.org, nd@arm.com, Kamalakshitha Aligeri
Subject: [PATCH v10 1/2] mempool cache: add zero-copy get and put functions
Date: Fri, 24 Feb 2023 18:10:58 +0000
Message-Id: <20230224181059.338206-2-kamalakshitha.aligeri@arm.com>
In-Reply-To: <20230224181059.338206-1-kamalakshitha.aligeri@arm.com>
References: <20230224181059.338206-1-kamalakshitha.aligeri@arm.com>
List-Id: DPDK patches and discussions

From: Morten Brørup

Zero-copy access to mempool caches is beneficial for PMD performance, and
must be provided by the mempool library to fix [Bug 1052] without a
performance regression.

[Bug 1052]: https://bugs.dpdk.org/show_bug.cgi?id=1052

Bugzilla ID: 1052

Signed-off-by: Morten Brørup
Signed-off-by: Kamalakshitha Aligeri
Reviewed-by: Ruifeng Wang
Acked-by: Morten Brørup

---
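For readers new to the API, a minimal sketch of the intended put-side
calling pattern, with a fallback to the copying API. This is illustrative
only, not part of the patch: 'objs' and 'n' are hypothetical caller-side
names, and <rte_mempool.h> and <rte_lcore.h> are assumed included. A real
user, such as patch 2/2 of this series, writes the pointers directly from
its own data structures instead of from an intermediate array:

  static inline void
  zc_put_sketch(struct rte_mempool *mp, void **objs, unsigned int n)
  {
  	struct rte_mempool_cache *cache =
  		rte_mempool_default_cache(mp, rte_lcore_id());
  	void **cache_objs;

  	/* The zero-copy put returns NULL if n exceeds the flush threshold. */
  	if (cache == NULL ||
  			(cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, n)) == NULL) {
  		rte_mempool_generic_put(mp, objs, n, cache);
  		return;
  	}

  	/* Store the object pointers directly into the cache. */
  	for (unsigned int i = 0; i < n; i++)
  		cache_objs[i] = objs[i];
  }
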
v10:
* Added mempool test cases with zero-copy API's.
v9:
* Also set rte_errno in zero-copy put function, if returning NULL.
  (Honnappa)
* Revert v3 comparison to prevent overflow if n is really huge and len is
  non-zero. (Olivier)
v8:
* Actually include the rte_errno header file.
  Note to self: The changes only take effect on the disk after the file in
  the text editor has been saved.
v7:
* Fix typo in function description. (checkpatch)
* Zero-copy functions may set rte_errno; include rte_errno header file.
  (ci/loongarch-compilation)
v6:
* Improve description of the 'n' parameter to the zero-copy get function.
  (Konstantin, Bruce)
* The caches used for zero-copy may not be user-owned, so remove this word
  from the function descriptions. (Kamalakshitha)
v5:
* Bugfix: Compare zero-copy get request to the cache size instead of the
  flush threshold; otherwise refill could overflow the memory allocated
  for the cache. (Andrew) A worked example follows this changelog.
* Split the zero-copy put function into an internal function doing the
  work, and a public function with trace.
* Avoid code duplication by rewriting rte_mempool_do_generic_put() to use
  the internal zero-copy put function. (Andrew)
* Corrected the return type of rte_mempool_cache_zc_put_bulk() from
  void * to void **; it returns a pointer to an array of objects.
* Fix coding style: Add missing curly brackets. (Andrew)
v4:
* Fix checkpatch warnings.
v3:
* Bugfix: Respect the cache size; compare to the flush threshold instead
  of RTE_MEMPOOL_CACHE_MAX_SIZE.
* Added 'rewind' function for incomplete 'put' operations. (Konstantin)
* Replace RTE_ASSERTs with runtime checks of the request size.
  Instead of failing, return NULL if the request is too big. (Konstantin)
* Modified comparison to prevent overflow if n is really huge and len is
  non-zero. (Andrew)
* Updated the comments in the code.
v2:
* Fix checkpatch warnings.
* Fix missing registration of trace points.
* The functions are inline, so they don't go into the map file.
v1 changes from the RFC:
* Removed run-time parameter checks. (Honnappa)
  This is a hot fast-path function; correct application behaviour is
  required, i.e. function parameters must be valid.
* Added RTE_ASSERT for parameters instead.
  Code for this is only generated if built with RTE_ENABLE_ASSERT.
* Removed fallback when 'cache' parameter is not set. (Honnappa)
* Chose the simple get function; i.e. do not move the existing objects in
  the cache to the top of the new stack, just leave them at the bottom.
* Renamed the functions. Other suggestions are welcome, of course. ;-)
* Updated the function descriptions.
* Added the functions to trace_fp and version.map.
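The worked example promised in the v5 note, with illustrative numbers
plugged into the refill arithmetic of rte_mempool_cache_zc_get_bulk()
(the function body is in the rte_mempool.h diff below). Assume
cache->size = 256, cache->len = 10 and a request of n = 100, i.e.
n <= size but n > len:

  dequeue size + n - len = 256 + 100 - 10 = 346 objects into &cache->objs[10];
  the cache then holds 10 + 346 = 356 = size + n objects;
  the caller is handed &cache->objs[256] and consumes objs[256..355];
  cache->len is left at size = 256.

Because n is bounded by the cache size, the transient occupancy is bounded
by size + n <= 2 * size. Bounding n by the larger flush threshold instead
would let the refill overflow the memory allocated for the cache, which is
exactly what the v5 bugfix prevents.
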
 app/test/test_mempool.c            |  81 +++++++---
 lib/mempool/mempool_trace_points.c |   9 ++
 lib/mempool/rte_mempool.h          | 239 +++++++++++++++++++++++++----
 lib/mempool/rte_mempool_trace_fp.h |  23 +++
 lib/mempool/version.map            |   9 ++
 5 files changed, 311 insertions(+), 50 deletions(-)

-- 
2.25.1

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 8e493eda47..6d29f5bc7b 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -74,7 +74,7 @@ my_obj_init(struct rte_mempool *mp, __rte_unused void *arg,
 
 /* basic tests (done on one core) */
 static int
-test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
+test_mempool_basic(struct rte_mempool *mp, int use_external_cache, int use_zc_api)
 {
 	uint32_t *objnum;
 	void **objtable;
@@ -84,6 +84,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 	unsigned i, j;
 	int offset;
 	struct rte_mempool_cache *cache;
+	void **cache_objs;
 
 	if (use_external_cache) {
 		/* Create a user-owned mempool cache. */
@@ -100,8 +101,13 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 	rte_mempool_dump(stdout, mp);
 
 	printf("get an object\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
-		GOTO_ERR(ret, out);
+	if (use_zc_api) {
+		cache_objs = rte_mempool_cache_zc_get_bulk(cache, mp, 1);
+		obj = *cache_objs;
+	} else {
+		if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
+			GOTO_ERR(ret, out);
+	}
 	rte_mempool_dump(stdout, mp);
 
 	/* tests that improve coverage */
@@ -123,21 +129,41 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 #endif
 
 	printf("put the object back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache);
+	if (use_zc_api) {
+		cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, 1);
+		rte_memcpy(cache_objs, &obj, sizeof(void *));
+	} else {
+		rte_mempool_generic_put(mp, &obj, 1, cache);
+	}
 	rte_mempool_dump(stdout, mp);
 
 	printf("get 2 objects\n");
-	if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
-		GOTO_ERR(ret, out);
-	if (rte_mempool_generic_get(mp, &obj2, 1, cache) < 0) {
-		rte_mempool_generic_put(mp, &obj, 1, cache);
-		GOTO_ERR(ret, out);
+	if (use_zc_api) {
+		cache_objs = rte_mempool_cache_zc_get_bulk(cache, mp, 1);
+		obj = *cache_objs;
+		cache_objs = rte_mempool_cache_zc_get_bulk(cache, mp, 1);
+		obj2 = *cache_objs;
+	} else {
+		if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
+			GOTO_ERR(ret, out);
+		if (rte_mempool_generic_get(mp, &obj2, 1, cache) < 0) {
+			rte_mempool_generic_put(mp, &obj, 1, cache);
+			GOTO_ERR(ret, out);
+		}
 	}
 	rte_mempool_dump(stdout, mp);
 
 	printf("put the objects back\n");
-	rte_mempool_generic_put(mp, &obj, 1, cache);
-	rte_mempool_generic_put(mp, &obj2, 1, cache);
+	if (use_zc_api) {
+		cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, 1);
+		rte_memcpy(cache_objs, &obj, sizeof(void *));
+		cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, 1);
+		rte_memcpy(cache_objs, &obj2, sizeof(void *));
+
+	} else {
+		rte_mempool_generic_put(mp, &obj, 1, cache);
+		rte_mempool_generic_put(mp, &obj2, 1, cache);
+	}
 	rte_mempool_dump(stdout, mp);
 
 	/*
@@ -149,8 +175,13 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 		GOTO_ERR(ret, out);
 
 	for (i = 0; i < MEMPOOL_SIZE; i++) {
-		if (rte_mempool_generic_get(mp, &objtable[i], 1, cache) < 0)
-			break;
+		if (use_zc_api) {
+			cache_objs = rte_mempool_cache_zc_get_bulk(cache, mp, 1);
+			objtable[i] = *cache_objs;
+		} else {
+			if (rte_mempool_generic_get(mp, &objtable[i], 1, cache) < 0)
+				break;
+		}
 	}
 
 	/*
@@ -170,8 +201,12 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 			if (obj_data[j] != 0)
 				ret = -1;
 		}
-
-		rte_mempool_generic_put(mp, &objtable[i], 1, cache);
+		if (use_zc_api) {
+			cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, 1);
+			rte_memcpy(cache_objs, &objtable[i], sizeof(void *));
+		} else {
+			rte_mempool_generic_put(mp, &objtable[i], 1, cache);
+		}
 	}
 
 	free(objtable);
@@ -979,15 +1014,19 @@ test_mempool(void)
 	rte_mempool_list_dump(stdout);
 
 	/* basic tests without cache */
-	if (test_mempool_basic(mp_nocache, 0) < 0)
+	if (test_mempool_basic(mp_nocache, 0, 0) < 0)
+		GOTO_ERR(ret, err);
+
+	/* basic tests with zero-copy API's */
+	if (test_mempool_basic(mp_cache, 0, 1) < 0)
 		GOTO_ERR(ret, err);
 
-	/* basic tests with cache */
-	if (test_mempool_basic(mp_cache, 0) < 0)
+	/* basic tests with user-owned cache and zero-copy API's */
+	if (test_mempool_basic(mp_nocache, 1, 1) < 0)
 		GOTO_ERR(ret, err);
 
 	/* basic tests with user-owned cache */
-	if (test_mempool_basic(mp_nocache, 1) < 0)
+	if (test_mempool_basic(mp_nocache, 1, 0) < 0)
 		GOTO_ERR(ret, err);
 
 	/* more basic tests without cache */
@@ -1008,10 +1047,10 @@ test_mempool(void)
 		GOTO_ERR(ret, err);
 
 	/* test the stack handler */
-	if (test_mempool_basic(mp_stack, 1) < 0)
+	if (test_mempool_basic(mp_stack, 1, 0) < 0)
 		GOTO_ERR(ret, err);
 
-	if (test_mempool_basic(default_pool, 1) < 0)
+	if (test_mempool_basic(default_pool, 1, 0) < 0)
 		GOTO_ERR(ret, err);
 
 	/* test mempool event callbacks */
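The new cases piggyback on the existing mempool autotest, so they run with
the standard test invocation; for example (binary path depends on the
build directory layout, shown here for a default meson build):

  $ echo mempool_autotest | ./build/app/dpdk-test
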
diff --git a/lib/mempool/mempool_trace_points.c b/lib/mempool/mempool_trace_points.c
index 307018094d..8735a07971 100644
--- a/lib/mempool/mempool_trace_points.c
+++ b/lib/mempool/mempool_trace_points.c
@@ -77,3 +77,12 @@ RTE_TRACE_POINT_REGISTER(rte_mempool_trace_ops_free,
 
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_set_ops_byname,
 	lib.mempool.set.ops.byname)
+
+RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_zc_put_bulk,
+	lib.mempool.cache.zc.put.bulk)
+
+RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_zc_put_rewind,
+	lib.mempool.cache.zc.put.rewind)
+
+RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_zc_get_bulk,
+	lib.mempool.cache.zc.get.bulk)
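These trace points can be recorded with the standard EAL trace option of
any DPDK application, and the resulting CTF output inspected with e.g.
babeltrace; an illustrative invocation (application and options are
examples, not taken from this patch):

  $ ./build/app/dpdk-testpmd --trace=lib.mempool.cache.zc.*
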
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 9f530db24b..94f895c329 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -42,6 +42,7 @@
 #include
 #include
 #include
+#include <rte_errno.h>
 #include
 #include
 #include
@@ -1346,6 +1347,199 @@ rte_mempool_cache_flush(struct rte_mempool_cache *cache,
 	cache->len = 0;
 }
 
+
+/**
+ * @internal used by rte_mempool_cache_zc_put_bulk() and rte_mempool_do_generic_put().
+ *
+ * Zero-copy put objects in a mempool cache backed by the specified mempool.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param mp
+ *   A pointer to the mempool.
+ * @param n
+ *   The number of objects to be put in the mempool cache.
+ * @return
+ *   The pointer to where to put the objects in the mempool cache.
+ *   NULL, with rte_errno set to EINVAL, if the request itself is too big
+ *   for the cache, i.e. exceeds the cache flush threshold.
+ */
+static __rte_always_inline void **
+__rte_mempool_cache_zc_put_bulk(struct rte_mempool_cache *cache,
+		struct rte_mempool *mp,
+		unsigned int n)
+{
+	void **cache_objs;
+
+	RTE_ASSERT(cache != NULL);
+	RTE_ASSERT(mp != NULL);
+
+	if (cache->len + n <= cache->flushthresh) {
+		/*
+		 * The objects can be added to the cache without crossing the
+		 * flush threshold.
+		 */
+		cache_objs = &cache->objs[cache->len];
+		cache->len += n;
+	} else if (likely(n <= cache->flushthresh)) {
+		/*
+		 * The request itself fits into the cache.
+		 * But first, the cache must be flushed to the backend, so
+		 * adding the objects does not cross the flush threshold.
+		 */
+		cache_objs = &cache->objs[0];
+		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
+		cache->len = n;
+	} else {
+		/* The request itself is too big for the cache. */
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
+	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
+
+	return cache_objs;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Zero-copy put objects in a mempool cache backed by the specified mempool.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param mp
+ *   A pointer to the mempool.
+ * @param n
+ *   The number of objects to be put in the mempool cache.
+ * @return
+ *   The pointer to where to put the objects in the mempool cache.
+ *   NULL if the request itself is too big for the cache, i.e.
+ *   exceeds the cache flush threshold.
+ */
+__rte_experimental
+static __rte_always_inline void **
+rte_mempool_cache_zc_put_bulk(struct rte_mempool_cache *cache,
+		struct rte_mempool *mp,
+		unsigned int n)
+{
+	RTE_ASSERT(cache != NULL);
+	RTE_ASSERT(mp != NULL);
+
+	rte_mempool_trace_cache_zc_put_bulk(cache, mp, n);
+	return __rte_mempool_cache_zc_put_bulk(cache, mp, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Zero-copy un-put objects in a mempool cache.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param n
+ *   The number of objects not put in the mempool cache after calling
+ *   rte_mempool_cache_zc_put_bulk().
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_mempool_cache_zc_put_rewind(struct rte_mempool_cache *cache,
+		unsigned int n)
+{
+	RTE_ASSERT(cache != NULL);
+	RTE_ASSERT(n <= cache->len);
+
+	rte_mempool_trace_cache_zc_put_rewind(cache, n);
+
+	cache->len -= n;
+
+	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, (int)-n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Zero-copy get objects from a mempool cache backed by the specified mempool.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param mp
+ *   A pointer to the mempool.
+ * @param n
+ *   The number of objects to be made available for extraction from the mempool cache.
+ * @return
+ *   The pointer to the objects in the mempool cache.
+ *   NULL on error; i.e. the cache + the pool does not contain 'n' objects.
+ *   With rte_errno set to the error code of the mempool dequeue function,
+ *   or EINVAL if the request itself is too big for the cache, i.e.
+ *   exceeds the cache flush threshold.
+ */
+__rte_experimental
+static __rte_always_inline void *
+rte_mempool_cache_zc_get_bulk(struct rte_mempool_cache *cache,
+		struct rte_mempool *mp,
+		unsigned int n)
+{
+	unsigned int len, size;
+
+	RTE_ASSERT(cache != NULL);
+	RTE_ASSERT(mp != NULL);
+
+	rte_mempool_trace_cache_zc_get_bulk(cache, mp, n);
+
+	len = cache->len;
+	size = cache->size;
+
+	if (n <= len) {
+		/* The request can be satisfied from the cache as is. */
+		len -= n;
+	} else if (likely(n <= size)) {
+		/*
+		 * The request itself can be satisfied from the cache.
+		 * But first, the cache must be filled from the backend;
+		 * fetch size + requested - len objects.
+		 */
+		int ret;
+
+		ret = rte_mempool_ops_dequeue_bulk(mp, &cache->objs[len], size + n - len);
+		if (unlikely(ret < 0)) {
+			/*
+			 * We are buffer constrained.
+			 * Do not fill the cache, just satisfy the request.
+			 */
+			ret = rte_mempool_ops_dequeue_bulk(mp, &cache->objs[len], n - len);
+			if (unlikely(ret < 0)) {
+				/* Unable to satisfy the request. */
+
+				RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
+				RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
+
+				rte_errno = -ret;
+				return NULL;
+			}
+
+			len = 0;
+		} else {
+			len = size;
+		}
+	} else {
+		/* The request itself is too big for the cache. */
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	cache->len = len;
+
+	RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
+	RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);
+
+	return &cache->objs[len];
+}
+
 /**
  * @internal Put several objects back in the mempool; used internally.
  * @param mp
@@ -1364,32 +1558,25 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 {
 	void **cache_objs;
 
-	/* No cache provided */
-	if (unlikely(cache == NULL))
-		goto driver_enqueue;
+	/* No cache provided? */
+	if (unlikely(cache == NULL)) {
+		/* Increment stats now, adding in mempool always succeeds. */
+		RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
 
-	/* increment stat now, adding in mempool always success */
-	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
-	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
+		goto driver_enqueue;
+	}
 
-	/* The request itself is too big for the cache */
-	if (unlikely(n > cache->flushthresh))
-		goto driver_enqueue_stats_incremented;
+	/* Prepare to add the objects to the cache. */
+	cache_objs = __rte_mempool_cache_zc_put_bulk(cache, mp, n);
 
-	/*
-	 * The cache follows the following algorithm:
-	 *   1. If the objects cannot be added to the cache without crossing
-	 *      the flush threshold, flush the cache to the backend.
-	 *   2. Add the objects to the cache.
-	 */
+	/* The request itself is too big for the cache? */
+	if (unlikely(cache_objs == NULL)) {
+		/* Increment stats now, adding in mempool always succeeds. */
+		RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
+		RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
 
-	if (cache->len + n <= cache->flushthresh) {
-		cache_objs = &cache->objs[cache->len];
-		cache->len += n;
-	} else {
-		cache_objs = &cache->objs[0];
-		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
-		cache->len = n;
+		goto driver_enqueue;
 	}
 
 	/* Add the objects to the cache. */
@@ -1399,13 +1586,7 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 
 driver_enqueue:
 
-	/* increment stat now, adding in mempool always success */
-	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
-	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
-
-driver_enqueue_stats_incremented:
-
-	/* push objects to the backend */
+	/* Push the objects to the backend. */
 	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 }
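The put and rewind functions above are designed to compose: reserve space
for a burst, fill what you can, and rewind the remainder. A minimal
sketch, assuming a hypothetical 'produce' callback that returns how many
of the reserved slots it actually filled (illustrative only, not code from
this series):

  static inline unsigned int
  zc_put_partial_sketch(struct rte_mempool *mp,
  		struct rte_mempool_cache *cache, unsigned int n,
  		unsigned int (*produce)(void **objs, unsigned int max))
  {
  	/* Reserve room in the cache for up to n objects. */
  	void **cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, n);
  	unsigned int filled;

  	if (cache_objs == NULL)
  		return 0;	/* rte_errno == EINVAL: n exceeds the flush threshold */

  	filled = produce(cache_objs, n);

  	/* Un-put the slots that were reserved but not filled. */
  	rte_mempool_cache_zc_put_rewind(cache, n - filled);

  	return filled;
  }
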
diff --git a/lib/mempool/rte_mempool_trace_fp.h b/lib/mempool/rte_mempool_trace_fp.h
index ed060e887c..14666457f7 100644
--- a/lib/mempool/rte_mempool_trace_fp.h
+++ b/lib/mempool/rte_mempool_trace_fp.h
@@ -109,6 +109,29 @@ RTE_TRACE_POINT_FP(
 	rte_trace_point_emit_ptr(mempool);
 )
 
+RTE_TRACE_POINT_FP(
+	rte_mempool_trace_cache_zc_put_bulk,
+	RTE_TRACE_POINT_ARGS(void *cache, void *mempool, uint32_t nb_objs),
+	rte_trace_point_emit_ptr(cache);
+	rte_trace_point_emit_ptr(mempool);
+	rte_trace_point_emit_u32(nb_objs);
+)
+
+RTE_TRACE_POINT_FP(
+	rte_mempool_trace_cache_zc_put_rewind,
+	RTE_TRACE_POINT_ARGS(void *cache, uint32_t nb_objs),
+	rte_trace_point_emit_ptr(cache);
+	rte_trace_point_emit_u32(nb_objs);
+)
+
+RTE_TRACE_POINT_FP(
+	rte_mempool_trace_cache_zc_get_bulk,
+	RTE_TRACE_POINT_ARGS(void *cache, void *mempool, uint32_t nb_objs),
+	rte_trace_point_emit_ptr(cache);
+	rte_trace_point_emit_ptr(mempool);
+	rte_trace_point_emit_u32(nb_objs);
+)
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/mempool/version.map b/lib/mempool/version.map
index dff2d1cb55..06cb83ad9d 100644
--- a/lib/mempool/version.map
+++ b/lib/mempool/version.map
@@ -49,6 +49,15 @@ EXPERIMENTAL {
 	__rte_mempool_trace_get_contig_blocks;
 	__rte_mempool_trace_default_cache;
 	__rte_mempool_trace_cache_flush;
+	__rte_mempool_trace_ops_populate;
+	__rte_mempool_trace_ops_alloc;
+	__rte_mempool_trace_ops_free;
+	__rte_mempool_trace_set_ops_byname;
+
+	# added in 23.03
+	__rte_mempool_trace_cache_zc_put_bulk;
+	__rte_mempool_trace_cache_zc_put_rewind;
+	__rte_mempool_trace_cache_zc_get_bulk;
 };
 
 INTERNAL {
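For symmetry with the put example earlier, a get-side sketch; 'use_obj' is
a hypothetical consumer, and the loop runs before any further operation on
the same cache, since the returned pointers refer to the cache's internal
array (illustrative only, not code from this series):

  extern void use_obj(void *obj);

  static inline int
  zc_get_sketch(struct rte_mempool *mp, struct rte_mempool_cache *cache,
  		unsigned int n)
  {
  	void **objs = rte_mempool_cache_zc_get_bulk(cache, mp, n);

  	if (objs == NULL)
  		return -rte_errno;	/* EINVAL, or the backend dequeue error */

  	for (unsigned int i = 0; i < n; i++)
  		use_obj(objs[i]);

  	return 0;
  }
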
From patchwork Fri Feb 24 18:10:59 2023
X-Patchwork-Submitter: Kamalakshitha Aligeri
X-Patchwork-Id: 124545
X-Patchwork-Delegate: thomas@monjalon.net
From: Kamalakshitha Aligeri
To: Yuying.Zhang@intel.com, beilei.xing@intel.com, olivier.matz@6wind.com,
 andrew.rybchenko@oktetlabs.ru, bruce.richardson@intel.com,
 mb@smartsharesystems.com, konstantin.ananyev@huawei.com,
 Honnappa.Nagarahalli@arm.com, ruifeng.wang@arm.com, feifei.wang2@arm.com
Cc: dev@dpdk.org, nd@arm.com, Kamalakshitha Aligeri
Subject: [PATCH v10 2/2] net/i40e: replace put function
Date: Fri, 24 Feb 2023 18:10:59 +0000
Message-Id: <20230224181059.338206-3-kamalakshitha.aligeri@arm.com>
In-Reply-To: <20230224181059.338206-1-kamalakshitha.aligeri@arm.com>
References: <20230224181059.338206-1-kamalakshitha.aligeri@arm.com>
List-Id: DPDK patches and discussions

Integrated the zero-copy put API of the mempool cache into the i40e PMD.
On an Ampere Altra server, single-core l3fwd performance improves by 5%
with the new API.

Signed-off-by: Kamalakshitha Aligeri
Reviewed-by: Ruifeng Wang
Reviewed-by: Feifei Wang
Reviewed-by: Morten Brørup

---
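The measurement can presumably be reproduced with the l3fwd sample
application pinned to a single forwarding core; an illustrative invocation
(port mask, core and memory-channel numbers are examples, not taken from
this patch):

  $ ./build/examples/dpdk-l3fwd -l 1 -n 4 -- -p 0x1 --config="(0,0,1)"
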
v3:
* Fixed the way mbufs are accessed from txep. (Morten Brørup)
v2:
* Fixed the code for n > RTE_MEMPOOL_CACHE_MAX_SIZE. (Morten Brørup)
v1:
1. Integrated the zc_put API in the i40e PMD.
2. Added mempool test cases with the zero-copy API's.

 .mailmap                                |  1 +
 drivers/net/i40e/i40e_rxtx_vec_common.h | 27 ++++++++++++++++++++-----
 2 files changed, 23 insertions(+), 5 deletions(-)

-- 
2.25.1

diff --git a/.mailmap b/.mailmap
index a9f4f28fba..2581d0efe7 100644
--- a/.mailmap
+++ b/.mailmap
@@ -677,6 +677,7 @@ Kai Ji
 Kaiwen Deng
 Kalesh AP
 Kamalakannan R
+Kamalakshitha Aligeri
 Kamil Bednarczyk
 Kamil Chalupnik
 Kamil Rytarowski
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index fe1a6ec75e..35cdb31b2e 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -95,18 +95,35 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 
 	n = txq->tx_rs_thresh;
 
-	 /* first buffer to free from S/W ring is at index
-	  * tx_next_dd - (tx_rs_thresh-1)
-	  */
+	/* first buffer to free from S/W ring is at index
+	 * tx_next_dd - (tx_rs_thresh-1)
+	 */
 	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
 
 	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
+		struct rte_mempool *mp = txep[0].mbuf->pool;
+		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp, rte_lcore_id());
+		void **cache_objs;
+
+		if (unlikely(!cache))
+			goto fallback;
+
+		cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, n);
+		if (unlikely(!cache_objs))
+			goto fallback;
+
 		for (i = 0; i < n; i++) {
-			free[i] = txep[i].mbuf;
+			cache_objs[i] = txep[i].mbuf;
 			/* no need to reset txep[i].mbuf in vector path */
 		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
 		goto done;
+
+fallback:
+		for (i = 0; i < n; i++)
+			free[i] = txep[i].mbuf;
+		rte_mempool_generic_put(mp, (void **)free, n, cache);
+		goto done;
+
 	}
 
 	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);