From patchwork Fri Jul 21 16:28:35 2023
X-Patchwork-Submitter: Dharmik Thakkar
X-Patchwork-Id: 129690
X-Patchwork-Delegate: thomas@monjalon.net
From: Dharmik Thakkar
To: Olivier Matz, Andrew Rybchenko
Cc: dev@dpdk.org, nd@arm.com, Morten Brørup, Kamalakshitha Aligeri,
 Dharmik Thakkar, Ruifeng Wang, Konstantin Ananyev, Chengwen Feng
Subject: [PATCH v12 1/2] mempool cache: add zero-copy get and put functions
Date: Fri, 21 Jul 2023 11:28:35 -0500
Message-Id: <20230721162836.1208772-1-dharmikjayesh.thakkar@arm.com>
In-Reply-To: <20230705180233.880203-2-kamalakshitha.aligeri@arm.com>
References: <20230705180233.880203-2-kamalakshitha.aligeri@arm.com>

From: Morten Brørup

Zero-copy access to mempool caches is beneficial for PMD performance.
Furthermore, a zero-copy mempool API is considered a precondition for
fixing a certain category of bugs present in some PMDs: for performance
reasons, some PMDs bypassed the mempool API in order to achieve zero-copy
access to the mempool cache. This can only be fixed in those PMDs without
a performance regression if the mempool library offers zero-copy access
APIs, so the PMDs can use the proper mempool API instead of copy-pasting
code from the mempool library. Furthermore, the copy-pasted code in those
PMDs has not been kept up to date with the improvements of the mempool
library, so when they bypass the mempool API, mempool tracing is missing
and mempool statistics are not updated.

Bugzilla ID: 1052

Signed-off-by: Morten Brørup
Signed-off-by: Kamalakshitha Aligeri
Signed-off-by: Dharmik Thakkar
Reviewed-by: Ruifeng Wang
Acked-by: Konstantin Ananyev
Acked-by: Chengwen Feng

---
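A minimal usage sketch of the zero-copy put path is included here for
context; it is not part of the patch. The pool pointer 'mp', the object
array 'objs' and the count 'n' are hypothetical, and the helper name is
invented for illustration; the call sequence simply follows the function
descriptions introduced below.

#include <rte_lcore.h>
#include <rte_memcpy.h>
#include <rte_mempool.h>

/* Hypothetical helper: free 'n' objects, preferring the zero-copy path. */
static inline void
zc_put_sketch(struct rte_mempool *mp, void * const *objs, unsigned int n)
{
    struct rte_mempool_cache *cache =
            rte_mempool_default_cache(mp, rte_lcore_id());
    void **cache_objs;

    if (cache != NULL &&
            (cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, n)) != NULL) {
        /* 'n' slots are reserved in the cache; copy the pointers in. */
        rte_memcpy(cache_objs, objs, n * sizeof(void *));
    } else {
        /* No cache, or the request exceeds the flush threshold. */
        rte_mempool_put_bulk(mp, objs, n);
    }
}

If fewer than 'n' pointers end up being written, the new
rte_mempool_cache_zc_put_rewind() can be called afterwards to release the
unused slots.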
v12:
* Fix CI compilation issues on [patch 2/2] in the series.
v11:
* Changed patch description and version to 23.07.
v10:
* Added mempool test cases with the zero-copy APIs.
v9:
* Also set rte_errno in the zero-copy put function, if returning NULL.
  (Honnappa)
* Revert the v3 comparison to prevent overflow if n is really huge and len
  is non-zero. (Olivier)
v8:
* Actually include the rte_errno header file.
  Note to self: The changes only take effect on the disk after the file in
  the text editor has been saved.
v7:
* Fix typo in function description. (checkpatch)
* Zero-copy functions may set rte_errno; include the rte_errno header file.
  (ci/loongarch-compilation)
v6:
* Improve description of the 'n' parameter to the zero-copy get function.
  (Konstantin, Bruce)
* The caches used for zero-copy may not be user-owned, so remove this word
  from the function descriptions. (Kamalakshitha)
v5:
* Bugfix: Compare the zero-copy get request to the cache size instead of
  the flush threshold; otherwise the refill could overflow the memory
  allocated for the cache. (Andrew)
* Split the zero-copy put function into an internal function doing the
  work, and a public function with trace.
* Avoid code duplication by rewriting rte_mempool_do_generic_put() to use
  the internal zero-copy put function. (Andrew)
* Corrected the return type of rte_mempool_cache_zc_put_bulk() from void *
  to void **; it returns a pointer to an array of objects.
* Fix coding style: Add missing curly brackets. (Andrew)
v4:
* Fix checkpatch warnings.
v3:
* Bugfix: Respect the cache size; compare to the flush threshold instead of
  RTE_MEMPOOL_CACHE_MAX_SIZE.
* Added a 'rewind' function for incomplete 'put' operations. (Konstantin)
* Replace RTE_ASSERTs with runtime checks of the request size. Instead of
  failing, return NULL if the request is too big. (Konstantin)
* Modified the comparison to prevent overflow if n is really huge and len
  is non-zero. (Andrew)
* Updated the comments in the code.
v2:
* Fix checkpatch warnings.
* Fix missing registration of trace points.
* The functions are inline, so they don't go into the map file.
v1 changes from the RFC:
* Removed run-time parameter checks. (Honnappa)
  This is a hot fast path function; correct application behaviour is
  required, i.e. function parameters must be valid.
* Added RTE_ASSERT for the parameters instead.
  Code for this is only generated if built with RTE_ENABLE_ASSERT.
* Removed the fallback when the 'cache' parameter is not set. (Honnappa)
* Chose the simple get function; i.e. do not move the existing objects in
  the cache to the top of the new stack, just leave them at the bottom.
* Renamed the functions. Other suggestions are welcome, of course. ;-)
* Updated the function descriptions.
* Added the functions to trace_fp and version.map.
---
 app/test/test_mempool.c            |  81 +++++++---
 lib/mempool/mempool_trace_points.c |   9 ++
 lib/mempool/rte_mempool.h          | 239 +++++++++++++++++++++++++----
 lib/mempool/rte_mempool_trace_fp.h |  23 +++
 lib/mempool/version.map            |   9 ++
 5 files changed, 311 insertions(+), 50 deletions(-)
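As a companion sketch (again not part of the patch; 'mp', 'objs' and 'n'
are hypothetical, and the helper name is invented), the zero-copy get path
with a fallback to the generic API could look like this:

#include <rte_lcore.h>
#include <rte_memcpy.h>
#include <rte_mempool.h>

/* Hypothetical helper: allocate 'n' objects, preferring the zero-copy path. */
static inline int
zc_get_sketch(struct rte_mempool *mp, void **objs, unsigned int n)
{
    struct rte_mempool_cache *cache =
            rte_mempool_default_cache(mp, rte_lcore_id());
    void **cache_objs;

    if (cache != NULL) {
        cache_objs = rte_mempool_cache_zc_get_bulk(cache, mp, n);
        if (cache_objs != NULL) {
            /* Copy out the 'n' object pointers handed back by the cache. */
            rte_memcpy(objs, cache_objs, n * sizeof(void *));
            return 0;
        }
    }
    /* No cache, request too big, or backend exhausted: generic path. */
    return rte_mempool_generic_get(mp, objs, n, cache);
}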
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 8e493eda47f4..6d29f5bc7b07 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -74,7 +74,7 @@ my_obj_init(struct rte_mempool *mp, __rte_unused void *arg,
 
 /* basic tests (done on one core) */
 static int
-test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
+test_mempool_basic(struct rte_mempool *mp, int use_external_cache, int use_zc_api)
 {
     uint32_t *objnum;
     void **objtable;
@@ -84,6 +84,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
     unsigned i, j;
     int offset;
     struct rte_mempool_cache *cache;
+    void **cache_objs;
 
     if (use_external_cache) {
         /* Create a user-owned mempool cache. */
@@ -100,8 +101,13 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
     rte_mempool_dump(stdout, mp);
 
     printf("get an object\n");
-    if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
-        GOTO_ERR(ret, out);
+    if (use_zc_api) {
+        cache_objs = rte_mempool_cache_zc_get_bulk(cache, mp, 1);
+        obj = *cache_objs;
+    } else {
+        if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
+            GOTO_ERR(ret, out);
+    }
     rte_mempool_dump(stdout, mp);
 
     /* tests that improve coverage */
@@ -123,21 +129,41 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
 #endif
 
     printf("put the object back\n");
-    rte_mempool_generic_put(mp, &obj, 1, cache);
+    if (use_zc_api) {
+        cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, 1);
+        rte_memcpy(cache_objs, &obj, sizeof(void *));
+    } else {
+        rte_mempool_generic_put(mp, &obj, 1, cache);
+    }
     rte_mempool_dump(stdout, mp);
 
     printf("get 2 objects\n");
-    if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
-        GOTO_ERR(ret, out);
-    if (rte_mempool_generic_get(mp, &obj2, 1, cache) < 0) {
-        rte_mempool_generic_put(mp, &obj, 1, cache);
-        GOTO_ERR(ret, out);
+    if (use_zc_api) {
+        cache_objs = rte_mempool_cache_zc_get_bulk(cache, mp, 1);
+        obj = *cache_objs;
+        cache_objs = rte_mempool_cache_zc_get_bulk(cache, mp, 1);
+        obj2 = *cache_objs;
+    } else {
+        if (rte_mempool_generic_get(mp, &obj, 1, cache) < 0)
+            GOTO_ERR(ret, out);
+        if (rte_mempool_generic_get(mp, &obj2, 1, cache) < 0) {
+            rte_mempool_generic_put(mp, &obj, 1, cache);
+            GOTO_ERR(ret, out);
+        }
     }
     rte_mempool_dump(stdout, mp);
 
     printf("put the objects back\n");
-    rte_mempool_generic_put(mp, &obj, 1, cache);
-    rte_mempool_generic_put(mp, &obj2, 1, cache);
+    if (use_zc_api) {
+        cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, 1);
+        rte_memcpy(cache_objs, &obj, sizeof(void *));
+        cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, 1);
+        rte_memcpy(cache_objs, &obj2, sizeof(void *));
+
+    } else {
+        rte_mempool_generic_put(mp, &obj, 1, cache);
+        rte_mempool_generic_put(mp, &obj2, 1, cache);
+    }
     rte_mempool_dump(stdout, mp);
 
     /*
@@ -149,8 +175,13 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
         GOTO_ERR(ret, out);
 
     for (i = 0; i < MEMPOOL_SIZE; i++) {
-        if (rte_mempool_generic_get(mp, &objtable[i], 1, cache) < 0)
-            break;
+        if (use_zc_api) {
+            cache_objs = rte_mempool_cache_zc_get_bulk(cache, mp, 1);
+            objtable[i] = *cache_objs;
+        } else {
+            if (rte_mempool_generic_get(mp, &objtable[i], 1, cache) < 0)
+                break;
+        }
     }
 
     /*
@@ -170,8 +201,12 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache)
             if (obj_data[j] != 0)
                 ret = -1;
         }
-
-        rte_mempool_generic_put(mp, &objtable[i], 1, cache);
+        if (use_zc_api) {
+            cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, 1);
+            rte_memcpy(cache_objs, &objtable[i], sizeof(void *));
+        } else {
+            rte_mempool_generic_put(mp, &objtable[i], 1, cache);
+        }
     }
 
     free(objtable);
@@ -979,15 +1014,19 @@ test_mempool(void)
     rte_mempool_list_dump(stdout);
 
     /* basic tests without cache */
-    if (test_mempool_basic(mp_nocache, 0) < 0)
+    if (test_mempool_basic(mp_nocache, 0, 0) < 0)
+        GOTO_ERR(ret, err);
+
+    /* basic tests with zero-copy API's */
+    if (test_mempool_basic(mp_cache, 0, 1) < 0)
         GOTO_ERR(ret, err);
 
-    /* basic tests with cache */
-    if (test_mempool_basic(mp_cache, 0) < 0)
+    /* basic tests with user-owned cache and zero-copy API's */
+    if (test_mempool_basic(mp_nocache, 1, 1) < 0)
         GOTO_ERR(ret, err);
 
     /* basic tests with user-owned cache */
-    if (test_mempool_basic(mp_nocache, 1) < 0)
+    if (test_mempool_basic(mp_nocache, 1, 0) < 0)
         GOTO_ERR(ret, err);
 
     /* more basic tests without cache */
@@ -1008,10 +1047,10 @@ test_mempool(void)
         GOTO_ERR(ret, err);
 
     /* test the stack handler */
-    if (test_mempool_basic(mp_stack, 1) < 0)
+    if (test_mempool_basic(mp_stack, 1, 0) < 0)
         GOTO_ERR(ret, err);
 
-    if (test_mempool_basic(default_pool, 1) < 0)
+    if (test_mempool_basic(default_pool, 1, 0) < 0)
         GOTO_ERR(ret, err);
 
     /* test mempool event callbacks */
diff --git a/lib/mempool/mempool_trace_points.c b/lib/mempool/mempool_trace_points.c
index 307018094d73..8735a0797150 100644
--- a/lib/mempool/mempool_trace_points.c
+++ b/lib/mempool/mempool_trace_points.c
@@ -77,3 +77,12 @@ RTE_TRACE_POINT_REGISTER(rte_mempool_trace_ops_free,
 
 RTE_TRACE_POINT_REGISTER(rte_mempool_trace_set_ops_byname,
     lib.mempool.set.ops.byname)
+
+RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_zc_put_bulk,
+    lib.mempool.cache.zc.put.bulk)
+
+RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_zc_put_rewind,
+    lib.mempool.cache.zc.put.rewind)
+
+RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_zc_get_bulk,
+    lib.mempool.cache.zc.get.bulk)
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 160975a7e708..2a7c71ecc682 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -42,6 +42,7 @@
 #include
 #include
 #include
+#include <rte_errno.h>
 #include
 #include
 #include
@@ -1346,6 +1347,199 @@ rte_mempool_cache_flush(struct rte_mempool_cache *cache,
     cache->len = 0;
 }
 
+
+/**
+ * @internal used by rte_mempool_cache_zc_put_bulk() and rte_mempool_do_generic_put().
+ *
+ * Zero-copy put objects in a mempool cache backed by the specified mempool.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param mp
+ *   A pointer to the mempool.
+ * @param n
+ *   The number of objects to be put in the mempool cache.
+ * @return
+ *   The pointer to where to put the objects in the mempool cache.
+ *   NULL, with rte_errno set to EINVAL, if the request itself is too big
+ *   for the cache, i.e. exceeds the cache flush threshold.
+ */
+static __rte_always_inline void **
+__rte_mempool_cache_zc_put_bulk(struct rte_mempool_cache *cache,
+        struct rte_mempool *mp,
+        unsigned int n)
+{
+    void **cache_objs;
+
+    RTE_ASSERT(cache != NULL);
+    RTE_ASSERT(mp != NULL);
+
+    if (cache->len + n <= cache->flushthresh) {
+        /*
+         * The objects can be added to the cache without crossing the
+         * flush threshold.
+         */
+        cache_objs = &cache->objs[cache->len];
+        cache->len += n;
+    } else if (likely(n <= cache->flushthresh)) {
+        /*
+         * The request itself fits into the cache.
+         * But first, the cache must be flushed to the backend, so
+         * adding the objects does not cross the flush threshold.
+         */
+        cache_objs = &cache->objs[0];
+        rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
+        cache->len = n;
+    } else {
+        /* The request itself is too big for the cache. */
+        rte_errno = EINVAL;
+        return NULL;
+    }
+
+    RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
+    RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
+
+    return cache_objs;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Zero-copy put objects in a mempool cache backed by the specified mempool.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param mp
+ *   A pointer to the mempool.
+ * @param n
+ *   The number of objects to be put in the mempool cache.
+ * @return
+ *   The pointer to where to put the objects in the mempool cache.
+ *   NULL if the request itself is too big for the cache, i.e.
+ *   exceeds the cache flush threshold.
+ */
+__rte_experimental
+static __rte_always_inline void **
+rte_mempool_cache_zc_put_bulk(struct rte_mempool_cache *cache,
+        struct rte_mempool *mp,
+        unsigned int n)
+{
+    RTE_ASSERT(cache != NULL);
+    RTE_ASSERT(mp != NULL);
+
+    rte_mempool_trace_cache_zc_put_bulk(cache, mp, n);
+    return __rte_mempool_cache_zc_put_bulk(cache, mp, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Zero-copy un-put objects in a mempool cache.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param n
+ *   The number of objects not put in the mempool cache after calling
+ *   rte_mempool_cache_zc_put_bulk().
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_mempool_cache_zc_put_rewind(struct rte_mempool_cache *cache,
+        unsigned int n)
+{
+    RTE_ASSERT(cache != NULL);
+    RTE_ASSERT(n <= cache->len);
+
+    rte_mempool_trace_cache_zc_put_rewind(cache, n);
+
+    cache->len -= n;
+
+    RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, (int)-n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Zero-copy get objects from a mempool cache backed by the specified mempool.
+ *
+ * @param cache
+ *   A pointer to the mempool cache.
+ * @param mp
+ *   A pointer to the mempool.
+ * @param n
+ *   The number of objects to be made available for extraction from the mempool cache.
+ * @return
+ *   The pointer to the objects in the mempool cache.
+ *   NULL on error; i.e. the cache + the pool does not contain 'n' objects.
+ *   With rte_errno set to the error code of the mempool dequeue function,
+ *   or EINVAL if the request itself is too big for the cache, i.e.
+ *   exceeds the cache flush threshold.
+ */
+__rte_experimental
+static __rte_always_inline void *
+rte_mempool_cache_zc_get_bulk(struct rte_mempool_cache *cache,
+        struct rte_mempool *mp,
+        unsigned int n)
+{
+    unsigned int len, size;
+
+    RTE_ASSERT(cache != NULL);
+    RTE_ASSERT(mp != NULL);
+
+    rte_mempool_trace_cache_zc_get_bulk(cache, mp, n);
+
+    len = cache->len;
+    size = cache->size;
+
+    if (n <= len) {
+        /* The request can be satisfied from the cache as is. */
+        len -= n;
+    } else if (likely(n <= size)) {
+        /*
+         * The request itself can be satisfied from the cache.
+         * But first, the cache must be filled from the backend;
+         * fetch size + requested - len objects.
+         */
+        int ret;
+
+        ret = rte_mempool_ops_dequeue_bulk(mp, &cache->objs[len], size + n - len);
+        if (unlikely(ret < 0)) {
+            /*
+             * We are buffer constrained.
+             * Do not fill the cache, just satisfy the request.
+             */
+            ret = rte_mempool_ops_dequeue_bulk(mp, &cache->objs[len], n - len);
+            if (unlikely(ret < 0)) {
+                /* Unable to satisfy the request. */
+
+                RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
+                RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
+
+                rte_errno = -ret;
+                return NULL;
+            }
+
+            len = 0;
+        } else {
+            len = size;
+        }
+    } else {
+        /* The request itself is too big for the cache. */
+        rte_errno = EINVAL;
+        return NULL;
+    }
+
+    cache->len = len;
+
+    RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);
+    RTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);
+
+    return &cache->objs[len];
+}
+
 /**
  * @internal Put several objects back in the mempool; used internally.
  * @param mp
@@ -1364,32 +1558,25 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 {
     void **cache_objs;
 
-    /* No cache provided */
-    if (unlikely(cache == NULL))
-        goto driver_enqueue;
+    /* No cache provided? */
+    if (unlikely(cache == NULL)) {
+        /* Increment stats now, adding in mempool always succeeds. */
+        RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
+        RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
 
-    /* increment stat now, adding in mempool always success */
-    RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
-    RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
+        goto driver_enqueue;
+    }
 
-    /* The request itself is too big for the cache */
-    if (unlikely(n > cache->flushthresh))
-        goto driver_enqueue_stats_incremented;
+    /* Prepare to add the objects to the cache. */
+    cache_objs = __rte_mempool_cache_zc_put_bulk(cache, mp, n);
 
-    /*
-     * The cache follows the following algorithm:
-     *   1. If the objects cannot be added to the cache without crossing
-     *      the flush threshold, flush the cache to the backend.
-     *   2. Add the objects to the cache.
-     */
+    /* The request itself is too big for the cache? */
+    if (unlikely(cache_objs == NULL)) {
+        /* Increment stats now, adding in mempool always succeeds. */
+        RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
+        RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
 
-    if (cache->len + n <= cache->flushthresh) {
-        cache_objs = &cache->objs[cache->len];
-        cache->len += n;
-    } else {
-        cache_objs = &cache->objs[0];
-        rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
-        cache->len = n;
+        goto driver_enqueue;
     }
 
     /* Add the objects to the cache. */
@@ -1399,13 +1586,7 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 
 driver_enqueue:
 
-    /* increment stat now, adding in mempool always success */
-    RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
-    RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
-
-driver_enqueue_stats_incremented:
-
-    /* push objects to the backend */
+    /* Push the objects to the backend. */
     rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 }
 
diff --git a/lib/mempool/rte_mempool_trace_fp.h b/lib/mempool/rte_mempool_trace_fp.h
index ed060e887ce3..14666457f75a 100644
--- a/lib/mempool/rte_mempool_trace_fp.h
+++ b/lib/mempool/rte_mempool_trace_fp.h
@@ -109,6 +109,29 @@ RTE_TRACE_POINT_FP(
     rte_trace_point_emit_ptr(mempool);
 )
 
+RTE_TRACE_POINT_FP(
+    rte_mempool_trace_cache_zc_put_bulk,
+    RTE_TRACE_POINT_ARGS(void *cache, void *mempool, uint32_t nb_objs),
+    rte_trace_point_emit_ptr(cache);
+    rte_trace_point_emit_ptr(mempool);
+    rte_trace_point_emit_u32(nb_objs);
+)
+
+RTE_TRACE_POINT_FP(
+    rte_mempool_trace_cache_zc_put_rewind,
+    RTE_TRACE_POINT_ARGS(void *cache, uint32_t nb_objs),
+    rte_trace_point_emit_ptr(cache);
+    rte_trace_point_emit_u32(nb_objs);
+)
+
+RTE_TRACE_POINT_FP(
+    rte_mempool_trace_cache_zc_get_bulk,
+    RTE_TRACE_POINT_ARGS(void *cache, void *mempool, uint32_t nb_objs),
+    rte_trace_point_emit_ptr(cache);
+    rte_trace_point_emit_ptr(mempool);
+    rte_trace_point_emit_u32(nb_objs);
+)
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/mempool/version.map b/lib/mempool/version.map
index dff2d1cb5555..093d3f0a016b 100644
--- a/lib/mempool/version.map
+++ b/lib/mempool/version.map
@@ -49,6 +49,15 @@ EXPERIMENTAL {
     __rte_mempool_trace_get_contig_blocks;
     __rte_mempool_trace_default_cache;
     __rte_mempool_trace_cache_flush;
+    __rte_mempool_trace_ops_populate;
+    __rte_mempool_trace_ops_alloc;
+    __rte_mempool_trace_ops_free;
+    __rte_mempool_trace_set_ops_byname;
+
+    # added in 23.07
+    __rte_mempool_trace_cache_zc_put_bulk;
+    __rte_mempool_trace_cache_zc_put_rewind;
+    __rte_mempool_trace_cache_zc_get_bulk;
 };
 
 INTERNAL {
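Note: nothing in this series exercises rte_mempool_cache_zc_put_rewind().
For completeness, the following hypothetical fragment (illustrative names,
including the invented 'fill_cb' callback; not part of the patch) shows the
intended reserve-then-rewind pattern for a put that completes only
partially:

#include <rte_mempool.h>

/*
 * Hypothetical example: reserve 'max' cache slots, let a callback fill some
 * of them, and rewind the reserved-but-unused remainder.
 */
static inline unsigned int
zc_put_partial_sketch(struct rte_mempool_cache *cache, struct rte_mempool *mp,
        unsigned int max, unsigned int (*fill_cb)(void **objs, unsigned int max))
{
    void **cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, max);
    unsigned int done;

    if (cache_objs == NULL)
        return 0;   /* Request larger than the cache flush threshold. */

    done = fill_cb(cache_objs, max);    /* Writes 'done' <= 'max' pointers. */
    /* Give back the reserved-but-unused slots. */
    rte_mempool_cache_zc_put_rewind(cache, max - done);

    return done;
}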
From patchwork Fri Jul 21 16:28:36 2023
X-Patchwork-Submitter: Dharmik Thakkar
X-Patchwork-Id: 129691
X-Patchwork-Delegate: thomas@monjalon.net
From: Dharmik Thakkar
To: Yuying Zhang, Beilei Xing
Cc: dev@dpdk.org, nd@arm.com, Kamalakshitha Aligeri, Dharmik Thakkar,
 Ruifeng Wang, Feifei Wang, Morten Brørup, Konstantin Ananyev
Subject: [PATCH v12 2/2] net/i40e: replace put function
Date: Fri, 21 Jul 2023 11:28:36 -0500
Message-Id: <20230721162836.1208772-2-dharmikjayesh.thakkar@arm.com>
In-Reply-To: <20230721162836.1208772-1-dharmikjayesh.thakkar@arm.com>
References: <20230705180233.880203-2-kamalakshitha.aligeri@arm.com>
 <20230721162836.1208772-1-dharmikjayesh.thakkar@arm.com>

From: Kamalakshitha Aligeri

Integrated the mempool cache zero-copy put API in the i40e PMD.
On an Ampere Altra server, single-core l3fwd performance improves by 5%
with the new API.

Signed-off-by: Kamalakshitha Aligeri
Signed-off-by: Dharmik Thakkar
Reviewed-by: Ruifeng Wang
Reviewed-by: Feifei Wang
Acked-by: Morten Brørup
Acked-by: Konstantin Ananyev

---
 drivers/net/i40e/i40e_rxtx_vec_common.h | 29 ++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index fe1a6ec75ef5..bb746fc36823 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -85,7 +85,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
     uint32_t n;
     uint32_t i;
     int nb_free = 0;
-    struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
+    struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ] = {0};
 
     /* check DD bits on threshold descriptor */
     if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
@@ -95,18 +95,35 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 
     n = txq->tx_rs_thresh;
 
-     /* first buffer to free from S/W ring is at index
-      * tx_next_dd - (tx_rs_thresh-1)
-      */
+    /* first buffer to free from S/W ring is at index
+     * tx_next_dd - (tx_rs_thresh-1)
+     */
     txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
 
     if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
+        struct rte_mempool *mp = txep[0].mbuf->pool;
+        struct rte_mempool_cache *cache = rte_mempool_default_cache(mp, rte_lcore_id());
+        void **cache_objs;
+
+        if (unlikely(!cache))
+            goto fallback;
+
+        cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, n);
+        if (unlikely(!cache_objs))
+            goto fallback;
+
         for (i = 0; i < n; i++) {
-            free[i] = txep[i].mbuf;
+            cache_objs[i] = txep[i].mbuf;
             /* no need to reset txep[i].mbuf in vector path */
         }
-        rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
         goto done;
+
+fallback:
+        for (i = 0; i < n; i++)
+            free[i] = txep[i].mbuf;
+        rte_mempool_generic_put(mp, (void **)free, n, cache);
+        goto done;
+
     }
 
     m = rte_pktmbuf_prefree_seg(txep[0].mbuf);