From patchwork Tue Nov 5 15:37:00 2019
X-Patchwork-Id: 62478
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Tue, 5 Nov 2019 16:37:00 +0100
Message-Id: <20191105153707.14645-2-olivier.matz@6wind.com>
In-Reply-To: <20191105153707.14645-1-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH v4 1/7] mempool: allow unaligned addr/len in populate virt

rte_mempool_populate_virt() currently requires that both addr and length
are page-aligned.

Remove this unneeded constraint, which can be annoying with big hugepages
(e.g. 1 GB).
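As an illustration (not part of the patch), a caller can now hand a buffer
whose start address and length are not page-aligned to
rte_mempool_populate_virt(). A minimal sketch, assuming an initialized EAL,
a created but not yet populated mempool "mp", and hypothetical sizes:

#include <errno.h>
#include <rte_malloc.h>
#include <rte_mempool.h>

/* Populate "mp" from a deliberately unaligned buffer; before this
 * patch, rte_mempool_populate_virt() returned -EINVAL for it.
 */
static int
populate_unaligned(struct rte_mempool *mp, size_t pg_sz)
{
	size_t len = 3 * pg_sz + 100;          /* not a page multiple */
	char *buf = rte_malloc(NULL, len, 64); /* 64B-aligned, not pg_sz */

	if (buf == NULL)
		return -ENOMEM;
	/* no free callback in this sketch; a real caller would pass one
	 * so the buffer is released when the mempool is freed
	 */
	return rte_mempool_populate_virt(mp, buf, len, pg_sz, NULL, NULL);
}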
Signed-off-by: Olivier Matz
Reviewed-by: Andrew Rybchenko
Acked-by: Nipun Gupta
---
 lib/librte_mempool/rte_mempool.c | 23 +++++++++++------------
 lib/librte_mempool/rte_mempool.h |  3 +--
 2 files changed, 12 insertions(+), 14 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 0f29e8712..88e49c751 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -368,17 +368,11 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	size_t off, phys_len;
 	int ret, cnt = 0;
 
-	/* address and len must be page-aligned */
-	if (RTE_PTR_ALIGN_CEIL(addr, pg_sz) != addr)
-		return -EINVAL;
-	if (RTE_ALIGN_CEIL(len, pg_sz) != len)
-		return -EINVAL;
-
 	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
 		return rte_mempool_populate_iova(mp, addr, RTE_BAD_IOVA,
 			len, free_cb, opaque);
 
-	for (off = 0; off + pg_sz <= len &&
+	for (off = 0; off < len &&
 		     mp->populated_size < mp->size; off += phys_len) {
 
 		iova = rte_mem_virt2iova(addr + off);
@@ -389,12 +383,18 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 		}
 
 		/* populate with the largest group of contiguous pages */
-		for (phys_len = pg_sz; off + phys_len < len; phys_len += pg_sz) {
+		for (phys_len = RTE_MIN(
+			(size_t)(RTE_PTR_ALIGN_CEIL(addr + off + 1, pg_sz) -
+				(addr + off)),
+			len - off);
+		     off + phys_len < len;
+		     phys_len = RTE_MIN(phys_len + pg_sz, len - off)) {
 			rte_iova_t iova_tmp;
 
 			iova_tmp = rte_mem_virt2iova(addr + off + phys_len);
 
-			if (iova_tmp != iova + phys_len)
+			if (iova_tmp == RTE_BAD_IOVA ||
+					iova_tmp != iova + phys_len)
 				break;
 		}
 
@@ -575,8 +575,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			 * have
 			 */
 			mz = rte_memzone_reserve_aligned(mz_name, 0,
-					mp->socket_id, flags,
-					RTE_MAX(pg_sz, align));
+					mp->socket_id, flags, align);
 		}
 		if (mz == NULL) {
 			ret = -rte_errno;
@@ -601,7 +600,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 				(void *)(uintptr_t)mz);
 		else
 			ret = rte_mempool_populate_virt(mp, mz->addr,
-				RTE_ALIGN_FLOOR(mz->len, pg_sz), pg_sz,
+				mz->len, pg_sz,
 				rte_mempool_memchunk_mz_free,
 				(void *)(uintptr_t)mz);
 		if (ret < 0) {
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8053f7a04..0fe8aa7b8 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1042,9 +1042,8 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
  *   A pointer to the mempool structure.
  * @param addr
  *   The virtual address of memory that should be used to store objects.
- *   Must be page-aligned.
  * @param len
- *   The length of memory in bytes. Must be page-aligned.
+ *   The length of memory in bytes.
  * @param pg_sz
  *   The size of memory pages in this virtual area.
  * @param free_cb

From patchwork Tue Nov 5 15:37:01 2019
X-Patchwork-Id: 62477
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Tue, 5 Nov 2019 16:37:01 +0100
Message-Id: <20191105153707.14645-3-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH v4 2/7] mempool: reduce wasted space on mempool populate

The size returned by rte_mempool_op_calc_mem_size_default() is aligned
to the specified page size. Therefore, with big pages, the returned size
can be much more than what we really need to populate the mempool. For
instance, populating a mempool that requires 1.1GB of memory with 1GB
hugepages can result in allocating 2GB of memory.

This problem is hidden most of the time due to the allocation method of
rte_mempool_populate_default(): when try_iova_contig_mempool=true, it
first tries to allocate an iova contiguous area, without the alignment
constraint. If it fails, it falls back to an aligned allocation that
does not require to be iova-contiguous. This can also fall back to
several smaller aligned allocations.

This commit changes rte_mempool_op_calc_mem_size_default() to relax the
alignment constraint to a cache line and to return a smaller size.

Signed-off-by: Olivier Matz
Reviewed-by: Andrew Rybchenko
Acked-by: Nipun Gupta
---
 lib/librte_mempool/rte_mempool.c             |  7 ++---
 lib/librte_mempool/rte_mempool.h             |  9 +++----
 lib/librte_mempool/rte_mempool_ops.c         |  4 ++-
 lib/librte_mempool/rte_mempool_ops_default.c | 28 +++++++++++++++-----
 4 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 88e49c751..4e0d576f5 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -477,11 +477,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	 * wasting some space this way, but it's much nicer than looping around
 	 * trying to reserve each and every page size.
 	 *
-	 * However, since size calculation will produce page-aligned sizes, it
-	 * makes sense to first try and see if we can reserve the entire memzone
-	 * in one contiguous chunk as well (otherwise we might end up wasting a
-	 * 1G page on a 10MB memzone). If we fail to get enough contiguous
-	 * memory, then we'll go and reserve space page-by-page.
+	 * If we fail to get enough contiguous memory, then we'll go and
+	 * reserve space in smaller chunks.
 	 *
 	 * We also have to take into account the fact that memory that we're
 	 * going to allocate from can belong to an externally allocated memory
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0fe8aa7b8..78b687bb6 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -458,7 +458,7 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
  * @param[out] align
  *   Location for required memory chunk alignment.
  * @return
- *   Required memory size aligned at page boundary.
+ *   Required memory size.
  */
 typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift,
@@ -477,11 +477,8 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
  * that pages are grouped in subsets of physically continuous pages big
  * enough to store at least one object.
  *
- * Minimum size of memory chunk is a maximum of the page size and total
- * element size.
- *
- * Required memory chunk alignment is a maximum of page size and cache
- * line size.
+ * Minimum size of memory chunk is the total element size.
+ * Required memory chunk alignment is the cache line size.
  */
 ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift,
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index e02eb702c..22c5251eb 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -100,7 +100,9 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }
 
-/* wrapper to notify new memory area to external mempool */
+/* wrapper to calculate the memory size required to store given number
+ * of objects
+ */
 ssize_t
 rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 4e2bfc82d..f6aea7662 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -12,7 +12,7 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 				     size_t *min_chunk_size, size_t *align)
 {
 	size_t total_elt_sz;
-	size_t obj_per_page, pg_num, pg_sz;
+	size_t obj_per_page, pg_sz, objs_in_last_page;
 	size_t mem_size;
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
@@ -33,14 +33,30 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 			mem_size =
 				RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * obj_num;
 		} else {
-			pg_num = (obj_num + obj_per_page - 1) / obj_per_page;
-			mem_size = pg_num << pg_shift;
+			/* In the best case, the allocator will return a
+			 * page-aligned address. For example, with 5 objs,
+			 * the required space is as below:
+			 *  |     page0     |     page1     |  page2 (last) |
+			 *  |obj0 |obj1 |xxx|obj2 |obj3 |xxx|obj4|
+			 *  <------------- mem_size ------------->
+			 */
+			objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1;
+			/* room required for the last page */
+			mem_size = objs_in_last_page * total_elt_sz;
+			/* room required for other pages */
+			mem_size += ((obj_num - objs_in_last_page) /
+				obj_per_page) << pg_shift;
+
+			/* In the worst case, the allocator returns a
+			 * non-aligned pointer, wasting up to
+			 * total_elt_sz. Add a margin for that.
+			 */
+			mem_size += total_elt_sz - 1;
 		}
 	}
 
-	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
-
-	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
+	*min_chunk_size = total_elt_sz;
+	*align = RTE_CACHE_LINE_SIZE;
 
 	return mem_size;
 }
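To see what the new formula saves, here is a standalone sketch (plain C,
with hypothetical element and page sizes; not part of the patch) that
mirrors the computation above for the 5-object example in the comment:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
	/* hypothetical numbers: 2 objects fit per 4 KB page */
	size_t total_elt_sz = 1800;	/* header + elt + trailer */
	size_t pg_sz = 4096, obj_num = 5;
	size_t obj_per_page = pg_sz / total_elt_sz;		/* 2 */
	size_t objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1; /* 1 */

	/* room for the last page, plus the full pages before it */
	size_t mem_size = objs_in_last_page * total_elt_sz;
	mem_size += ((obj_num - objs_in_last_page) / obj_per_page) * pg_sz;
	/* margin for a non page-aligned start, as in the patch */
	mem_size += total_elt_sz - 1;

	/* old formula for comparison: always 3 full pages here */
	size_t old = ((obj_num + obj_per_page - 1) / obj_per_page) * pg_sz;

	printf("new=%zu old=%zu\n", mem_size, old); /* new=11791 old=12288 */
	return 0;
}

The gap grows with the page size: with 1 GB pages, the old formula rounds
the whole requirement up to a multiple of 1 GB, which is where the 1.1 GB
to 2 GB example in the commit message comes from.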
From patchwork Tue Nov 5 15:37:02 2019
X-Patchwork-Id: 62481
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Tue, 5 Nov 2019 16:37:02 +0100
Message-Id: <20191105153707.14645-4-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH v4 3/7] mempool: remove optimistic IOVA-contiguous allocation

The previous commit reduced the amount of required memory when
populating the mempool with non iova-contiguous memory.

Since there is no big advantage to having a fully iova-contiguous
mempool if it is not explicitly requested, remove this code; it
simplifies the populate function.

Signed-off-by: Olivier Matz
Reviewed-by: Andrew Rybchenko
Acked-by: Nipun Gupta
---
 lib/librte_mempool/rte_mempool.c | 47 ++++++--------------------
 1 file changed, 8 insertions(+), 39 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 4e0d576f5..213e574fc 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -430,7 +430,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	unsigned mz_id, n;
 	int ret;
 	bool need_iova_contig_obj;
-	bool try_iova_contig_mempool;
 	bool alloc_in_ext_mem;
 
 	ret = mempool_ops_alloc_once(mp);
@@ -483,9 +482,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	 * We also have to take into account the fact that memory that we're
 	 * going to allocate from can belong to an externally allocated memory
 	 * area, in which case the assumption of IOVA as VA mode being
-	 * synonymous with IOVA contiguousness will not hold. We should also try
-	 * to go for contiguous memory even if we're in no-huge mode, because
-	 * external memory may in fact be IOVA-contiguous.
+	 * synonymous with IOVA contiguousness will not hold.
 	 */
 
 	/* check if we can retrieve a valid socket ID */
@@ -494,7 +491,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 		return -EINVAL;
 	alloc_in_ext_mem = (ret == 1);
 	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
-	try_iova_contig_mempool = false;
 
 	if (!need_iova_contig_obj) {
 		pg_sz = 0;
@@ -503,7 +499,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 		pg_sz = 0;
 		pg_shift = 0;
 	} else if (rte_eal_has_hugepages() || alloc_in_ext_mem) {
-		try_iova_contig_mempool = true;
 		pg_sz = get_min_page_size(mp->socket_id);
 		pg_shift = rte_bsf32(pg_sz);
 	} else {
@@ -513,14 +508,9 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
 		size_t min_chunk_size;
-		unsigned int flags;
 
-		if (try_iova_contig_mempool || pg_sz == 0)
-			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
-					0, &min_chunk_size, &align);
-		else
-			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
-					pg_shift, &min_chunk_size, &align);
+		mem_size = rte_mempool_ops_calc_mem_size(
+			mp, n, pg_shift, &min_chunk_size, &align);
 
 		if (mem_size < 0) {
 			ret = mem_size;
@@ -534,36 +524,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			goto fail;
 		}
 
-		flags = mz_flags;
-
 		/* if we're trying to reserve contiguous memory, add appropriate
 		 * memzone flag.
 		 */
-		if (try_iova_contig_mempool)
-			flags |= RTE_MEMZONE_IOVA_CONTIG;
+		if (min_chunk_size == (size_t)mem_size)
+			mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
 
 		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
-				mp->socket_id, flags, align);
-
-		/* if we were trying to allocate contiguous memory, failed and
-		 * minimum required contiguous chunk fits minimum page, adjust
-		 * memzone size to the page size, and try again.
-		 */
-		if (mz == NULL && try_iova_contig_mempool &&
-				min_chunk_size <= pg_sz) {
-			try_iova_contig_mempool = false;
-			flags &= ~RTE_MEMZONE_IOVA_CONTIG;
-
-			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
-					pg_shift, &min_chunk_size, &align);
-			if (mem_size < 0) {
-				ret = mem_size;
-				goto fail;
-			}
+				mp->socket_id, mz_flags, align);
 
-			mz = rte_memzone_reserve_aligned(mz_name, mem_size,
-					mp->socket_id, flags, align);
-		}
 		/* don't try reserving with 0 size if we were asked to reserve
 		 * IOVA-contiguous memory.
 		 */
@@ -572,7 +541,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 			 * have
 			 */
 			mz = rte_memzone_reserve_aligned(mz_name, 0,
-					mp->socket_id, flags, align);
+					mp->socket_id, mz_flags, align);
 		}
 		if (mz == NULL) {
 			ret = -rte_errno;
@@ -590,7 +559,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 		else
 			iova = RTE_BAD_IOVA;
 
-		if (try_iova_contig_mempool || pg_sz == 0)
+		if (pg_sz == 0 || (mz_flags & RTE_MEMZONE_IOVA_CONTIG))
 			ret = rte_mempool_populate_iova(mp, mz->addr,
 				iova, mz->len,
 				rte_mempool_memchunk_mz_free,
 				(void *)(uintptr_t)mz);

From patchwork Tue Nov 5 15:37:03 2019
X-Patchwork-Id: 62479
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Tue, 5 Nov 2019 16:37:03 +0100
Message-Id: <20191105153707.14645-5-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH v4 4/7] mempool: introduce function to get mempool page size

In rte_mempool_populate_default(), we determine the page size, which is
needed for calc_size and allocation of memory.

Move this into a function and export it; it will be used in a later
commit.

Signed-off-by: Olivier Matz
Reviewed-by: Andrew Rybchenko
Acked-by: Nipun Gupta
---
 lib/librte_mempool/rte_mempool.c           | 51 ++++++++++++++--------
 lib/librte_mempool/rte_mempool.h           | 11 +++++
 lib/librte_mempool/rte_mempool_version.map |  4 ++
 3 files changed, 47 insertions(+), 19 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 213e574fc..758c5410b 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -414,6 +414,33 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	return ret;
 }
 
+/* Get the minimal page size used in a mempool before populating it.
+ */
+int
+rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
+{
+	bool need_iova_contig_obj;
+	bool alloc_in_ext_mem;
+	int ret;
+
+	/* check if we can retrieve a valid socket ID */
+	ret = rte_malloc_heap_socket_is_external(mp->socket_id);
+	if (ret < 0)
+		return -EINVAL;
+	alloc_in_ext_mem = (ret == 1);
+	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
+
+	if (!need_iova_contig_obj)
+		*pg_sz = 0;
+	else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA)
+		*pg_sz = 0;
+	else if (rte_eal_has_hugepages() || alloc_in_ext_mem)
+		*pg_sz = get_min_page_size(mp->socket_id);
+	else
+		*pg_sz = getpagesize();
+
+	return 0;
+}
+
 /* Default function to populate the mempool: allocate memory in memzones,
  * and populate them. Return the number of objects added, or a negative
  * value on error.
@@ -425,12 +452,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	const struct rte_memzone *mz;
 	ssize_t mem_size;
-	size_t align, pg_sz, pg_shift;
+	size_t align, pg_sz, pg_shift = 0;
 	rte_iova_t iova;
 	unsigned mz_id, n;
 	int ret;
 	bool need_iova_contig_obj;
-	bool alloc_in_ext_mem;
 
 	ret = mempool_ops_alloc_once(mp);
 	if (ret != 0)
@@ -485,26 +511,13 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	 * synonymous with IOVA contiguousness will not hold.
 	 */
 
-	/* check if we can retrieve a valid socket ID */
-	ret = rte_malloc_heap_socket_is_external(mp->socket_id);
-	if (ret < 0)
-		return -EINVAL;
-	alloc_in_ext_mem = (ret == 1);
 	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
+	ret = rte_mempool_get_page_size(mp, &pg_sz);
+	if (ret < 0)
+		return ret;
 
-	if (!need_iova_contig_obj) {
-		pg_sz = 0;
-		pg_shift = 0;
-	} else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA) {
-		pg_sz = 0;
-		pg_shift = 0;
-	} else if (rte_eal_has_hugepages() || alloc_in_ext_mem) {
-		pg_sz = get_min_page_size(mp->socket_id);
-		pg_shift = rte_bsf32(pg_sz);
-	} else {
-		pg_sz = getpagesize();
+	if (pg_sz != 0)
 		pg_shift = rte_bsf32(pg_sz);
-	}
 
 	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
 		size_t min_chunk_size;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 78b687bb6..9e41f595e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1688,6 +1688,17 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 void rte_mempool_walk(void (*func)(struct rte_mempool *, void *arg),
 		      void *arg);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * @internal Get page size used for mempool object allocation.
+ * This function is internal to mempool library and mempool drivers.
+ */
+__rte_experimental
+int
+rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca460..aa75cf086 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -56,5 +56,9 @@ DPDK_18.05 {
 EXPERIMENTAL {
 	global:
 
+	# added in 18.05
 	rte_mempool_ops_get_info;
+
+	# added in 19.11
+	rte_mempool_get_page_size;
 };
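A hedged usage sketch (hypothetical driver code, not part of the patch):
a mempool driver callback can query the page size it must respect before
populating, and a returned size of 0 means page boundaries can be ignored:

#include <stdio.h>
#include <rte_mempool.h>

static int
example_driver_alloc(struct rte_mempool *mp)
{
	size_t pg_sz;
	int ret;

	ret = rte_mempool_get_page_size(mp, &pg_sz);
	if (ret < 0)
		return ret;

	/* pg_sz == 0: no IOVA-contiguity constraint applies; otherwise
	 * objects must not be placed across pg_sz-sized pages
	 */
	if (pg_sz != 0)
		printf("must respect %zu-byte page boundaries\n", pg_sz);
	return 0;
}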
From patchwork Tue Nov 5 15:37:04 2019
X-Patchwork-Id: 62480
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Tue, 5 Nov 2019 16:37:04 +0100
Message-Id: <20191105153707.14645-6-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH v4 5/7] mempool: introduce helpers for populate and calc mem size

Introduce new functions that can be used by mempool drivers to
calculate the required memory size and to populate the mempool.

For now, these helpers just replace the *_default() functions without
change. They will be enhanced in the next commit.
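As a sketch of how a driver is expected to plug these helpers in (the
octeontx and otx2 hunks below do exactly this; "mydrv" is a hypothetical
driver name, and the extra object is the trick octeontx uses to satisfy
its alignment requirement):

#include <rte_mempool.h>

/* hypothetical driver ops delegating to the new helpers */
static ssize_t
mydrv_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
		uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
{
	/* reserve room for one extra object, as octeontx does */
	return rte_mempool_op_calc_mem_size_helper(mp, obj_num + 1, pg_shift,
						   min_chunk_size, align);
}

static int
mydrv_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
		rte_iova_t iova, size_t len,
		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
{
	return rte_mempool_op_populate_helper(mp, max_objs, vaddr, iova, len,
					      obj_cb, obj_cb_arg);
}

Note that this reflects the helper signatures as of this patch; the next
commit adds a flags argument to the populate helper and a chunk_reserve
argument to the size helper.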
Signed-off-by: Olivier Matz
Acked-by: Nipun Gupta
Reviewed-by: Andrew Rybchenko
---
 drivers/mempool/bucket/Makefile                 |  1 +
 drivers/mempool/bucket/meson.build              |  2 +
 drivers/mempool/bucket/rte_mempool_bucket.c     |  2 +-
 drivers/mempool/dpaa/dpaa_mempool.c             |  2 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c        |  2 +-
 drivers/mempool/octeontx/Makefile               |  2 +
 drivers/mempool/octeontx/meson.build            |  2 +
 .../mempool/octeontx/rte_mempool_octeontx.c     |  4 +-
 drivers/mempool/octeontx2/Makefile              |  2 +
 drivers/mempool/octeontx2/meson.build           |  2 +
 drivers/mempool/octeontx2/otx2_mempool_ops.c    |  4 +-
 lib/librte_mempool/rte_mempool.h                | 71 ++++++++++++++++++-
 lib/librte_mempool/rte_mempool_ops_default.c    | 31 ++++++--
 lib/librte_mempool/rte_mempool_version.map      |  2 +
 14 files changed, 113 insertions(+), 16 deletions(-)

diff --git a/drivers/mempool/bucket/Makefile b/drivers/mempool/bucket/Makefile
index 7364916bc..d6660b09c 100644
--- a/drivers/mempool/bucket/Makefile
+++ b/drivers/mempool/bucket/Makefile
@@ -15,6 +15,7 @@ LIB = librte_mempool_bucket.a
 
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 
 LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
 
diff --git a/drivers/mempool/bucket/meson.build b/drivers/mempool/bucket/meson.build
index 618d79128..2fd77f9d4 100644
--- a/drivers/mempool/bucket/meson.build
+++ b/drivers/mempool/bucket/meson.build
@@ -6,4 +6,6 @@
 # This software was jointly developed between OKTET Labs (under contract
 # for Solarflare) and Solarflare Communications, Inc.
 
+allow_experimental_apis = true
+
 sources = files('rte_mempool_bucket.c')
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 78d2b9d04..dfeaf4e45 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -585,7 +585,7 @@ bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
 		hdr->fill_cnt = 0;
 		hdr->lcore_id = LCORE_ID_ANY;
-		rc = rte_mempool_op_populate_default(mp,
+		rc = rte_mempool_op_populate_helper(mp,
 						RTE_MIN(bd->obj_per_bucket,
 							max_objs - n_objs),
 						iter + bucket_header_sz,
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index a25697f05..27736e6c2 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -341,7 +341,7 @@ dpaa_populate(struct rte_mempool *mp, unsigned int max_objs,
 	 */
 	TAILQ_INSERT_HEAD(&rte_dpaa_memsegs, ms, next);
 
-	return rte_mempool_op_populate_default(mp, max_objs, vaddr, paddr, len,
+	return rte_mempool_op_populate_helper(mp, max_objs, vaddr, paddr, len,
 					       obj_cb, obj_cb_arg);
 }
 
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index f26c30b00..8f8dbeada 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -421,7 +421,7 @@ dpaa2_populate(struct rte_mempool *mp, unsigned int max_objs,
 	/* Insert entry into the PA->VA Table */
 	dpaax_iova_table_update(paddr, vaddr, len);
 
-	return rte_mempool_op_populate_default(mp, max_objs, vaddr, paddr, len,
+	return rte_mempool_op_populate_helper(mp, max_objs, vaddr, paddr, len,
 					       obj_cb, obj_cb_arg);
 }
 
diff --git a/drivers/mempool/octeontx/Makefile b/drivers/mempool/octeontx/Makefile
index a3e1dce88..25efc5cf2 100644
--- a/drivers/mempool/octeontx/Makefile
+++ b/drivers/mempool/octeontx/Makefile
@@ -11,6 +11,8 @@ LIB = librte_mempool_octeontx.a
 CFLAGS += $(WERROR_FLAGS)
 CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx/
 
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+
 EXPORT_MAP := rte_mempool_octeontx_version.map
 
 LIBABIVER := 1
diff --git a/drivers/mempool/octeontx/meson.build b/drivers/mempool/octeontx/meson.build
index 3baaf7db2..195435950 100644
--- a/drivers/mempool/octeontx/meson.build
+++ b/drivers/mempool/octeontx/meson.build
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Cavium, Inc
 
+allow_experimental_apis = true
+
 sources = files('octeontx_fpavf.c',
 		'rte_mempool_octeontx.c'
 )
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index ab94dfe91..fff33e5c6 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -137,7 +137,7 @@ octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
 	 * Simply need space for one more object to be able to
 	 * fulfil alignment requirements.
 	 */
-	mem_size = rte_mempool_op_calc_mem_size_default(mp, obj_num + 1,
+	mem_size = rte_mempool_op_calc_mem_size_helper(mp, obj_num + 1,
 						pg_shift,
 						min_chunk_size, align);
 	if (mem_size >= 0) {
@@ -184,7 +184,7 @@ octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 	if (ret < 0)
 		return ret;
 
-	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
+	return rte_mempool_op_populate_helper(mp, max_objs, vaddr, iova, len,
 					obj_cb, obj_cb_arg);
 }
 
diff --git a/drivers/mempool/octeontx2/Makefile b/drivers/mempool/octeontx2/Makefile
index 87cce22c6..4961a4a35 100644
--- a/drivers/mempool/octeontx2/Makefile
+++ b/drivers/mempool/octeontx2/Makefile
@@ -23,6 +23,8 @@ CFLAGS += -diag-disable 2259
 endif
 endif
 
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+
 EXPORT_MAP := rte_mempool_octeontx2_version.map
 
 LIBABIVER := 1
diff --git a/drivers/mempool/octeontx2/meson.build b/drivers/mempool/octeontx2/meson.build
index 9fde40f0e..8255377df 100644
--- a/drivers/mempool/octeontx2/meson.build
+++ b/drivers/mempool/octeontx2/meson.build
@@ -2,6 +2,8 @@
 # Copyright(C) 2019 Marvell International Ltd.
 #
 
+allow_experimental_apis = true
+
 sources = files('otx2_mempool_ops.c',
 		'otx2_mempool.c',
 		'otx2_mempool_irq.c',
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index d769575f4..3aea92a01 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -717,7 +717,7 @@ otx2_npa_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
 	 * Simply need space for one more object to be able to
 	 * fulfill alignment requirements.
 	 */
-	return rte_mempool_op_calc_mem_size_default(mp, obj_num + 1, pg_shift,
+	return rte_mempool_op_calc_mem_size_helper(mp, obj_num + 1, pg_shift,
 						    min_chunk_size, align);
 }
 
@@ -749,7 +749,7 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs,
 	if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
 		return -EBUSY;
 
-	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
+	return rte_mempool_op_populate_helper(mp, max_objs, vaddr, iova, len,
 					       obj_cb, obj_cb_arg);
 }
 
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 9e41f595e..ad7cc6ad2 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -465,8 +465,13 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
 		size_t *min_chunk_size, size_t *align);
 
 /**
- * Default way to calculate memory size required to store given number of
- * objects.
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * @internal Helper to calculate memory size required to store given
+ * number of objects.
+ *
+ * This function is internal to mempool library and mempool drivers.
  *
  * If page boundaries may be ignored, it is just a product of total
  * object size including header and trailer and number of objects.
@@ -479,6 +484,32 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
  *
  * Minimum size of memory chunk is the total element size.
  * Required memory chunk alignment is the cache line size.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] obj_num
+ *   Number of objects to be added in mempool.
+ * @param[in] pg_shift
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param[out] min_chunk_size
+ *   Location for minimum size of the memory chunk which may be used to
+ *   store memory pool objects.
+ * @param[out] align
+ *   Location for required memory chunk alignment.
+ * @return
+ *   Required memory size.
+ */
+__rte_experimental
+ssize_t rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
+		uint32_t obj_num, uint32_t pg_shift,
+		size_t *min_chunk_size, size_t *align);
+
+/**
+ * Default way to calculate memory size required to store given number of
+ * objects.
+ *
+ * Equivalent to rte_mempool_op_calc_mem_size_helper(mp, obj_num, pg_shift,
+ * min_chunk_size, align).
  */
 ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift,
@@ -533,8 +564,42 @@ typedef int (*rte_mempool_populate_t)(struct rte_mempool *mp,
 		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
 
 /**
- * Default way to populate memory pool object using provided memory
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * @internal Helper to populate memory pool object using provided memory
  * chunk: just slice objects one by one.
+ *
+ * This function is internal to mempool library and mempool drivers.
+ *
+ * @param[in] mp
+ *   A pointer to the mempool structure.
+ * @param[in] max_objs
+ *   Maximum number of objects to be added in mempool.
+ * @param[in] vaddr
+ *   The virtual address of memory that should be used to store objects.
+ * @param[in] iova
+ *   The IO address corresponding to vaddr, or RTE_BAD_IOVA.
+ * @param[in] len
+ *   The length of memory in bytes.
+ * @param[in] obj_cb
+ *   Callback function to be executed for each populated object.
+ * @param[in] obj_cb_arg
+ *   An opaque pointer passed to the callback function.
+ * @return
+ *   The number of objects added in mempool.
+ */
+__rte_experimental
+int rte_mempool_op_populate_helper(struct rte_mempool *mp,
+		unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
+
+/**
+ * Default way to populate memory pool object using provided memory chunk.
+ *
+ * Equivalent to rte_mempool_op_populate_helper(mp, max_objs, vaddr, iova,
+ * len, obj_cb, obj_cb_arg).
 */
int rte_mempool_op_populate_default(struct rte_mempool *mp,
		unsigned int max_objs,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index f6aea7662..0bfc63497 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -7,9 +7,9 @@
 #include <rte_mempool.h>
 
 ssize_t
-rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
-				uint32_t obj_num, uint32_t pg_shift,
-				size_t *min_chunk_size, size_t *align)
+rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
+				uint32_t obj_num, uint32_t pg_shift,
+				size_t *min_chunk_size, size_t *align)
 {
 	size_t total_elt_sz;
 	size_t obj_per_page, pg_sz, objs_in_last_page;
@@ -61,10 +61,19 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 	return mem_size;
 }
 
+ssize_t
+rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
+				uint32_t obj_num, uint32_t pg_shift,
+				size_t *min_chunk_size, size_t *align)
+{
+	return rte_mempool_op_calc_mem_size_helper(mp, obj_num, pg_shift,
+			min_chunk_size, align);
+}
+
 int
-rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
-		void *vaddr, rte_iova_t iova, size_t len,
-		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+rte_mempool_op_populate_helper(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
 	size_t total_elt_sz;
 	size_t off;
@@ -84,3 +93,13 @@ rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
 
 	return i;
 }
+
+int
+rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
+		void *vaddr, rte_iova_t iova, size_t len,
+		rte_mempool_populate_obj_cb_t *obj_cb,
+		void *obj_cb_arg)
+{
+	return rte_mempool_op_populate_helper(mp, max_objs, vaddr, iova,
+			len, obj_cb, obj_cb_arg);
+}
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index aa75cf086..ce9e791e7 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -61,4 +61,6 @@ EXPERIMENTAL {
 
 	# added in 19.11
 	rte_mempool_get_page_size;
+	rte_mempool_op_calc_mem_size_helper;
+	rte_mempool_op_populate_helper;
 };

From patchwork Tue Nov 5 15:37:05 2019
X-Patchwork-Id: 62483
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Tue, 5 Nov 2019 16:37:05 +0100
Message-Id: <20191105153707.14645-7-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH v4 6/7] mempool: prevent objects from being across pages

When populating a mempool, ensure that objects are not located across
several pages, except if the user did not request iova-contiguous
objects.
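The core of this change is the page-boundary test added to the populate
helper below. As a standalone illustration of the same logic as the
patch's check_obj_bounds() (hypothetical sizes; PTR_ALIGN_FLOOR here is a
local stand-in for DPDK's RTE_PTR_ALIGN and assumes a power-of-two page
size):

#include <stdio.h>
#include <stdint.h>

#define PTR_ALIGN_FLOOR(p, a) ((uintptr_t)(p) & ~((uintptr_t)(a) - 1))

/* Returns -1 if [obj, obj + elt_sz) crosses a pg_sz boundary, else 0.
 * Objects larger than a page necessarily cross one and are allowed to.
 */
static int
crosses_page(uintptr_t obj, size_t pg_sz, size_t elt_sz)
{
	if (pg_sz == 0 || elt_sz > pg_sz)
		return 0;
	if (PTR_ALIGN_FLOOR(obj, pg_sz) !=
			PTR_ALIGN_FLOOR(obj + elt_sz - 1, pg_sz))
		return -1;
	return 0;
}

int main(void)
{
	/* 1800-byte element starting 3000 bytes into a 4 KB page */
	printf("%d\n", crosses_page(0x10000 + 3000, 4096, 1800)); /* -1 */
	printf("%d\n", crosses_page(0x10000 + 1800, 4096, 1800)); /*  0 */
	return 0;
}

When the check fails, the populate helper below simply advances the
offset to the next page boundary and places the object there.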
Signed-off-by: Vamsi Krishna Attunuru
Signed-off-by: Olivier Matz
Acked-by: Nipun Gupta
Acked-by: Andrew Rybchenko
---
 drivers/mempool/bucket/rte_mempool_bucket.c     | 10 ++-
 drivers/mempool/dpaa/dpaa_mempool.c             |  4 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c        |  4 +-
 .../mempool/octeontx/rte_mempool_octeontx.c     | 21 +++---
 drivers/mempool/octeontx2/otx2_mempool_ops.c    | 21 +++---
 lib/librte_mempool/rte_mempool.c                | 23 ++-----
 lib/librte_mempool/rte_mempool.h                | 27 ++++++--
 lib/librte_mempool/rte_mempool_ops_default.c    | 66 +++++++++++++++----
 8 files changed, 119 insertions(+), 57 deletions(-)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index dfeaf4e45..5ce1ef16f 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -401,6 +401,11 @@ bucket_alloc(struct rte_mempool *mp)
 	struct bucket_data *bd;
 	unsigned int i;
 	unsigned int bucket_header_size;
+	size_t pg_sz;
+
+	rc = rte_mempool_get_page_size(mp, &pg_sz);
+	if (rc < 0)
+		return rc;
 
 	bd = rte_zmalloc_socket("bucket_pool", sizeof(*bd),
 				RTE_CACHE_LINE_SIZE, mp->socket_id);
@@ -416,7 +421,8 @@ bucket_alloc(struct rte_mempool *mp)
 	RTE_BUILD_BUG_ON(sizeof(struct bucket_header) > RTE_CACHE_LINE_SIZE);
 	bd->header_size = mp->header_size + bucket_header_size;
 	bd->total_elt_size = mp->header_size + mp->elt_size + mp->trailer_size;
-	bd->bucket_mem_size = RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB * 1024;
+	bd->bucket_mem_size = RTE_MIN(pg_sz,
+			(size_t)(RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB * 1024));
 	bd->obj_per_bucket = (bd->bucket_mem_size -
 			      bucket_header_size) / bd->total_elt_size;
 	bd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);
@@ -585,7 +591,7 @@ bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
 		hdr->fill_cnt = 0;
 		hdr->lcore_id = LCORE_ID_ANY;
-		rc = rte_mempool_op_populate_helper(mp,
+		rc = rte_mempool_op_populate_helper(mp, 0,
 						RTE_MIN(bd->obj_per_bucket,
 							max_objs - n_objs),
 						iter + bucket_header_sz,
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 27736e6c2..3a2528331 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -341,8 +341,8 @@ dpaa_populate(struct rte_mempool *mp, unsigned int max_objs,
 	 */
 	TAILQ_INSERT_HEAD(&rte_dpaa_memsegs, ms, next);
 
-	return rte_mempool_op_populate_helper(mp, max_objs, vaddr, paddr, len,
-					      obj_cb, obj_cb_arg);
+	return rte_mempool_op_populate_helper(mp, 0, max_objs, vaddr, paddr,
+					      len, obj_cb, obj_cb_arg);
 }
 
 static const struct rte_mempool_ops dpaa_mpool_ops = {
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 8f8dbeada..36c93decf 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -421,8 +421,8 @@ dpaa2_populate(struct rte_mempool *mp, unsigned int max_objs,
 	/* Insert entry into the PA->VA Table */
 	dpaax_iova_table_update(paddr, vaddr, len);
 
-	return rte_mempool_op_populate_helper(mp, max_objs, vaddr, paddr, len,
-					      obj_cb, obj_cb_arg);
+	return rte_mempool_op_populate_helper(mp, 0, max_objs, vaddr, paddr,
+					      len, obj_cb, obj_cb_arg);
 }
 
 static const struct rte_mempool_ops dpaa2_mpool_ops = {
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index fff33e5c6..bd0070020 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -132,14 +132,15 @@ octeontx_fpavf_calc_mem_size(const struct rte_mempool *mp,
 					size_t *min_chunk_size, size_t *align)
 {
 	ssize_t mem_size;
+	size_t total_elt_sz;
 
-	/*
-	 * Simply need space for one more object to be able to
-	 * fulfil alignment requirements.
+	/* Need space for one more obj on each chunk to fulfill
+	 * alignment requirements.
 	 */
-	mem_size = rte_mempool_op_calc_mem_size_helper(mp, obj_num + 1,
-						pg_shift,
-						min_chunk_size, align);
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+	mem_size = rte_mempool_op_calc_mem_size_helper(mp, obj_num, pg_shift,
+						total_elt_sz, min_chunk_size,
+						align);
 	if (mem_size >= 0) {
 		/*
 		 * Memory area which contains objects must be physically
@@ -168,7 +169,7 @@ octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	/* align object start address to a multiple of total_elt_sz */
-	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+	off = total_elt_sz - ((((uintptr_t)vaddr - 1) % total_elt_sz) + 1);
 
 	if (len < off)
 		return -EINVAL;
@@ -184,8 +185,10 @@ octeontx_fpavf_populate(struct rte_mempool *mp, unsigned int max_objs,
 	if (ret < 0)
 		return ret;
 
-	return rte_mempool_op_populate_helper(mp, max_objs, vaddr, iova, len,
-					obj_cb, obj_cb_arg);
+	return rte_mempool_op_populate_helper(mp,
+					RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ,
+					max_objs, vaddr, iova, len,
+					obj_cb, obj_cb_arg);
 }
 
 static struct rte_mempool_ops octeontx_fpavf_ops = {
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index 3aea92a01..ea4b1c45d 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -713,12 +713,15 @@ static ssize_t
 otx2_npa_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
 		       uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
 {
-	/*
-	 * Simply need space for one more object to be able to
-	 * fulfill alignment requirements.
+	size_t total_elt_sz;
+
+	/* Need space for one more obj on each chunk to fulfill
+	 * alignment requirements.
 	 */
-	return rte_mempool_op_calc_mem_size_helper(mp, obj_num + 1, pg_shift,
-						   min_chunk_size, align);
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+	return rte_mempool_op_calc_mem_size_helper(mp, obj_num, pg_shift,
+						   total_elt_sz, min_chunk_size,
+						   align);
 }
 
 static int
@@ -735,7 +738,7 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	/* Align object start address to a multiple of total_elt_sz */
-	off = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
+	off = total_elt_sz - ((((uintptr_t)vaddr - 1) % total_elt_sz) + 1);
 
 	if (len < off)
 		return -EINVAL;
@@ -749,8 +752,10 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
 	if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
 		return -EBUSY;
 
-	return rte_mempool_op_populate_helper(mp, max_objs, vaddr, iova, len,
-					      obj_cb, obj_cb_arg);
+	return rte_mempool_op_populate_helper(mp,
+					      RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ,
+					      max_objs, vaddr, iova, len,
+					      obj_cb, obj_cb_arg);
 }
 
 static struct rte_mempool_ops otx2_npa_ops = {
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 758c5410b..d3db9273d 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -431,8 +431,6 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
 
 	if (!need_iova_contig_obj)
 		*pg_sz = 0;
-	else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA)
-		*pg_sz = 0;
 	else if (rte_eal_has_hugepages() || alloc_in_ext_mem)
 		*pg_sz = get_min_page_size(mp->socket_id);
 	else
@@ -481,17 +479,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	 * then just set page shift and page size to 0, because the user has
 	 * indicated that there's no need to care about anything.
 	 *
-	 * if we do need contiguous objects, there is also an option to reserve
-	 * the entire mempool memory as one contiguous block of memory, in
-	 * which case the page shift and alignment wouldn't matter as well.
+	 * if we do need contiguous objects (if a mempool driver has its
+	 * own calc_size() method returning min_chunk_size = mem_size),
+	 * there is also an option to reserve the entire mempool memory
+	 * as one contiguous block of memory.
 	 *
 	 * if we require contiguous objects, but not necessarily the entire
-	 * mempool reserved space to be contiguous, then there are two options.
-	 *
-	 * if our IO addresses are virtual, not actual physical (IOVA as VA
-	 * case), then no page shift needed - our memory allocation will give us
-	 * contiguous IO memory as far as the hardware is concerned, so
-	 * act as if we're getting contiguous memory.
+	 * mempool reserved space to be contiguous, pg_sz will be != 0,
+	 * and the default ops->populate() will take care of not placing
+	 * objects across pages.
 	 *
 	 * if our IO addresses are physical, we may get memory from bigger
 	 * pages, or we might get memory from smaller pages, and how much of it
@@ -504,11 +500,6 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	 *
 	 * If we fail to get enough contiguous memory, then we'll go and
 	 * reserve space in smaller chunks.
-	 *
-	 * We also have to take into account the fact that memory that we're
-	 * going to allocate from can belong to an externally allocated memory
-	 * area, in which case the assumption of IOVA as VA mode being
-	 * synonymous with IOVA contiguousness will not hold.
 	 */
 
 	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index ad7cc6ad2..225bf9fc9 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -491,6 +491,9 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
  *   Number of objects to be added in mempool.
  * @param[in] pg_shift
  *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
+ * @param[in] chunk_reserve
+ *   Amount of memory that must be reserved at the beginning of each page,
+ *   or at the beginning of the memory area if pg_shift is 0.
  * @param[out] min_chunk_size
  *   Location for minimum size of the memory chunk which may be used to
  *   store memory pool objects.
@@ -501,7 +504,7 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
  */
 __rte_experimental
 ssize_t rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
-		uint32_t obj_num, uint32_t pg_shift,
+		uint32_t obj_num, uint32_t pg_shift, size_t chunk_reserve,
 		size_t *min_chunk_size, size_t *align);
 
 /**
@@ -509,7 +512,7 @@ ssize_t rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
  * objects.
  *
  * Equivalent to rte_mempool_op_calc_mem_size_helper(mp, obj_num, pg_shift,
- * min_chunk_size, align).
+ * 0, min_chunk_size, align).
  */
 ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift,
@@ -563,17 +566,31 @@ typedef int (*rte_mempool_populate_t)(struct rte_mempool *mp,
 		void *vaddr, rte_iova_t iova, size_t len,
 		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
 
+/**
+ * Align objects on addresses multiple of total_elt_sz.
+ */
+#define RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ 0x0001
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
  *
  * @internal Helper to populate memory pool object using provided memory
- * chunk: just slice objects one by one.
+ * chunk: just slice objects one by one, taking care of not
+ * crossing page boundaries.
+ *
+ * If RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ is set in flags, the addresses
+ * of object headers will be aligned on a multiple of total_elt_sz.
+ * This feature is used by octeontx hardware.
  *
  * This function is internal to mempool library and mempool drivers.
  *
  * @param[in] mp
  *   A pointer to the mempool structure.
+ * @param[in] flags
+ *   Logical OR of following flags:
+ *   - RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ: align objects on addresses
+ *     multiple of total_elt_sz.
  * @param[in] max_objs
  *   Maximum number of objects to be added in mempool.
  * @param[in] vaddr
@@ -591,14 +608,14 @@ typedef int (*rte_mempool_populate_t)(struct rte_mempool *mp,
  */
 __rte_experimental
 int rte_mempool_op_populate_helper(struct rte_mempool *mp,
-		unsigned int max_objs,
+		unsigned int flags, unsigned int max_objs,
 		void *vaddr, rte_iova_t iova, size_t len,
 		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg);
 
 /**
  * Default way to populate memory pool object using provided memory chunk.
  *
- * Equivalent to rte_mempool_op_populate_helper(mp, max_objs, vaddr, iova,
+ * Equivalent to rte_mempool_op_populate_helper(mp, 0, max_objs, vaddr, iova,
  * len, obj_cb, obj_cb_arg).
  */
 int rte_mempool_op_populate_default(struct rte_mempool *mp,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 0bfc63497..e6be7152b 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -9,6 +9,7 @@
 ssize_t
 rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
 				uint32_t obj_num, uint32_t pg_shift,
+				size_t chunk_reserve,
 				size_t *min_chunk_size, size_t *align)
 {
 	size_t total_elt_sz;
@@ -19,10 +20,12 @@ rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
 	if (total_elt_sz == 0) {
 		mem_size = 0;
 	} else if (pg_shift == 0) {
-		mem_size = total_elt_sz * obj_num;
+		mem_size = total_elt_sz * obj_num + chunk_reserve;
 	} else {
 		pg_sz = (size_t)1 << pg_shift;
-		obj_per_page = pg_sz / total_elt_sz;
+		if (chunk_reserve >= pg_sz)
+			return -EINVAL;
+		obj_per_page = (pg_sz - chunk_reserve) / total_elt_sz;
 		if (obj_per_page == 0) {
 			/*
 			 * Note that if object size is bigger than page size,
@@ -30,8 +33,8 @@ rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
 			 * of physically continuous pages big enough to store
 			 * at least one object.
 			 */
-			mem_size =
-				RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * obj_num;
+			mem_size = RTE_ALIGN_CEIL(total_elt_sz + chunk_reserve,
+					pg_sz) * obj_num;
 		} else {
 			/* In the best case, the allocator will return a
 			 * page-aligned address. For example, with 5 objs,
@@ -42,7 +45,8 @@ rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
 			 */
 			objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1;
 			/* room required for the last page */
-			mem_size = objs_in_last_page * total_elt_sz;
+			mem_size = objs_in_last_page * total_elt_sz +
+				chunk_reserve;
 			/* room required for other pages */
 			mem_size += ((obj_num - objs_in_last_page) /
 				obj_per_page) << pg_shift;
@@ -67,24 +71,60 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		size_t *min_chunk_size, size_t *align)
 {
 	return rte_mempool_op_calc_mem_size_helper(mp, obj_num, pg_shift,
-			min_chunk_size, align);
+			0, min_chunk_size, align);
+}
+
+/* Returns -1 if object crosses a page boundary, else returns 0 */
+static int
+check_obj_bounds(char *obj, size_t pg_sz, size_t elt_sz)
+{
+	if (pg_sz == 0)
+		return 0;
+	if (elt_sz > pg_sz)
+		return 0;
+	if (RTE_PTR_ALIGN(obj, pg_sz) != RTE_PTR_ALIGN(obj + elt_sz - 1, pg_sz))
+		return -1;
+	return 0;
 }
 
 int
-rte_mempool_op_populate_helper(struct rte_mempool *mp, unsigned int max_objs,
-		void *vaddr, rte_iova_t iova, size_t len,
-		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+rte_mempool_op_populate_helper(struct rte_mempool *mp, unsigned int flags,
+		unsigned int max_objs, void *vaddr, rte_iova_t iova,
+		size_t len, rte_mempool_populate_obj_cb_t *obj_cb,
+		void *obj_cb_arg)
 {
-	size_t total_elt_sz;
+	char *va = vaddr;
+	size_t total_elt_sz, pg_sz;
 	size_t off;
 	unsigned int i;
 	void *obj;
+	int ret;
+
+	ret = rte_mempool_get_page_size(mp, &pg_sz);
+	if (ret < 0)
+		return ret;
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
-	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
+	if (flags & RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ)
+		off = total_elt_sz -
+			(((uintptr_t)(va - 1) % total_elt_sz) + 1);
+	else
+		off = 0;
+
+	for (i = 0; i < max_objs; i++) {
+		/* avoid objects to cross page boundaries */
+		if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0) {
+			off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
+			if (flags & RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ)
+				off += total_elt_sz -
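One subtlety worth noting in the hunks above: the alignment offset
changed from total_elt_sz - ((uintptr_t)vaddr % total_elt_sz) to
total_elt_sz - ((((uintptr_t)vaddr - 1) % total_elt_sz) + 1). The two
differ only when vaddr is already a multiple of total_elt_sz: the old
form then wastes a full element, while the new form yields 0. A tiny
standalone check (hypothetical values, not part of the patch):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uintptr_t vaddr = 7200;		/* already a multiple of 1800 */
	size_t total_elt_sz = 1800;

	size_t off_old = total_elt_sz - ((uintptr_t)vaddr % total_elt_sz);
	size_t off_new = total_elt_sz -
		((((uintptr_t)vaddr - 1) % total_elt_sz) + 1);

	printf("old=%zu new=%zu\n", off_old, off_new);	/* old=1800 new=0 */
	return 0;
}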
					(((uintptr_t)(va + off - 1) %
						total_elt_sz) + 1);
		}
 
		if (off + total_elt_sz > len)
			break;
 
		off += mp->header_size;
-		obj = (char *)vaddr + off;
+		obj = va + off;
		obj_cb(mp, obj_cb_arg, obj,
		       (iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off));
		rte_mempool_ops_enqueue_bulk(mp, &obj, 1);
@@ -100,6 +140,6 @@ rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
		rte_mempool_populate_obj_cb_t *obj_cb,
		void *obj_cb_arg)
 {
-	return rte_mempool_op_populate_helper(mp, max_objs, vaddr, iova,
+	return rte_mempool_op_populate_helper(mp, 0, max_objs, vaddr, iova,
			len, obj_cb, obj_cb_arg);
 }
From patchwork Tue Nov 5 15:37:06 2019
X-Patchwork-Id: 62482
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Tue, 5 Nov 2019 16:37:06 +0100
Message-Id: <20191105153707.14645-8-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH v4 7/7] mempool: use the specific macro for object alignment

For consistency, RTE_MEMPOOL_ALIGN should be used in place of
RTE_CACHE_LINE_SIZE. They have the same value, because the only
architecture that was defining a specific value for it has been removed
from DPDK.

Signed-off-by: Olivier Matz
Reviewed-by: Andrew Rybchenko
Acked-by: Nipun Gupta
---
 lib/librte_mempool/rte_mempool.c             | 2 +-
 lib/librte_mempool/rte_mempool.h             | 3 +++
 lib/librte_mempool/rte_mempool_ops_default.c | 2 +-
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d3db9273d..40cae3eb6 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -329,7 +329,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
-		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
+		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
 
 	if (off > len) {
 		ret = -EINVAL;
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 225bf9fc9..f81152af9 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -116,6 +116,9 @@ struct rte_mempool_objsz {
 #define MEMPOOL_PG_NUM_DEFAULT	1
 
 #ifndef RTE_MEMPOOL_ALIGN
+/**
+ * Alignment of elements inside mempool.
+ */
 #define RTE_MEMPOOL_ALIGN	RTE_CACHE_LINE_SIZE
 #endif
 
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index e6be7152b..22fccf9d7 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -60,7 +60,7 @@ rte_mempool_op_calc_mem_size_helper(const struct rte_mempool *mp,
 	}
 
 	*min_chunk_size = total_elt_sz;
-	*align = RTE_CACHE_LINE_SIZE;
+	*align = RTE_MEMPOOL_ALIGN;
 
 	return mem_size;
 }