Message ID: 20191105153707.14645-1-olivier.matz@6wind.com (mailing list archive)

Headers:

    From: Olivier Matz <olivier.matz@6wind.com>
    To: dev@dpdk.org
    Cc: Anatoly Burakov <anatoly.burakov@intel.com>,
        Andrew Rybchenko <arybchenko@solarflare.com>,
        Ferruh Yigit <ferruh.yigit@linux.intel.com>,
        "Giridharan, Ganesan" <ggiridharan@rbbn.com>,
        Jerin Jacob Kollanukkaran <jerinj@marvell.com>,
        Kiran Kumar Kokkilagadda <kirankumark@marvell.com>,
        Stephen Hemminger <sthemmin@microsoft.com>,
        Thomas Monjalon <thomas@monjalon.net>,
        Vamsi Krishna Attunuru <vattunuru@marvell.com>,
        Hemant Agrawal <hemant.agrawal@nxp.com>,
        Nipun Gupta <nipun.gupta@nxp.com>,
        David Marchand <david.marchand@redhat.com>
    Date: Tue, 5 Nov 2019 16:36:59 +0100
    Message-Id: <20191105153707.14645-1-olivier.matz@6wind.com>
    In-Reply-To: <20190719133845.32432-1-olivier.matz@6wind.com>
    References: <20190719133845.32432-1-olivier.matz@6wind.com>
    Subject: [dpdk-dev] [PATCH v4 0/7] mempool: avoid objects allocations across pages
    List-Id: DPDK patches and discussions <dev.dpdk.org>
    List-Archive: <http://mails.dpdk.org/archives/dev/>
Series: mempool: avoid objects allocations across pages
Message
Olivier Matz
Nov. 5, 2019, 3:36 p.m. UTC
KNI supposes that mbufs are contiguous in kernel virtual memory. This
may not be true when using the IOVA=VA mode. To fix this, a possibility
is to ensure that objects do not cross page boundaries in the mempool.
This patchset implements this in the last patch (5/5).

The previous patches prepare the job:
- allow populating with an unaligned virtual area (1/5).
- reduce the space wasted in the mempool size calculation when not using
  the iova-contiguous allocation (2/5).
- remove the iova-contiguous allocation when populating the mempool
  (3/5): a va-contiguous allocation does the job as well if we want to
  populate without crossing page boundaries, so simplify the mempool
  populate function.
- export a function to get the minimum page size used in a mempool (4/5).

Memory consumption impact when using hugepages:
- worst case: ~ +0.1% for a mbuf pool (objsize ~= 2368)
- best case: -50% if the pool size is just above the page size

With 4K pages in IOVA=VA mode, however, a mbuf pool could consume up to
75% more memory, because there will be only 1 mbuf per page. Not sure
how common this use case is.

Caveat: this changes the behavior of the mempool (calc_mem_size and
populate), and there is a small risk of breaking things, especially with
alternate mempool drivers.

v4

* remove useless comments in Makefiles and meson.build (sugg by David)
* add EXPERIMENTAL banner on new functions in API comments (David)
* sort by version in rte_mempool_version.map (David)
* remove duplicated -DALLOW_EXPERIMENTAL_API flag in the octeontx2
  mempool driver
* enhance API comments for new helpers

v3

* introduce new helpers to calculate the required memory size and to
  populate the mempool, and use them in drivers: the alignment
  constraint of octeontx/octeontx2 is managed in this common code.
* fix the octeontx mempool driver by taking the alignment constraint
  into account, like in octeontx2
* fix the bucket mempool driver with 4K pages: limit the bucket size in
  this case to ensure that objects do not cross page boundaries. With
  larger pages, it was already ok, because the bucket size (64K) is
  smaller than a page.
* fix some API comments in the mempool header file

v2

* update the octeontx2 driver to keep the alignment constraint (issue
  seen by Vamsi)
* add a new patch to use RTE_MEMPOOL_ALIGN (Andrew)
* fix initialization of the for loop in rte_mempool_populate_virt()
  (Andrew)
* use rte_mempool_populate_iova() if mz_flags has
  RTE_MEMZONE_IOVA_CONTIG (Andrew)
* check the rte_mempool_get_page_size() return value (Andrew)
* some other minor style improvements

rfc -> v1

* remove the first cleanup patch, it was pushed separately:
  a2b5a8722f20 ("mempool: clarify default populate function")
* add a missing change in rte_mempool_op_calc_mem_size_default()
* allow unaligned addr/len in populate virt
* better split the patches
* try to better explain the change
* use DPDK align macros when relevant

Olivier Matz (7):
  mempool: allow unaligned addr/len in populate virt
  mempool: reduce wasted space on mempool populate
  mempool: remove optimistic IOVA-contiguous allocation
  mempool: introduce function to get mempool page size
  mempool: introduce helpers for populate and calc mem size
  mempool: prevent objects from being across pages
  mempool: use the specific macro for object alignment

 drivers/mempool/bucket/Makefile               |   1 +
 drivers/mempool/bucket/meson.build            |   2 +
 drivers/mempool/bucket/rte_mempool_bucket.c   |  10 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |   4 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   4 +-
 drivers/mempool/octeontx/Makefile             |   2 +
 drivers/mempool/octeontx/meson.build          |   2 +
 .../mempool/octeontx/rte_mempool_octeontx.c   |  21 +--
 drivers/mempool/octeontx2/Makefile            |   2 +
 drivers/mempool/octeontx2/meson.build         |   2 +
 drivers/mempool/octeontx2/otx2_mempool_ops.c  |  21 ++-
 lib/librte_mempool/rte_mempool.c              | 147 +++++++----------
 lib/librte_mempool/rte_mempool.h              | 114 ++++++++++++--
 lib/librte_mempool/rte_mempool_ops.c          |   4 +-
 lib/librte_mempool/rte_mempool_ops_default.c  | 113 +++++++++++---
 lib/librte_mempool/rte_mempool_version.map    |   6 +
 16 files changed, 312 insertions(+), 143 deletions(-)
Comments
On Tue, Nov 05, 2019 at 04:36:59PM +0100, Olivier Matz wrote:
> KNI supposes that mbufs are contiguous in kernel virtual memory. This
> may not be true when using the IOVA=VA mode. To fix this, a possibility
> is to ensure that objects do not cross page boundaries in mempool. This
> patchset implements this in the last patch (5/5).
[...]
> v4
>
> * remove useless comments in Makefiles and meson.build (sugg by David)
> * add EXPERIMENTAL banner on new functions in API comments (David)
> * sort by version in rte_mempool_version.map (David)
> * remove duplicated -DALLOW_EXPERIMENTAL_API flag in octeontx2
>   mempool driver
> * enhance API comments for new helpers

I forgot:

* move bucket mempool driver modifications from patch 7 to patch 6
  (seen by Andrew)

[rest of the quoted changelog and diffstat snipped]
05/11/2019 16:36, Olivier Matz:
> Olivier Matz (7):
>   mempool: allow unaligned addr/len in populate virt
>   mempool: reduce wasted space on mempool populate
>   mempool: remove optimistic IOVA-contiguous allocation
>   mempool: introduce function to get mempool page size
>   mempool: introduce helpers for populate and calc mem size
>   mempool: prevent objects from being across pages
>   mempool: use the specific macro for object alignment

Applied, thanks.

Release notes were added.