Message ID | 20191030143619.4007-1-olivier.matz@6wind.com (mailing list archive)
---|---
Headers |
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Cc: Anatoly Burakov <anatoly.burakov@intel.com>, Andrew Rybchenko <arybchenko@solarflare.com>, Ferruh Yigit <ferruh.yigit@linux.intel.com>, "Giridharan, Ganesan" <ggiridharan@rbbn.com>, Jerin Jacob Kollanukkaran <jerinj@marvell.com>, Kiran Kumar Kokkilagadda <kirankumark@marvell.com>, Stephen Hemminger <sthemmin@microsoft.com>, Thomas Monjalon <thomas@monjalon.net>, Vamsi Krishna Attunuru <vattunuru@marvell.com>
Date: Wed, 30 Oct 2019 15:36:13 +0100
Message-Id: <20191030143619.4007-1-olivier.matz@6wind.com>
In-Reply-To: <20190719133845.32432-1-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH v2 0/6] mempool: avoid objects allocations across pages
List-Id: DPDK patches and discussions <dev.dpdk.org>
Series | mempool: avoid objects allocations across pages
Message
Olivier Matz
Oct. 30, 2019, 2:36 p.m. UTC
KNI assumes that mbufs are contiguous in kernel virtual memory. This may not be true when using the IOVA=VA mode. To fix this, one possibility is to ensure that objects do not cross page boundaries in the mempool. This patchset implements this in patch 5/6.

The previous patches prepare the job:
- allow populating with an unaligned virtual area (1/6).
- reduce the space wasted in the mempool size calculation when not using the IOVA-contiguous allocation (2/6).
- remove the optimistic IOVA-contiguous allocation when populating the mempool (3/6): a VA-contiguous allocation does the job as well if we want to populate without crossing page boundaries, so simplify the mempool populate function.
- export a function to get the minimum page size used in a mempool (4/6).

Memory consumption impact when using hugepages:
- worst case: +~0.1% for a mbuf pool (objsize ~= 2368)
- best case: -50% if the pool size is just above the page size

With 4K pages in IOVA=VA mode, however, a mbuf pool could consume up to 75% more memory, because there will be only 1 mbuf per page. Not sure how common this use case is.

Caveat: this changes the behavior of the mempool (calc_mem_size and populate), and there is a small risk of breaking things, especially with alternate mempool drivers.
v2

* update octeontx2 driver to keep alignment constraint (issue seen by Vamsi)
* add a new patch to use RTE_MEMPOOL_ALIGN (Andrew)
* fix initialization of for loop in rte_mempool_populate_virt() (Andrew)
* use rte_mempool_populate_iova() if mz_flags has RTE_MEMZONE_IOVA_CONTIG (Andrew)
* check rte_mempool_get_page_size() return value (Andrew)
* some other minor style improvements

rfc -> v1

* remove first cleanup patch, it was pushed separately: a2b5a8722f20 ("mempool: clarify default populate function")
* add missing change in rte_mempool_op_calc_mem_size_default()
* allow unaligned addr/len in populate virt
* better split patches
* try to better explain the change
* use DPDK align macros when relevant

Olivier Matz (6):
  mempool: allow unaligned addr/len in populate virt
  mempool: reduce wasted space on mempool populate
  mempool: remove optimistic IOVA-contiguous allocation
  mempool: introduce function to get mempool page size
  mempool: prevent objects from being across pages
  mempool: use the specific macro for object alignment

 drivers/mempool/octeontx2/Makefile           |   3 +
 drivers/mempool/octeontx2/meson.build        |   3 +
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 119 +++++++++++++--
 lib/librte_mempool/rte_mempool.c             | 147 ++++++++----------
 lib/librte_mempool/rte_mempool.h             |  15 +-
 lib/librte_mempool/rte_mempool_ops.c         |   4 +-
 lib/librte_mempool/rte_mempool_ops_default.c |  60 ++++++--
 lib/librte_mempool/rte_mempool_version.map   |   1 +
 8 files changed, 236 insertions(+), 116 deletions(-)
Comments
Series Acked-by: Nipun Gupta <nipun.gupta@nxp.com>