From patchwork Wed Oct 31 17:29:25 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Lucero X-Patchwork-Id: 47625 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 1B7284CA2; Wed, 31 Oct 2018 18:29:34 +0100 (CET) Received: from netronome.com (host-79-78-33-110.static.as9105.net [79.78.33.110]) by dpdk.org (Postfix) with ESMTP id D888E10A3 for ; Wed, 31 Oct 2018 18:29:31 +0100 (CET) Received: from netronome.com (localhost [127.0.0.1]) by netronome.com (8.15.2/8.15.2/Debian-10) with ESMTP id w9VHTV8Z011949 for ; Wed, 31 Oct 2018 17:29:31 GMT Received: (from root@localhost) by netronome.com (8.15.2/8.15.2/Submit) id w9VHTVtn011948 for dev@dpdk.org; Wed, 31 Oct 2018 17:29:31 GMT From: Alejandro Lucero To: dev@dpdk.org Date: Wed, 31 Oct 2018 17:29:25 +0000 Message-Id: <20181031172931.11894-2-alejandro.lucero@netronome.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20181031172931.11894-1-alejandro.lucero@netronome.com> References: <20181031172931.11894-1-alejandro.lucero@netronome.com> Subject: [dpdk-dev] [PATCH 1/7] mem: fix call to DMA mask check X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The param needs to be the maskbits and not the mask. 
Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA") Signed-off-by: Alejandro Lucero Acked-by: Anatoly Burakov --- lib/librte_eal/common/malloc_heap.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c index 1973b6e6e..0adab62ae 100644 --- a/lib/librte_eal/common/malloc_heap.c +++ b/lib/librte_eal/common/malloc_heap.c @@ -323,8 +323,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, } if (mcfg->dma_maskbits) { - mask = ~((1ULL << mcfg->dma_maskbits) - 1); - if (rte_eal_check_dma_mask(mask)) { + if (rte_eal_check_dma_mask(mcfg->dma_maskbits)) { RTE_LOG(ERR, EAL, "%s(): couldn't allocate memory due to DMA mask\n", __func__); From patchwork Wed Oct 31 17:29:26 2018 X-Patchwork-Submitter: Alejandro Lucero X-Patchwork-Id: 47626 From: Alejandro Lucero To: dev@dpdk.org Date: Wed, 31 Oct 2018 17:29:26 +0000 Message-Id: <20181031172931.11894-3-alejandro.lucero@netronome.com> In-Reply-To: <20181031172931.11894-1-alejandro.lucero@netronome.com> References:
<20181031172931.11894-1-alejandro.lucero@netronome.com> Subject: [dpdk-dev] [PATCH 2/7] mem: use proper prefix Current name rte_eal_check_dma_mask does not follow the naming used in the rest of the file. Signed-off-by: Alejandro Lucero --- doc/guides/rel_notes/release_18_11.rst | 2 +- drivers/bus/pci/linux/pci.c | 2 +- drivers/net/nfp/nfp_net.c | 2 +- lib/librte_eal/common/eal_common_memory.c | 4 ++-- lib/librte_eal/common/include/rte_memory.h | 2 +- lib/librte_eal/common/malloc_heap.c | 2 +- lib/librte_eal/rte_eal_version.map | 2 +- 7 files changed, 8 insertions(+), 8 deletions(-) diff --git a/doc/guides/rel_notes/release_18_11.rst b/doc/guides/rel_notes/release_18_11.rst index 376128f68..11a27405c 100644 --- a/doc/guides/rel_notes/release_18_11.rst +++ b/doc/guides/rel_notes/release_18_11.rst @@ -63,7 +63,7 @@ New Features * **Added check for ensuring allocated memory addressable by devices.** Some devices can have addressing limitations so a new function, - ``rte_eal_check_dma_mask``, has been added for checking allocated memory is + ``rte_mem_check_dma_mask``, has been added for checking allocated memory is not out of the device range. Because now memory can be dynamically allocated after initialization, a dma mask is kept and any new allocated memory will be checked out against that dma mask and rejected if out of range. If more than diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c index 45c24ef7e..0a81e063b 100644 --- a/drivers/bus/pci/linux/pci.c +++ b/drivers/bus/pci/linux/pci.c @@ -590,7 +590,7 @@ pci_one_device_iommu_support_va(struct rte_pci_device *dev) mgaw = ((vtd_cap_reg & VTD_CAP_MGAW_MASK) >> VTD_CAP_MGAW_SHIFT) + 1; - return rte_eal_check_dma_mask(mgaw) == 0 ?
true : false; + return rte_mem_check_dma_mask(mgaw) == 0 ? true : false; } #elif defined(RTE_ARCH_PPC_64) static bool diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c index bab1f68eb..54c6da924 100644 --- a/drivers/net/nfp/nfp_net.c +++ b/drivers/net/nfp/nfp_net.c @@ -2703,7 +2703,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); /* NFP can not handle DMA addresses requiring more than 40 bits */ - if (rte_eal_check_dma_mask(40)) { + if (rte_mem_check_dma_mask(40)) { RTE_LOG(ERR, PMD, "device %s can not be used:", pci_dev->device.name); RTE_LOG(ERR, PMD, "\trestricted dma mask to 40 bits!\n"); diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c index 12dcedf5c..e0f08f39a 100644 --- a/lib/librte_eal/common/eal_common_memory.c +++ b/lib/librte_eal/common/eal_common_memory.c @@ -49,7 +49,7 @@ static uint64_t system_page_sz; * Current known limitations are 39 or 40 bits. Setting the starting address * at 4GB implies there are 508GB or 1020GB for mapping the available * hugepages. This is likely enough for most systems, although a device with - * addressing limitations should call rte_eal_check_dma_mask for ensuring all + * addressing limitations should call rte_mem_check_dma_mask for ensuring all * memory is within supported range. 
*/ static uint64_t baseaddr = 0x100000000; @@ -447,7 +447,7 @@ check_iova(const struct rte_memseg_list *msl __rte_unused, /* check memseg iovas are within the required range based on dma mask */ int __rte_experimental -rte_eal_check_dma_mask(uint8_t maskbits) +rte_mem_check_dma_mask(uint8_t maskbits) { struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config; uint64_t mask; diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h index ce9370582..ad3f3cfb0 100644 --- a/lib/librte_eal/common/include/rte_memory.h +++ b/lib/librte_eal/common/include/rte_memory.h @@ -464,7 +464,7 @@ unsigned rte_memory_get_nchannel(void); unsigned rte_memory_get_nrank(void); /* check memsegs iovas are within a range based on dma mask */ -int __rte_experimental rte_eal_check_dma_mask(uint8_t maskbits); +int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits); /** * Drivers based on uio will not load unless physical diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c index 0adab62ae..7d423089d 100644 --- a/lib/librte_eal/common/malloc_heap.c +++ b/lib/librte_eal/common/malloc_heap.c @@ -323,7 +323,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, } if (mcfg->dma_maskbits) { - if (rte_eal_check_dma_mask(mcfg->dma_maskbits)) { + if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) { RTE_LOG(ERR, EAL, "%s(): couldn't allocate memory due to DMA mask\n", __func__); diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map index 04f624246..ef8126a97 100644 --- a/lib/librte_eal/rte_eal_version.map +++ b/lib/librte_eal/rte_eal_version.map @@ -295,7 +295,7 @@ EXPERIMENTAL { rte_devargs_parsef; rte_devargs_remove; rte_devargs_type_count; - rte_eal_check_dma_mask; + rte_mem_check_dma_mask; rte_eal_cleanup; rte_fbarray_attach; rte_fbarray_destroy; From patchwork Wed Oct 31 17:29:27 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: Alejandro Lucero X-Patchwork-Id: 47628 From: Alejandro Lucero To: dev@dpdk.org Date: Wed, 31 Oct 2018 17:29:27 +0000 Message-Id: <20181031172931.11894-4-alejandro.lucero@netronome.com> In-Reply-To: <20181031172931.11894-1-alejandro.lucero@netronome.com> References: <20181031172931.11894-1-alejandro.lucero@netronome.com> Subject: [dpdk-dev] [PATCH 3/7] mem: add function for setting DMA mask This patch adds the possibility of setting a dma mask to be used once the memory initialization is done. This is currently needed when IOVA mode is set by PCI-related code and an x86 IOMMU hardware unit is present. Current code calls rte_mem_check_dma_mask but it is wrong to do so at that point because the memory has not been initialized yet.
Signed-off-by: Alejandro Lucero --- lib/librte_eal/common/eal_common_memory.c | 10 ++++++++++ lib/librte_eal/common/include/rte_memory.h | 10 ++++++++++ lib/librte_eal/rte_eal_version.map | 1 + 3 files changed, 21 insertions(+) diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c index e0f08f39a..24b72fcb0 100644 --- a/lib/librte_eal/common/eal_common_memory.c +++ b/lib/librte_eal/common/eal_common_memory.c @@ -480,6 +480,16 @@ rte_mem_check_dma_mask(uint8_t maskbits) return 0; } +/* set dma mask to use when memory initialization is done */ +void __rte_experimental +rte_mem_set_dma_mask(uint8_t maskbits) +{ + struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config; + + mcfg->dma_maskbits = mcfg->dma_maskbits == 0 ? maskbits : + RTE_MIN(mcfg->dma_maskbits, maskbits); +} + /* return the number of memory channels */ unsigned rte_memory_get_nchannel(void) { diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h index ad3f3cfb0..eff028db1 100644 --- a/lib/librte_eal/common/include/rte_memory.h +++ b/lib/librte_eal/common/include/rte_memory.h @@ -466,6 +466,16 @@ unsigned rte_memory_get_nrank(void); /* check memsegs iovas are within a range based on dma mask */ int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Set dma mask to use once memory initialization is done. + * Previous function rte_mem_check_dma_mask can not be used + * safely until memory has been initialized. + */ +void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits); + /** * Drivers based on uio will not load unless physical * addresses are obtainable.
It is only possible to get diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map index ef8126a97..ae24b5c73 100644 --- a/lib/librte_eal/rte_eal_version.map +++ b/lib/librte_eal/rte_eal_version.map @@ -296,6 +296,7 @@ EXPERIMENTAL { rte_devargs_remove; rte_devargs_type_count; rte_mem_check_dma_mask; + rte_mem_set_dma_mask; rte_eal_cleanup; rte_fbarray_attach; rte_fbarray_destroy; From patchwork Wed Oct 31 17:29:28 2018 X-Patchwork-Submitter: Alejandro Lucero X-Patchwork-Id: 47627 From: Alejandro Lucero To: dev@dpdk.org Date: Wed, 31 Oct 2018 17:29:28 +0000 Message-Id: <20181031172931.11894-5-alejandro.lucero@netronome.com> In-Reply-To: <20181031172931.11894-1-alejandro.lucero@netronome.com> References: <20181031172931.11894-1-alejandro.lucero@netronome.com> Subject: [dpdk-dev] [PATCH 4/7] bus/pci: avoid call to DMA mask check Calling rte_mem_check_dma_mask when memory has
not been initialized yet is wrong. This patch use rte_mem_set_dma_mask instead. Once memory initialization is done, the dma mask set will be used for checking memory mapped is within the specified mask. Fixes: fe822eb8c565 ("bus/pci: use IOVA DMA mask check when setting IOVA mode") Signed-off-by: Alejandro Lucero Acked-by: Anatoly Burakov --- drivers/bus/pci/linux/pci.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c index 0a81e063b..d87384c72 100644 --- a/drivers/bus/pci/linux/pci.c +++ b/drivers/bus/pci/linux/pci.c @@ -590,7 +590,16 @@ pci_one_device_iommu_support_va(struct rte_pci_device *dev) mgaw = ((vtd_cap_reg & VTD_CAP_MGAW_MASK) >> VTD_CAP_MGAW_SHIFT) + 1; - return rte_mem_check_dma_mask(mgaw) == 0 ? true : false; + /* + * Assuming there is no limitation by now. We can not know at this point + * because the memory has not been initialized yet. Setting the dma mask + * will force a check once memory initialization is done. We can not do + * a fallback to IOVA PA now, but if the dma check fails, the error + * message should advice for using '--iova-mode pa' if IOVA VA is the + * current mode. 
+ */ + rte_mem_set_dma_mask(mgaw); + return true; } #elif defined(RTE_ARCH_PPC_64) static bool From patchwork Wed Oct 31 17:29:29 2018 X-Patchwork-Submitter: Alejandro Lucero X-Patchwork-Id: 47630 From: Alejandro Lucero To: dev@dpdk.org Date: Wed, 31 Oct 2018 17:29:29 +0000 Message-Id: <20181031172931.11894-6-alejandro.lucero@netronome.com> In-Reply-To: <20181031172931.11894-1-alejandro.lucero@netronome.com> References: <20181031172931.11894-1-alejandro.lucero@netronome.com> Subject: [dpdk-dev] [PATCH 5/7] mem: modify error message for DMA mask check If the DMA mask check shows mapped memory out of the supported range specified by the DMA mask, nothing can be done but report the error and return. This can mean the app is not executed at all, or that dynamic memory allocation is precluded once the app is running.
In any case, we can advise the user to force IOVA as PA if IOVA is currently VA and the user is root. Signed-off-by: Alejandro Lucero --- lib/librte_eal/common/malloc_heap.c | 35 +++++++++++++++++++++++++---- 1 file changed, 31 insertions(+), 4 deletions(-) diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c index 7d423089d..711622f19 100644 --- a/lib/librte_eal/common/malloc_heap.c +++ b/lib/librte_eal/common/malloc_heap.c @@ -5,8 +5,10 @@ #include #include #include +#include #include #include +#include #include #include @@ -294,7 +296,6 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, size_t alloc_sz; int allocd_pages; void *ret, *map_addr; - uint64_t mask; alloc_sz = (size_t)pg_sz * n_segs; @@ -322,11 +323,37 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, goto fail; } + /* Once we have all the memseg lists configured, if there is a dma mask + * set, check iova addresses are not out of range. Otherwise the device + * setting the dma mask could have problems with the mapped memory. + * + * There are two situations when this can happen: + * 1) memory initialization + * 2) dynamic memory allocation + * + * For 1), an error when checking the dma mask implies the app can not + * be executed. For 2), it implies the new memory can not be added. + */ if (mcfg->dma_maskbits) { if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to DMA mask\n", - __func__); + /* Currently this can only happen if IOMMU is enabled + * with RTE_ARCH_X86. It is not safe to use this memory + * so returning an error here. + * + * If IOVA is VA, advise trying '--iova-mode pa', + * which could solve some situations when IOVA VA is not + * really needed.
+ */ + uid_t user = getuid(); + if ((rte_eal_iova_mode() == RTE_IOVA_VA) && user == 0) + RTE_LOG(ERR, EAL, + "%s(): couldn't allocate memory due to DMA mask.\n" + "Try with '--iova-mode pa'\n", + __func__); + else + RTE_LOG(ERR, EAL, + "%s(): couldn't allocate memory due to DMA mask\n", + __func__); goto fail; } } From patchwork Wed Oct 31 17:29:30 2018 X-Patchwork-Submitter: Alejandro Lucero X-Patchwork-Id: 47631 From: Alejandro Lucero To: dev@dpdk.org Date: Wed, 31 Oct 2018 17:29:30 +0000 Message-Id: <20181031172931.11894-7-alejandro.lucero@netronome.com> In-Reply-To: <20181031172931.11894-1-alejandro.lucero@netronome.com> References: <20181031172931.11894-1-alejandro.lucero@netronome.com> Subject: [dpdk-dev] [PATCH 6/7] mem: add safe and unsafe versions for checking DMA mask During memory initialization calling rte_mem_check_dma_mask leads to a deadlock because memory_hotplug_lock is locked by a
writer, the current code in execution, and rte_memseg_walk tries to lock as a reader. This patch adds safe and unsafe versions of the check, specifying whether memory_hotplug_lock needs to be acquired (the safe version) or not (the unsafe one). PMDs should use the safe version; only internal EAL memory code should use the unsafe one. Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA") Signed-off-by: Alejandro Lucero --- drivers/net/nfp/nfp_net.c | 2 +- lib/librte_eal/common/eal_common_memory.c | 24 +++++++++++++++--- lib/librte_eal/common/include/rte_memory.h | 29 +++++++++++++++++++--- lib/librte_eal/common/malloc_heap.c | 2 +- lib/librte_eal/rte_eal_version.map | 3 ++- 5 files changed, 51 insertions(+), 9 deletions(-) diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c index 54c6da924..72c2d3cbb 100644 --- a/drivers/net/nfp/nfp_net.c +++ b/drivers/net/nfp/nfp_net.c @@ -2703,7 +2703,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); /* NFP can not handle DMA addresses requiring more than 40 bits */ - if (rte_mem_check_dma_mask(40)) { + if (rte_mem_check_dma_mask_safe(40)) { RTE_LOG(ERR, PMD, "device %s can not be used:", pci_dev->device.name); RTE_LOG(ERR, PMD, "\trestricted dma mask to 40 bits!\n"); diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c index 24b72fcb0..2eb3eb48a 100644 --- a/lib/librte_eal/common/eal_common_memory.c +++ b/lib/librte_eal/common/eal_common_memory.c @@ -446,11 +446,12 @@ check_iova(const struct rte_memseg_list *msl __rte_unused, #endif /* check memseg iovas are within the required range based on dma mask */ -int __rte_experimental -rte_mem_check_dma_mask(uint8_t maskbits) +static int __rte_experimental +rte_mem_check_dma_mask(uint8_t maskbits, bool safe) { struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config; uint64_t mask; + int ret; /* sanity check */ if (maskbits >
MAX_DMA_MASK_BITS) { @@ -462,7 +463,12 @@ rte_mem_check_dma_mask(uint8_t maskbits) /* create dma mask */ mask = ~((1ULL << maskbits) - 1); - if (rte_memseg_walk(check_iova, &mask)) + if (safe) + ret = rte_memseg_walk(check_iova, &mask); + else + ret = rte_memseg_walk_thread_unsafe(check_iova, &mask); + + if (ret) /* * Dma mask precludes hugepage usage. * This device can not be used and we do not need to keep @@ -480,6 +486,18 @@ rte_mem_check_dma_mask(uint8_t maskbits) return 0; } +int __rte_experimental +rte_mem_check_dma_mask_safe(uint8_t maskbits) +{ + return rte_mem_check_dma_mask(maskbits, true); +} + +int __rte_experimental +rte_mem_check_dma_mask_unsafe(uint8_t maskbits) +{ + return rte_mem_check_dma_mask(maskbits, false); +} + /* set dma mask to use when memory initialization is done */ void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits) diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h index eff028db1..187a3c668 100644 --- a/lib/librte_eal/common/include/rte_memory.h +++ b/lib/librte_eal/common/include/rte_memory.h @@ -463,15 +463,38 @@ unsigned rte_memory_get_nchannel(void); */ unsigned rte_memory_get_nrank(void); -/* check memsegs iovas are within a range based on dma mask */ -int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Check memsegs iovas are within a range based on dma mask. + * + * @param maskbits + * Address width to check against. + */ +int __rte_experimental rte_mem_check_dma_mask_safe(uint8_t maskbits); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Check memsegs iovas are within a range based on dma mask without acquiring + * memory_hotplug_lock first. + * + * This function is just for EAL core memory internal use. Drivers should + * use the safe version instead. + * + * @param maskbits + * Address width to check against.
+ */ +int __rte_experimental rte_mem_check_dma_mask_unsafe(uint8_t maskbits); /** * @warning * @b EXPERIMENTAL: this API may change without prior notice * * Set dma mask to use once memory initialization is done. - * Previous function rte_mem_check_dma_mask can not be used + * Previous functions rte_mem_check_dma_mask_safe/unsafe can not be used * safely until memory has been initialized. */ void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits); diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c index 711622f19..dd8b983e7 100644 --- a/lib/librte_eal/common/malloc_heap.c +++ b/lib/librte_eal/common/malloc_heap.c @@ -335,7 +335,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, * executed. For 2) implies the new memory can not be added. */ if (mcfg->dma_maskbits) { - if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) { + if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) { /* Currently this can only happen if IOMMU is enabled * with RTE_ARCH_X86. It is not safe to use this memory * so returning an error here.
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map index ae24b5c73..f863903b6 100644 --- a/lib/librte_eal/rte_eal_version.map +++ b/lib/librte_eal/rte_eal_version.map @@ -296,7 +296,8 @@ EXPERIMENTAL { rte_devargs_remove; rte_devargs_type_count; - rte_mem_check_dma_mask; + rte_mem_check_dma_mask_safe; + rte_mem_check_dma_mask_unsafe; rte_mem_set_dma_mask; rte_eal_cleanup; rte_fbarray_attach; rte_fbarray_destroy; From patchwork Wed Oct 31 17:29:31 2018 X-Patchwork-Submitter: Alejandro Lucero X-Patchwork-Id: 47629 From: Alejandro Lucero To: dev@dpdk.org Date: Wed, 31 Oct 2018 17:29:31 +0000 Message-Id: <20181031172931.11894-8-alejandro.lucero@netronome.com> In-Reply-To: <20181031172931.11894-1-alejandro.lucero@netronome.com> References: <20181031172931.11894-1-alejandro.lucero@netronome.com> Subject: [dpdk-dev] [PATCH 7/7] eal/mem: use DMA mask check for legacy memory If a
device reports addressing limitations through a dma mask, the IOVAs for mapped memory need to be checked to ensure correct functionality. Previous patches introduced this DMA check for the main memory code currently being used, but other options, like legacy memory and the no-hugepages mode, need to be considered as well. This patch adds the DMA check for those cases. Signed-off-by: Alejandro Lucero --- lib/librte_eal/linuxapp/eal/eal_memory.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c index fce86fda6..2a3a8c7a3 100644 --- a/lib/librte_eal/linuxapp/eal/eal_memory.c +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c @@ -1393,6 +1393,14 @@ eal_legacy_hugepage_init(void) addr = RTE_PTR_ADD(addr, (size_t)page_sz); } + if (mcfg->dma_maskbits) { + if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) { + RTE_LOG(ERR, EAL, + "%s(): couldn't allocate memory due to DMA mask\n", + __func__); + goto fail; + } + } return 0; } @@ -1628,6 +1636,15 @@ eal_legacy_hugepage_init(void) rte_fbarray_destroy(&msl->memseg_arr); } + if (mcfg->dma_maskbits) { + if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) { + RTE_LOG(ERR, EAL, + "%s(): couldn't allocate memory due to DMA mask\n", + __func__); + goto fail; + } + } + return 0; fail: