[v2] eal/mem: preallocate VA space in no-huge mode
Commit Message
When --no-huge mode is used, the memory is currently allocated with
mmap(NULL, ...). This is fine in most cases, but can fail when DPDK
is run on a machine whose IOMMU has a narrower address width than
the VA space, because we're not specifying an address hint for the
mmap() call.
Fix it by preallocating VA space before mapping it.
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Add unmap on unsuccessful mmap
I couldn't figure out which specific commit introduced the
issue, so there's no Fixes: tag. The most likely candidate is
the one that introduced the DMA mask support in the first place,
but I'm not sure.
lib/librte_eal/linux/eal/eal_memory.c | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
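To make the failure mode concrete, here is a minimal standalone sketch (not DPDK code; the 39-bit IOMMU width and the mapping size are made-up example values) of what the commit message describes: an unhinted mmap(NULL, ...) is free to return an address that a narrower IOMMU cannot express.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
        /* Example only: pretend the IOMMU can address 39 bits of VA. */
        const int iommu_bits = 39;
        const uint64_t dma_mask = ~((1ULL << iommu_bits) - 1);
        const size_t sz = 4096;

        /* Unhinted mapping: the kernel may pick any address, including
         * one above the IOMMU-addressable range.
         */
        void *addr = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (addr == MAP_FAILED)
                return 1;

        if (((uintptr_t)addr & dma_mask) != 0)
                printf("%p does not fit into a %d-bit DMA mask\n",
                                addr, iommu_bits);

        munmap(addr, sz);
        return 0;
}

Passing an address hint, or reserving the VA region up front as the patch does, keeps the mapping within a range that the DMA mask check can accept.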
Comments
24/01/2020 18:05, Anatoly Burakov:
> When --no-huge mode is used, the memory is currently allocated with
> mmap(NULL, ...). This is fine in most cases, but can fail when DPDK
> is run on a machine whose IOMMU has a narrower address width than
> the VA space, because we're not specifying an address hint for the
> mmap() call.
>
> Fix it by preallocating VA space before mapping it.
>
> Cc: stable@dpdk.org
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Applied, thanks
06/02/2020 16:39, Thomas Monjalon:
> 24/01/2020 18:05, Anatoly Burakov:
> > When --no-huge mode is used, the memory is currently allocated with
> > mmap(NULL, ...). This is fine in most cases, but can fail when DPDK
> > is run on a machine whose IOMMU has a narrower address width than
> > the VA space, because we're not specifying an address hint for the
> > mmap() call.
> >
> > Fix it by preallocating VA space before mapping it.
> >
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>
> Applied, thanks
Eventually dropped from DPDK 20.02-rc2 because it is breaking no-huge mode.
Sorry
@@ -1340,6 +1340,8 @@ eal_legacy_hugepage_init(void)
/* hugetlbfs can be disabled */
if (internal_config.no_hugetlbfs) {
+ void *prealloc_addr;
+ size_t mem_sz;
struct rte_memseg_list *msl;
int n_segs, cur_seg, fd, flags;
#ifdef MEMFD_SUPPORTED
@@ -1395,11 +1397,25 @@ eal_legacy_hugepage_init(void)
}
}
#endif
- addr = mmap(NULL, internal_config.memory, PROT_READ | PROT_WRITE,
- flags, fd, 0);
+ /* preallocate address space for the memory, so that it fits
+ * into the DMA mask.
+ */
+ mem_sz = internal_config.memory;
+ prealloc_addr = eal_get_virtual_area(
+ NULL, &mem_sz, page_sz, 0, 0);
+ if (prealloc_addr == NULL) {
+ RTE_LOG(ERR, EAL,
+ "%s: reserving memory area failed: "
+ "%s\n",
+ __func__, strerror(errno));
+ return -1;
+ }
+ addr = mmap(prealloc_addr, mem_sz,
+ PROT_READ | PROT_WRITE, flags | MAP_FIXED, fd, 0);
if (addr == MAP_FAILED) {
RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__,
strerror(errno));
+ munmap(prealloc_addr, mem_sz);
return -1;
}
msl->base_va = addr;
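For readers who do not know the EAL internals, the following simplified sketch shows the reserve-then-remap pattern that the hunk above implements. It is an approximation, not the DPDK code: eal_get_virtual_area() is stood in for by a plain PROT_NONE anonymous mapping (the real helper also applies EAL's own address hints and alignment), and the size and flags below are example values.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t mem_sz = 64 * 1024 * 1024;         /* example size */
        int flags = MAP_PRIVATE | MAP_ANONYMOUS;  /* stands in for the EAL flags */
        void *prealloc_addr, *addr;

        /* Step 1: reserve VA space only; no usable memory is mapped yet. */
        prealloc_addr = mmap(NULL, mem_sz, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (prealloc_addr == MAP_FAILED) {
                fprintf(stderr, "reserving memory area failed: %s\n",
                                strerror(errno));
                return -1;
        }

        /* Step 2: map real, writable memory over the reservation.
         * MAP_FIXED belongs in the flags argument; the file offset is 0.
         */
        addr = mmap(prealloc_addr, mem_sz, PROT_READ | PROT_WRITE,
                        flags | MAP_FIXED, -1, 0);
        if (addr == MAP_FAILED) {
                fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
                /* release the reservation so the VA space is not leaked */
                munmap(prealloc_addr, mem_sz);
                return -1;
        }

        /* ... use addr ..., then tear down */
        munmap(addr, mem_sz);
        return 0;
}

The details carried over from the patch are that MAP_FIXED goes into the flags argument so the final mapping lands exactly on the reserved range, and that the reservation is unmapped again if the second mmap() fails.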