[v8,1/3] vfio: revert changes for map contiguous areas in one go

Message ID 20210115073243.7025-2-ndabilpuram@marvell.com (mailing list archive)
State Accepted, archived
Delegated to: David Marchand
Series: fix issue with partial DMA unmap

Checks

ci/checkpatch: warning (coding style issues)

Commit Message

Nithin Dabilpuram Jan. 15, 2021, 7:32 a.m. UTC
In order to save DMA entries limited by the kernel both for external
memory and hugepage memory, an attempt was made to map physically
contiguous memory in one go. This cannot be done, as VFIO IOMMU type1
does not support partially unmapping a previously mapped memory
region, while the heap can request multi-page mappings and
partial unmappings.
Hence, to go back to the old method of mapping/unmapping at
memseg granularity, this commit reverts
commit d1c7c0cdf7ba ("vfio: map contiguous areas in one go")

Also add documentation on which module parameter needs to be used
to increase the per-container DMA map limit for VFIO.

Fixes: d1c7c0cdf7ba ("vfio: map contiguous areas in one go")
Cc: anatoly.burakov@intel.com
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: David Christensen <drc@linux.vnet.ibm.com>
---
 doc/guides/linux_gsg/linux_drivers.rst | 10 ++++++
 lib/librte_eal/linux/eal_vfio.c        | 59 +++++-----------------------------
 2 files changed, 18 insertions(+), 51 deletions(-)
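
For context, here is a minimal sketch of the type1 limitation described above, written against the raw VFIO ioctls. The container setup, the pinned IOVA-contiguous hugepages behind `va`, and most error handling are assumed away; this is an illustration of the kernel behavior, not DPDK code.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Map two IOVA-contiguous 2M hugepages in one go, then try to unmap
 * only the first one. `container` is an open, configured VFIO
 * container fd. */
static int try_partial_unmap(int container, void *va, uint64_t iova)
{
	struct vfio_iommu_type1_dma_map dma_map = {
		.argsz = sizeof(dma_map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (uintptr_t)va,
		.iova = iova,
		.size = 2 * (2UL << 20),	/* one 4M mapping */
	};
	struct vfio_iommu_type1_dma_unmap dma_unmap = {
		.argsz = sizeof(dma_unmap),
		.iova = iova,
		.size = 2UL << 20,		/* first 2M page only */
	};

	if (ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map) < 0)
		return -1;
	/* Type1 refuses to split an existing mapping, so this unmap is
	 * expected to fail (typically with EINVAL): the reason the
	 * revert goes back to one mapping per memseg. */
	return ioctl(container, VFIO_IOMMU_UNMAP_DMA, &dma_unmap);
}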
  

Comments

David Marchand March 5, 2021, 7:50 a.m. UTC | #1
On Fri, Jan 15, 2021 at 8:33 AM Nithin Dabilpuram
<ndabilpuram@marvell.com> wrote:
>
> In order to save DMA entries limited by the kernel both for external
> memory and hugepage memory, an attempt was made to map physically
> contiguous memory in one go. This cannot be done, as VFIO IOMMU type1
> does not support partially unmapping a previously mapped memory
> region, while the heap can request multi-page mappings and
> partial unmappings.
> Hence, to go back to the old method of mapping/unmapping at
> memseg granularity, this commit reverts
> commit d1c7c0cdf7ba ("vfio: map contiguous areas in one go")
>
> Also add documentation on which module parameter needs to be used
> to increase the per-container DMA map limit for VFIO.
>
> Fixes: d1c7c0cdf7ba ("vfio: map contiguous areas in one go")
> Cc: anatoly.burakov@intel.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
> Acked-by: David Christensen <drc@linux.vnet.ibm.com>

There is a regression reported in bz: https://bugs.dpdk.org/show_bug.cgi?id=649

I assigned it to Anatoly for now.
Nithin, can you have a look too?

Thanks.
  
Burakov, Anatoly March 5, 2021, 1:54 p.m. UTC | #2
On 05-Mar-21 7:50 AM, David Marchand wrote:
> On Fri, Jan 15, 2021 at 8:33 AM Nithin Dabilpuram
> <ndabilpuram@marvell.com> wrote:
>> [commit message snipped]
> 
> There is a regression reported in bz: https://bugs.dpdk.org/show_bug.cgi?id=649
> 
> I assigned it to Anatoly for now.
> > Nithin, can you have a look too?
> 
> Thanks.
> 
> 

I've responded on the bug tracker as well, but to repeat it here: this 
is not a regression, this is intended behavior. We cannot do anything 
about this.
  
Nithin Dabilpuram March 5, 2021, 3:50 p.m. UTC | #3
On Fri, Mar 05, 2021 at 01:54:34PM +0000, Burakov, Anatoly wrote:
> On 05-Mar-21 7:50 AM, David Marchand wrote:
> > On Fri, Jan 15, 2021 at 8:33 AM Nithin Dabilpuram
> > <ndabilpuram@marvell.com> wrote:
> > > [commit message snipped]
> > 
> > There is a regression reported in bz: https://bugs.dpdk.org/show_bug.cgi?id=649
> > 
> > I assigned it to Anatoly for now.
> > Nithin, can you have a look too?
> > 
> > Thanks.
> > 
> > 
> 
> I've responded on the bug tracker as well, but to repeat it here: this is
> not a regression, this is intended behavior. We cannot do anything about
> this.

To add, for the test case to pass, either the limits have to be increased, or "--mp-alloc=xmemhuge"
has to be used instead of "--mp-alloc=xmem" (which forces the system page size), or the total mbuf
count has to be reduced to lower the page count.

> 
> -- 
> Thanks,
> Anatoly
  
Burakov, Anatoly April 1, 2021, 11:27 a.m. UTC | #4
On 05-Mar-21 3:50 PM, Nithin Dabilpuram wrote:
> On Fri, Mar 05, 2021 at 01:54:34PM +0000, Burakov, Anatoly wrote:
>> On 05-Mar-21 7:50 AM, David Marchand wrote:
>>> On Fri, Jan 15, 2021 at 8:33 AM Nithin Dabilpuram
>>> <ndabilpuram@marvell.com> wrote:
>>>> [commit message snipped]
>>>
>>> There is a regression reported in bz: https://bugs.dpdk.org/show_bug.cgi?id=649
>>>
>>> I assigned it to Anatoly for now.
>>> Nithin, can you have a look too?
>>>
>>> Thanks.
>>>
>>>
>>
>> I've responded on the bug tracker as well, but to repeat it here: this is
>> not a regression, this is intended behavior. We cannot do anything about
>> this.
> 
> To add, for the test case to pass, either the limits have to be increased, or "--mp-alloc=xmemhuge"
> has to be used instead of "--mp-alloc=xmem" (which forces the system page size), or the total mbuf
> count has to be reduced to lower the page count.
> 

Technically, one is not a replacement for the other, so the correct way 
to handle it is to increase the limits, not to use xmemhuge.
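
As a hedged illustration of the two limits discussed in this thread, here is a small standalone C program that reads both. The sysfs path is an assumption derived from the module parameter name; raising the per-container cap itself is done at module load time, e.g. via `modprobe vfio_iommu_type1 dma_entry_limit=<n>`.

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;
	FILE *f;
	unsigned int entries;

	/* Limit 1: locked-memory rlimit, which caps how much memory the
	 * process may pin for DMA (raise with setrlimit() or ulimit -l). */
	if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0)
		printf("RLIMIT_MEMLOCK: cur=%llu max=%llu\n",
			(unsigned long long)rl.rlim_cur,
			(unsigned long long)rl.rlim_max);

	/* Limit 2: per-container DMA entry cap added in kernel 5.1
	 * (path assumed to mirror the dma_entry_limit parameter). */
	f = fopen("/sys/module/vfio_iommu_type1/parameters/dma_entry_limit",
			"r");
	if (f != NULL) {
		if (fscanf(f, "%u", &entries) == 1)
			printf("dma_entry_limit: %u\n", entries);
		fclose(f);
	}
	return 0;
}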
  

Patch

diff --git a/doc/guides/linux_gsg/linux_drivers.rst b/doc/guides/linux_gsg/linux_drivers.rst
index 90635a4..9a662a7 100644
--- a/doc/guides/linux_gsg/linux_drivers.rst
+++ b/doc/guides/linux_gsg/linux_drivers.rst
@@ -25,6 +25,16 @@  To make use of VFIO, the ``vfio-pci`` module must be loaded:
 VFIO kernel is usually present by default in all distributions,
 however please consult your distributions documentation to make sure that is the case.
 
+For DMA mapping of either external memory or hugepages, the VFIO interface is used.
+VFIO does not support partially unmapping memory that was mapped in one go. Hence,
+DPDK's memory is mapped at hugepage granularity or at system page granularity. The
+number of DMA mappings is limited by the kernel through the locked memory limit of
+a process (rlimit) for system/hugepage memory. Another per-container overall limit,
+applicable to both external memory and system memory, was added in kernel 5.1 and
+is defined by the VFIO module parameter ``dma_entry_limit``, with a default value
+of 64K. When an application runs out of DMA entries, these limits need to be
+adjusted to raise the allowed number of mappings.
+
 Since Linux version 5.7,
 the ``vfio-pci`` module supports the creation of virtual functions.
 After the PF is bound to ``vfio-pci`` module,
diff --git a/lib/librte_eal/linux/eal_vfio.c b/lib/librte_eal/linux/eal_vfio.c
index 0500824..64b134d 100644
--- a/lib/librte_eal/linux/eal_vfio.c
+++ b/lib/librte_eal/linux/eal_vfio.c
@@ -517,11 +517,9 @@  static void
 vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
 		void *arg __rte_unused)
 {
-	rte_iova_t iova_start, iova_expected;
 	struct rte_memseg_list *msl;
 	struct rte_memseg *ms;
 	size_t cur_len = 0;
-	uint64_t va_start;
 
 	msl = rte_mem_virt2memseg_list(addr);
 
@@ -539,63 +537,22 @@  vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
 
 	/* memsegs are contiguous in memory */
 	ms = rte_mem_virt2memseg(addr, msl);
-
-	/*
-	 * This memory is not guaranteed to be contiguous, but it still could
-	 * be, or it could have some small contiguous chunks. Since the number
-	 * of VFIO mappings is limited, and VFIO appears to not concatenate
-	 * adjacent mappings, we have to do this ourselves.
-	 *
-	 * So, find contiguous chunks, then map them.
-	 */
-	va_start = ms->addr_64;
-	iova_start = iova_expected = ms->iova;
 	while (cur_len < len) {
-		bool new_contig_area = ms->iova != iova_expected;
-		bool last_seg = (len - cur_len) == ms->len;
-		bool skip_last = false;
-
-		/* only do mappings when current contiguous area ends */
-		if (new_contig_area) {
-			if (type == RTE_MEM_EVENT_ALLOC)
-				vfio_dma_mem_map(default_vfio_cfg, va_start,
-						iova_start,
-						iova_expected - iova_start, 1);
-			else
-				vfio_dma_mem_map(default_vfio_cfg, va_start,
-						iova_start,
-						iova_expected - iova_start, 0);
-			va_start = ms->addr_64;
-			iova_start = ms->iova;
-		}
 		/* some memory segments may have invalid IOVA */
 		if (ms->iova == RTE_BAD_IOVA) {
 			RTE_LOG(DEBUG, EAL, "Memory segment at %p has bad IOVA, skipping\n",
 					ms->addr);
-			skip_last = true;
+			goto next;
 		}
-		iova_expected = ms->iova + ms->len;
+		if (type == RTE_MEM_EVENT_ALLOC)
+			vfio_dma_mem_map(default_vfio_cfg, ms->addr_64,
+					ms->iova, ms->len, 1);
+		else
+			vfio_dma_mem_map(default_vfio_cfg, ms->addr_64,
+					ms->iova, ms->len, 0);
+next:
 		cur_len += ms->len;
 		++ms;
-
-		/*
-		 * don't count previous segment, and don't attempt to
-		 * dereference a potentially invalid pointer.
-		 */
-		if (skip_last && !last_seg) {
-			iova_expected = iova_start = ms->iova;
-			va_start = ms->addr_64;
-		} else if (!skip_last && last_seg) {
-			/* this is the last segment and we're not skipping */
-			if (type == RTE_MEM_EVENT_ALLOC)
-				vfio_dma_mem_map(default_vfio_cfg, va_start,
-						iova_start,
-						iova_expected - iova_start, 1);
-			else
-				vfio_dma_mem_map(default_vfio_cfg, va_start,
-						iova_start,
-						iova_expected - iova_start, 0);
-		}
 	}
 }
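
For users of external memory, here is a minimal sketch of the mapping discipline this revert implies: map at page granularity, so that any later partial free can be unmapped with exactly the granularity it was mapped with. `rte_vfio_container_dma_map()` and `rte_vfio_container_dma_unmap()` are the public EAL API; the helper name and the assumption of IOVA-contiguous, page-aligned memory are illustrative.

#include <stdint.h>
#include <rte_vfio.h>

/* Map `len` bytes of external memory page by page instead of as one
 * large mapping, so individual pages can be unmapped later. */
static int map_ext_mem(int container_fd, uint64_t va, uint64_t iova,
		uint64_t len, uint64_t pg_sz)
{
	uint64_t off;

	for (off = 0; off < len; off += pg_sz) {
		if (rte_vfio_container_dma_map(container_fd, va + off,
				iova + off, pg_sz) != 0) {
			/* unwind the pages mapped so far */
			while (off > 0) {
				off -= pg_sz;
				rte_vfio_container_dma_unmap(container_fd,
					va + off, iova + off, pg_sz);
			}
			return -1;
		}
	}
	return 0;
}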