[v2] net/memif: fix buffer overflow in zero copy Rx

Message ID c3a72bcb-24b0-4c42-9a07-427eadce029a@broadcom.com (mailing list archive)
State Under Review
Delegated to: Ferruh Yigit
Series [v2] net/memif: fix buffer overflow in zero copy Rx

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/github-robot: build success github build: passed
ci/intel-Functional success Functional PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-abi-testing success Testing PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-sample-apps-testing success Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/iol-marvell-Functional success Functional Testing PASS

Commit Message

Mihai Brodschi June 28, 2024, 9:01 p.m. UTC
  rte_pktmbuf_alloc_bulk is called by the zero-copy receiver to allocate
new mbufs to be provided to the sender. The allocated mbuf pointers
are stored in a ring, but the alloc function doesn't implement index
wrap-around, so it writes past the end of the array. This results in
memory corruption and duplicate mbufs being received.

Allocate 2x the space for the mbuf ring, so that the alloc function
has a contiguous array to write to, then copy the excess entries
to the start of the array.

Fixes: 43b815d88188 ("net/memif: support zero-copy slave")
Cc: stable@dpdk.org
Signed-off-by: Mihai Brodschi <mihai.brodschi@broadcom.com>
---
v2:
 - fix email formatting

---
 drivers/net/memif/rte_eth_memif.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
  

Comments

Patrick Robb July 1, 2024, 4:57 a.m. UTC | #1
I see this patch series had a CI testing failure, for the coremask DTS test
on Marvell CN10K. I don't think it relates to the contents of your
patch, though.

It had a timeout:

TestCoremask: Test Case test_individual_coremask Result FAILED:
TIMEOUT on ./arm64-native-linuxapp-gcc/app/test/dpdk-test  -c 0x8000
-n 2 --log-level="lib.eal,8"

So, I'm issuing a retest.
Recheck-request: iol-marvell-Functional
  
Ferruh Yigit July 7, 2024, 2:12 a.m. UTC | #2
On 6/28/2024 10:01 PM, Mihai Brodschi wrote:
> rte_pktmbuf_alloc_bulk is called by the zero-copy receiver to allocate
> new mbufs to be provided to the sender. The allocated mbuf pointers
> are stored in a ring, but the alloc function doesn't implement index
> wrap-around, so it writes past the end of the array. This results in
> memory corruption and duplicate mbufs being received.
> 

Hi Mihai,

I am not sure writing past the ring actually occurs.

As far as I can see, the intent is to keep the ring as full as possible:
when 'head' and 'tail' are initially 0, the refill fills the whole ring.
Later the tail moves and the emptied space is filled again, so head (in
modulo) is always just behind tail after a refill. In the next run, the
refill will only cover the part the tail moved, and this is calculated by
'n_slots'. As this is only the size of the gap, starting from 'head' (with
modulo) shouldn't pass the ring length.

Do you observe this issue in practice? If so, can you please provide your
backtrace and the numbers showing how to reproduce the issue?


> Allocate 2x the space for the mbuf ring, so that the alloc function
> has a contiguous array to write to, then copy the excess entries
> to the start of the array.
> 

Even if the issue is valid, I am not sure about the solution of doubling
the buffer memory, but let's confirm the issue first before discussing the
solution.

> Fixes: 43b815d88188 ("net/memif: support zero-copy slave")
> Cc: stable@dpdk.org
> Signed-off-by: Mihai Brodschi <mihai.brodschi@broadcom.com>
> ---
> v2:
>  - fix email formatting
> 
> ---
>  drivers/net/memif/rte_eth_memif.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
> index 16da22b5c6..3491c53cf1 100644
> --- a/drivers/net/memif/rte_eth_memif.c
> +++ b/drivers/net/memif/rte_eth_memif.c
> @@ -600,6 +600,10 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  	ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
>  	if (unlikely(ret < 0))
>  		goto no_free_mbufs;
> +	if (unlikely(n_slots > ring_size - (head & mask))) {
> +		rte_memcpy(mq->buffers, &mq->buffers[ring_size],
> +			(n_slots + (head & mask) - ring_size) * sizeof(struct rte_mbuf *));
> +	}
>  
>  	while (n_slots--) {
>  		s0 = head++ & mask;
> @@ -1245,8 +1249,12 @@ memif_init_queues(struct rte_eth_dev *dev)
>  		}
>  		mq->buffers = NULL;
>  		if (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) {
> +			/*
> +			 * Allocate 2x ring_size to reserve a contiguous array for
> +			 * rte_pktmbuf_alloc_bulk (to store allocated mbufs).
> +			 */
>  			mq->buffers = rte_zmalloc("bufs", sizeof(struct rte_mbuf *) *
> -						  (1 << mq->log2_ring_size), 0);
> +						  (1 << (mq->log2_ring_size + 1)), 0);
>  			if (mq->buffers == NULL)
>  				return -ENOMEM;
>  		}
  
Mihai Brodschi July 7, 2024, 5:50 a.m. UTC | #3
Hi Ferruh,

On 07/07/2024 05:12, Ferruh Yigit wrote:
> On 6/28/2024 10:01 PM, Mihai Brodschi wrote:
>> rte_pktmbuf_alloc_bulk is called by the zero-copy receiver to allocate
>> new mbufs to be provided to the sender. The allocated mbuf pointers
>> are stored in a ring, but the alloc function doesn't implement index
>> wrap-around, so it writes past the end of the array. This results in
>> memory corruption and duplicate mbufs being received.
>>
>
> Hi Mihai,
>
> I am not sure writing past the ring actually occurs.
>
> As far as I can see is to keep the ring full as much as possible, when
> initially 'head' and 'tail' are 0, it fills all ring.
> Later tails moves and emptied space filled again. So head (in modulo) is
> always just behind tail after refill. In next run, refill will only fill
> the part tail moved, and this is calculated by 'n_slots'. As this is
> only the size of the gap, starting from 'head' (with modulo) shouldn't
> pass the ring length.
>
> Do you observe this issue practically? If so can you please provide your
> backtrace and numbers that is showing how to reproduce the issue?

The alloc function writes starting from the ring's head, but the ring's
head can be located at the end of the ring's memory buffer (ring_size - 1).
The correct behavior would be to wrap around to the start of the buffer (0),
but the alloc function has no awareness of the fact that it's writing to a
ring, so it writes to ring_size, ring_size + 1, etc.

Let's look at the existing code:
We assume the ring size is 256 and we just received 32 packets.
The previous tail was at index 255, now it's at index 31.
The head is initially at index 255.

head = __atomic_load_n(&ring->head, __ATOMIC_RELAXED);	// head = 255
n_slots = ring_size - head + mq->last_tail;		// n_slots = 32

if (n_slots < 32)					// not taken
	goto no_free_mbufs;

ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
// This will write 32 mbuf pointers starting at index (head & mask) = 255.
// The ring size is 256, so apart from the first one all pointers will be
// written out of bounds (index 256 .. 286, when it should be 0 .. 30).

I can reproduce a crash 100% of the time with my application, but the output
is not very helpful, since it crashes elsewhere because of mempool corruption.
Applying this patch fixes the crashes completely.

>> Allocate 2x the space for the mbuf ring, so that the alloc function
>> has a contiguous array to write to, then copy the excess entries
>> to the start of the array.
>>
>
> Even issue is valid, I am not sure about solution to double to buffer
> memory, but lets confirm the issue first before discussing the solution.

Initially, I thought about splitting the call to rte_pktmbuf_alloc_bulk in two,
but I thought that might be bad for performance if the mempool is being used
concurrently from multiple threads.

If we want to use only one call to rte_pktmbuf_alloc_bulk, we need an array
to store the allocated mbuf pointers. This array must be of length ring_size,
since that's the maximum amount of mbufs which may be allocated in one go.
We need to copy the pointers from this array to the ring.

If we instead allocate twice the space for the ring, we can skip copying
the pointers which were written to the ring, and only copy those that were
written outside of its bounds.

>> Fixes: 43b815d88188 ("net/memif: support zero-copy slave")
>> Cc: stable@dpdk.org
>> Signed-off-by: Mihai Brodschi <mihai.brodschi@broadcom.com>
>> ---
>> v2:
>>  - fix email formatting
>>
>> ---
>>  drivers/net/memif/rte_eth_memif.c | 10 +++++++++-
>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
>> index 16da22b5c6..3491c53cf1 100644
>> --- a/drivers/net/memif/rte_eth_memif.c
>> +++ b/drivers/net/memif/rte_eth_memif.c
>> @@ -600,6 +600,10 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>>  	ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
>>  	if (unlikely(ret < 0))
>>  		goto no_free_mbufs;
>> +	if (unlikely(n_slots > ring_size - (head & mask))) {
>> +		rte_memcpy(mq->buffers, &mq->buffers[ring_size],
>> +			(n_slots + (head & mask) - ring_size) * sizeof(struct rte_mbuf *));
>> +	}
>>  
>>  	while (n_slots--) {
>>  		s0 = head++ & mask;
>> @@ -1245,8 +1249,12 @@ memif_init_queues(struct rte_eth_dev *dev)
>>  		}
>>  		mq->buffers = NULL;
>>  		if (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) {
>> +			/*
>> +			 * Allocate 2x ring_size to reserve a contiguous array for
>> +			 * rte_pktmbuf_alloc_bulk (to store allocated mbufs).
>> +			 */
>>  			mq->buffers = rte_zmalloc("bufs", sizeof(struct rte_mbuf *) *
>> -						  (1 << mq->log2_ring_size), 0);
>> +						  (1 << (mq->log2_ring_size + 1)), 0);
>>  			if (mq->buffers == NULL)
>>  				return -ENOMEM;
>>  		}
>

Apologies for sending this multiple times, I'm not familiar with mailing lists.
  
Ferruh Yigit July 7, 2024, 2:05 p.m. UTC | #4
On 7/7/2024 6:50 AM, Mihai Brodschi wrote:
> Hi Ferruh,
> 
> On 07/07/2024 05:12, Ferruh Yigit wrote:
>> On 6/28/2024 10:01 PM, Mihai Brodschi wrote:
>>> rte_pktmbuf_alloc_bulk is called by the zero-copy receiver to allocate
>>> new mbufs to be provided to the sender. The allocated mbuf pointers
>>> are stored in a ring, but the alloc function doesn't implement index
>>> wrap-around, so it writes past the end of the array. This results in
>>> memory corruption and duplicate mbufs being received.
>>>
>>
>> Hi Mihai,
>>
>> I am not sure writing past the ring actually occurs.
>>
>> As far as I can see is to keep the ring full as much as possible, when
>> initially 'head' and 'tail' are 0, it fills all ring.
>> Later tails moves and emptied space filled again. So head (in modulo) is
>> always just behind tail after refill. In next run, refill will only fill
>> the part tail moved, and this is calculated by 'n_slots'. As this is
>> only the size of the gap, starting from 'head' (with modulo) shouldn't
>> pass the ring length.
>>
>> Do you observe this issue practically? If so can you please provide your
>> backtrace and numbers that is showing how to reproduce the issue?
> 
> The alloc function writes starting from the ring's head, but the ring's
> head can be located at the end of the ring's memory buffer (ring_size - 1).
> The correct behavior would be to wrap around to the start of the buffer (0),
> but the alloc function has no awareness of the fact that it's writing to a
> ring, so it writes to ring_size, ring_size + 1, etc.
> 
> Let's look at the existing code:
> We assume the ring size is 256 and we just received 32 packets.
> The previous tail was at index 255, now it's at index 31.
> The head is initially at index 255.
> 
> head = __atomic_load_n(&ring->head, __ATOMIC_RELAXED);	// head = 255
> n_slots = ring_size - head + mq->last_tail;		// n_slots = 32
> 
> if (n_slots < 32)					// not taken
> 	goto no_free_mbufs;
> 
> ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
> // This will write 32 mbuf pointers starting at index (head & mask) = 255.
> // The ring size is 256, so apart from the first one all pointers will be
> // written out of bounds (index 256 .. 286, when it should be 0 .. 30).
> 

My expectation is that the numbers would be as follows:

Initially:
 size = 256
 head = 0
 tail = 0

In first refill:
 n_slots = 256
 head = 256
 tail = 0

Subsequent run that 32 slots used:
 head = 256
 tail = 32
 n_slots = 32
 rte_pktmbuf_alloc_bulk(mq, buf[head & mask], n_slots);
  head & mask = 0
  // So it fills first 32 elements of buffer, which is inbound

This will continue as above; the combination of filling only the gap and
masking head with 'mask' provides the wrapping required.


> I can reproduce a crash 100% of the time with my application, but the output
> is not very helpful, since it crashes elsewhere because of mempool corruption.
> Applying this patch fixes the crashes completely.
> 

An always-reproducible crash would mean the existing memif zero-copy Rx
is broken and nobody can use it, but I am doubtful that this is the case;
perhaps something special in your use case is triggering this issue.

@Jakub, can you please confirm that memif Rx zero copy is tested?

>>> Allocate 2x the space for the mbuf ring, so that the alloc function
>>> has a contiguous array to write to, then copy the excess entries
>>> to the start of the array.
>>>
>>
>> Even issue is valid, I am not sure about solution to double to buffer
>> memory, but lets confirm the issue first before discussing the solution.
> 
> Initially, I thought about splitting the call to rte_pktmbuf_alloc_bulk in two,
> but I thought that might be bad for performance if the mempool is being used
> concurrently from multiple threads.
> 
> If we want to use only one call to rte_pktmbuf_alloc_bulk, we need an array
> to store the allocated mbuf pointers. This array must be of length ring_size,
> since that's the maximum amount of mbufs which may be allocated in one go.
> We need to copy the pointers from this array to the ring.
> 
> If we instead allocate twice the space for the ring, we can skip copying
> the pointers which were written to the ring, and only copy those that were
> written outside of its bounds.
> 

The first thing that came to my mind was also using two
'rte_pktmbuf_alloc_bulk()' calls.
I can see why you prefer doubling the buffer size, but it comes with
copying overhead.
So both options come with some overhead, and I am not sure which one is
better; although I am leaning toward the first solution, we should do
some measurements to decide.

But let's agree on the problem first, before doing more work; I am still
not fully convinced that the original code is wrong.

>>> Fixes: 43b815d88188 ("net/memif: support zero-copy slave")
>>> Cc: stable@dpdk.org
>>> Signed-off-by: Mihai Brodschi <mihai.brodschi@broadcom.com>
>>> ---
>>> v2:
>>>  - fix email formatting
>>>
>>> ---
>>>  drivers/net/memif/rte_eth_memif.c | 10 +++++++++-
>>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
>>> index 16da22b5c6..3491c53cf1 100644
>>> --- a/drivers/net/memif/rte_eth_memif.c
>>> +++ b/drivers/net/memif/rte_eth_memif.c
>>> @@ -600,6 +600,10 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>>>  	ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
>>>  	if (unlikely(ret < 0))
>>>  		goto no_free_mbufs;
>>> +	if (unlikely(n_slots > ring_size - (head & mask))) {
>>> +		rte_memcpy(mq->buffers, &mq->buffers[ring_size],
>>> +			(n_slots + (head & mask) - ring_size) * sizeof(struct rte_mbuf *));
>>> +	}
>>>  
>>>  	while (n_slots--) {
>>>  		s0 = head++ & mask;
>>> @@ -1245,8 +1249,12 @@ memif_init_queues(struct rte_eth_dev *dev)
>>>  		}
>>>  		mq->buffers = NULL;
>>>  		if (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) {
>>> +			/*
>>> +			 * Allocate 2x ring_size to reserve a contiguous array for
>>> +			 * rte_pktmbuf_alloc_bulk (to store allocated mbufs).
>>> +			 */
>>>  			mq->buffers = rte_zmalloc("bufs", sizeof(struct rte_mbuf *) *
>>> -						  (1 << mq->log2_ring_size), 0);
>>> +						  (1 << (mq->log2_ring_size + 1)), 0);
>>>  			if (mq->buffers == NULL)
>>>  				return -ENOMEM;
>>>  		}
>>
> 
> Apologies for sending this multiple times, I'm not familiar with mailing lists.
> 
>
  
Mihai Brodschi July 7, 2024, 3:18 p.m. UTC | #5
On 07/07/2024 17:05, Ferruh Yigit wrote:
> On 7/7/2024 6:50 AM, Mihai Brodschi wrote:
>> Hi Ferruh,
>>
>> On 07/07/2024 05:12, Ferruh Yigit wrote:
>>> On 6/28/2024 10:01 PM, Mihai Brodschi wrote:
>>>> rte_pktmbuf_alloc_bulk is called by the zero-copy receiver to allocate
>>>> new mbufs to be provided to the sender. The allocated mbuf pointers
>>>> are stored in a ring, but the alloc function doesn't implement index
>>>> wrap-around, so it writes past the end of the array. This results in
>>>> memory corruption and duplicate mbufs being received.
>>>>
>>>
>>> Hi Mihai,
>>>
>>> I am not sure writing past the ring actually occurs.
>>>
>>> As far as I can see is to keep the ring full as much as possible, when
>>> initially 'head' and 'tail' are 0, it fills all ring.
>>> Later tails moves and emptied space filled again. So head (in modulo) is
>>> always just behind tail after refill. In next run, refill will only fill
>>> the part tail moved, and this is calculated by 'n_slots'. As this is
>>> only the size of the gap, starting from 'head' (with modulo) shouldn't
>>> pass the ring length.
>>>
>>> Do you observe this issue practically? If so can you please provide your
>>> backtrace and numbers that is showing how to reproduce the issue?
>>
>> The alloc function writes starting from the ring's head, but the ring's
>> head can be located at the end of the ring's memory buffer (ring_size - 1).
>> The correct behavior would be to wrap around to the start of the buffer (0),
>> but the alloc function has no awareness of the fact that it's writing to a
>> ring, so it writes to ring_size, ring_size + 1, etc.
>>
>> Let's look at the existing code:
>> We assume the ring size is 256 and we just received 32 packets.
>> The previous tail was at index 255, now it's at index 31.
>> The head is initially at index 255.
>>
>> head = __atomic_load_n(&ring->head, __ATOMIC_RELAXED);	// head = 255
>> n_slots = ring_size - head + mq->last_tail;		// n_slots = 32
>>
>> if (n_slots < 32)					// not taken
>> 	goto no_free_mbufs;
>>
>> ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
>> // This will write 32 mbuf pointers starting at index (head & mask) = 255.
>> // The ring size is 256, so apart from the first one all pointers will be
>> // written out of bounds (index 256 .. 286, when it should be 0 .. 30).
>>
> 
> My expectation is numbers should be like following:
> 
> Initially:
>  size = 256
>  head = 0
>  tail = 0
> 
> In first refill:
>  n_slots = 256
>  head = 256
>  tail = 0
> 
> Subsequent run that 32 slots used:
>  head = 256
>  tail = 32
>  n_slots = 32
>  rte_pktmbuf_alloc_bulk(mq, buf[head & mask], n_slots);
>   head & mask = 0
>   // So it fills first 32 elements of buffer, which is inbound
> 
> This will continue as above, combination of only gap filled and head
> masked with 'mask' provides the wrapping required.

If I understand correctly, this works only if eth_memif_rx_zc always processes
a number of packets which is a power of 2, so that the ring's head always wraps
around at the end of a refill loop, never in the middle of it.
Is there any reason this should be the case?
Maybe the tests don't trigger the crash because this condition holds true for them?

>> I can reproduce a crash 100% of the time with my application, but the output
>> is not very helpful, since it crashes elsewhere because of mempool corruption.
>> Applying this patch fixes the crashes completely.
>>
> 
> This causing always reproducible crash means existing memif zero copy Rx
> is broken and nobody can use it, but I am suspicions that this is the
> case, perhaps something special in your usecase triggering this issue.
> 
> @Jakup, can you please confirm that memif Rx zero copy is tested?
> 
>>>> Allocate 2x the space for the mbuf ring, so that the alloc function
>>>> has a contiguous array to write to, then copy the excess entries
>>>> to the start of the array.
>>>>
>>>
>>> Even issue is valid, I am not sure about solution to double to buffer
>>> memory, but lets confirm the issue first before discussing the solution.
>>
>> Initially, I thought about splitting the call to rte_pktmbuf_alloc_bulk in two,
>> but I thought that might be bad for performance if the mempool is being used
>> concurrently from multiple threads.
>>
>> If we want to use only one call to rte_pktmbuf_alloc_bulk, we need an array
>> to store the allocated mbuf pointers. This array must be of length ring_size,
>> since that's the maximum amount of mbufs which may be allocated in one go.
>> We need to copy the pointers from this array to the ring.
>>
>> If we instead allocate twice the space for the ring, we can skip copying
>> the pointers which were written to the ring, and only copy those that were
>> written outside of its bounds.
>>
> 
> First thing comes my mind was also using two 'rte_pktmbuf_alloc_bulk()'
> calls.
> I can see why you prefer doubling the buffer size, but it comes with
> copying overhead.
> So both options comes with some overhead, not sure which one is better,
> although I am leaning to the first solution we should do some
> measurements to decide.
> 
> BUT first lets agree on the problem first, before doing more work, I am
> not still fully convinced that original code is wrong.
> 
>>>> Fixes: 43b815d88188 ("net/memif: support zero-copy slave")
>>>> Cc: stable@dpdk.org
>>>> Signed-off-by: Mihai Brodschi <mihai.brodschi@broadcom.com>
>>>> ---
>>>> v2:
>>>>  - fix email formatting
>>>>
>>>> ---
>>>>  drivers/net/memif/rte_eth_memif.c | 10 +++++++++-
>>>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
>>>> index 16da22b5c6..3491c53cf1 100644
>>>> --- a/drivers/net/memif/rte_eth_memif.c
>>>> +++ b/drivers/net/memif/rte_eth_memif.c
>>>> @@ -600,6 +600,10 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>>>>  	ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
>>>>  	if (unlikely(ret < 0))
>>>>  		goto no_free_mbufs;
>>>> +	if (unlikely(n_slots > ring_size - (head & mask))) {
>>>> +		rte_memcpy(mq->buffers, &mq->buffers[ring_size],
>>>> +			(n_slots + (head & mask) - ring_size) * sizeof(struct rte_mbuf *));
>>>> +	}
>>>>  
>>>>  	while (n_slots--) {
>>>>  		s0 = head++ & mask;
>>>> @@ -1245,8 +1249,12 @@ memif_init_queues(struct rte_eth_dev *dev)
>>>>  		}
>>>>  		mq->buffers = NULL;
>>>>  		if (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) {
>>>> +			/*
>>>> +			 * Allocate 2x ring_size to reserve a contiguous array for
>>>> +			 * rte_pktmbuf_alloc_bulk (to store allocated mbufs).
>>>> +			 */
>>>>  			mq->buffers = rte_zmalloc("bufs", sizeof(struct rte_mbuf *) *
>>>> -						  (1 << mq->log2_ring_size), 0);
>>>> +						  (1 << (mq->log2_ring_size + 1)), 0);
>>>>  			if (mq->buffers == NULL)
>>>>  				return -ENOMEM;
>>>>  		}
>>>
>>
>> Apologies for sending this multiple times, I'm not familiar with mailing lists.
>>
>>
>
  
Mihai Brodschi July 7, 2024, 6:46 p.m. UTC | #6
On 07/07/2024 18:18, Mihai Brodschi wrote:
> 
> 
> On 07/07/2024 17:05, Ferruh Yigit wrote:
>>
>> My expectation is numbers should be like following:
>>
>> Initially:
>>  size = 256
>>  head = 0
>>  tail = 0
>>
>> In first refill:
>>  n_slots = 256
>>  head = 256
>>  tail = 0
>>
>> Subsequent run that 32 slots used:
>>  head = 256
>>  tail = 32
>>  n_slots = 32
>>  rte_pktmbuf_alloc_bulk(mq, buf[head & mask], n_slots);
>>   head & mask = 0
>>   // So it fills first 32 elements of buffer, which is inbound
>>
>> This will continue as above, combination of only gap filled and head
>> masked with 'mask' provides the wrapping required.
> 
> If I understand correctly, this works only if eth_memif_rx_zc always processes
> a number of packets which is a power of 2, so that the ring's head always wraps
> around at the end of a refill loop, never in the middle of it.
> Is there any reason this should be the case?
> Maybe the tests don't trigger the crash because this condition holds true for them?

Here's how to reproduce the crash on DPDK stable 23.11.1, using testpmd:

Server:
# ./dpdk-testpmd --vdev=net_memif0,id=1,role=server,bsize=1024,rsize=8 --single-file-segments -l2,3 --file-prefix test1 -- -i

Client:
# ./dpdk-testpmd --vdev=net_memif0,id=1,role=client,bsize=1024,rsize=8,zero-copy=yes --single-file-segments -l4,5 --file-prefix test2 -- -i
testpmd> start

Server:
testpmd> start tx_first
testpmd> set burst 15

At this point, the client crashes with a segmentation fault.
Before the burst is set to 15, its default value is 32.
If the receiver processes packets in bursts of size 2^N, the crash does not occur.
Setting the burst size to any power of 2 works, anything else crashes.
After applying this patch, the crashes are completely gone.
  
Mihai Brodschi July 8, 2024, 3:39 a.m. UTC | #7
On 07/07/2024 21:46, Mihai Brodschi wrote:
> 
> 
> On 07/07/2024 18:18, Mihai Brodschi wrote:
>>
>>
>> On 07/07/2024 17:05, Ferruh Yigit wrote:
>>>
>>> My expectation is numbers should be like following:
>>>
>>> Initially:
>>>  size = 256
>>>  head = 0
>>>  tail = 0
>>>
>>> In first refill:
>>>  n_slots = 256
>>>  head = 256
>>>  tail = 0
>>>
>>> Subsequent run that 32 slots used:
>>>  head = 256
>>>  tail = 32
>>>  n_slots = 32
>>>  rte_pktmbuf_alloc_bulk(mq, buf[head & mask], n_slots);
>>>   head & mask = 0
>>>   // So it fills first 32 elements of buffer, which is inbound
>>>
>>> This will continue as above, combination of only gap filled and head
>>> masked with 'mask' provides the wrapping required.
>>
>> If I understand correctly, this works only if eth_memif_rx_zc always processes
>> a number of packets which is a power of 2, so that the ring's head always wraps
>> around at the end of a refill loop, never in the middle of it.
>> Is there any reason this should be the case?
>> Maybe the tests don't trigger the crash because this condition holds true for them?
> 
> Here's how to reproduce the crash on DPDK stable 23.11.1, using testpmd:
> 
> Server:
> # ./dpdk-testpmd --vdev=net_memif0,id=1,role=server,bsize=1024,rsize=8 --single-file-segments -l2,3 --file-prefix test1 -- -i
> 
> Client:
> # ./dpdk-testpmd --vdev=net_memif0,id=1,role=client,bsize=1024,rsize=8,zero-copy=yes --single-file-segments -l4,5 --file-prefix test2 -- -i
> testpmd> start
> 
> Server:
> testpmd> start tx_first
> testpmt> set burst 15
> 
> At this point, the client crashes with a segmentation fault.
> Before the burst is set to 15, its default value is 32.
> If the receiver processes packets in bursts of size 2^N, the crash does not occur.
> Setting the burst size to any power of 2 works, anything else crashes.
> After applying this patch, the crashes are completely gone.

Sorry, this might not crash with a segmentation fault. To confirm the mempool is
corrupted, please compile DPDK with debug=true and the c_args -DRTE_LIBRTE_MEMPOOL_DEBUG.
You should see the client panic when changing the burst size to not be a power of 2.
This also works on the latest main branch.
  
Ferruh Yigit July 8, 2024, 11:45 a.m. UTC | #8
On 7/8/2024 4:39 AM, Mihai Brodschi wrote:
> 
> 
> On 07/07/2024 21:46, Mihai Brodschi wrote:
>>
>>
>> On 07/07/2024 18:18, Mihai Brodschi wrote:
>>>
>>>
>>> On 07/07/2024 17:05, Ferruh Yigit wrote:
>>>>
>>>> My expectation is numbers should be like following:
>>>>
>>>> Initially:
>>>>  size = 256
>>>>  head = 0
>>>>  tail = 0
>>>>
>>>> In first refill:
>>>>  n_slots = 256
>>>>  head = 256
>>>>  tail = 0
>>>>
>>>> Subsequent run that 32 slots used:
>>>>  head = 256
>>>>  tail = 32
>>>>  n_slots = 32
>>>>  rte_pktmbuf_alloc_bulk(mq, buf[head & mask], n_slots);
>>>>   head & mask = 0
>>>>   // So it fills first 32 elements of buffer, which is inbound
>>>>
>>>> This will continue as above, combination of only gap filled and head
>>>> masked with 'mask' provides the wrapping required.
>>>
>>> If I understand correctly, this works only if eth_memif_rx_zc always processes
>>> a number of packets which is a power of 2, so that the ring's head always wraps
>>> around at the end of a refill loop, never in the middle of it.
>>> Is there any reason this should be the case?
>>> Maybe the tests don't trigger the crash because this condition holds true for them?
>>
>> Here's how to reproduce the crash on DPDK stable 23.11.1, using testpmd:
>>
>> Server:
>> # ./dpdk-testpmd --vdev=net_memif0,id=1,role=server,bsize=1024,rsize=8 --single-file-segments -l2,3 --file-prefix test1 -- -i
>>
>> Client:
>> # ./dpdk-testpmd --vdev=net_memif0,id=1,role=client,bsize=1024,rsize=8,zero-copy=yes --single-file-segments -l4,5 --file-prefix test2 -- -i
>> testpmd> start
>>
>> Server:
>> testpmd> start tx_first
>> testpmt> set burst 15
>>
>> At this point, the client crashes with a segmentation fault.
>> Before the burst is set to 15, its default value is 32.
>> If the receiver processes packets in bursts of size 2^N, the crash does not occur.
>> Setting the burst size to any power of 2 works, anything else crashes.
>> After applying this patch, the crashes are completely gone.
> 
> Sorry, this might not crash with a segmentation fault. To confirm the mempool is
> corrupted, please compile DPDK with debug=true and the c_args -DRTE_LIBRTE_MEMPOOL_DEBUG.
> You should see the client panic when changing the burst size to not be a power of 2.
> This also works on the latest main branch.
> 

Hi Mihai,

Right, if the buffer size is not a multiple of the burst size, the issue
is valid. And as there is a requirement that the buffer size be a power
of two, the burst size should be too.
I assume this issue was not caught before because the default burst size is 32.

Can you please share the performance impact of the change, for the two
possible solutions we discussed above?

The other option is to add this as a documented limitation of memif zero
copy, but this won't be good for usability.

We can decide based on the performance numbers.

Thanks,
ferruh
  
Ferruh Yigit July 19, 2024, 9:03 a.m. UTC | #9
On 7/8/2024 12:45 PM, Ferruh Yigit wrote:
> On 7/8/2024 4:39 AM, Mihai Brodschi wrote:
>>
>>
>> On 07/07/2024 21:46, Mihai Brodschi wrote:
>>>
>>>
>>> On 07/07/2024 18:18, Mihai Brodschi wrote:
>>>>
>>>>
>>>> On 07/07/2024 17:05, Ferruh Yigit wrote:
>>>>>
>>>>> My expectation is numbers should be like following:
>>>>>
>>>>> Initially:
>>>>>  size = 256
>>>>>  head = 0
>>>>>  tail = 0
>>>>>
>>>>> In first refill:
>>>>>  n_slots = 256
>>>>>  head = 256
>>>>>  tail = 0
>>>>>
>>>>> Subsequent run that 32 slots used:
>>>>>  head = 256
>>>>>  tail = 32
>>>>>  n_slots = 32
>>>>>  rte_pktmbuf_alloc_bulk(mq, buf[head & mask], n_slots);
>>>>>   head & mask = 0
>>>>>   // So it fills first 32 elements of buffer, which is inbound
>>>>>
>>>>> This will continue as above, combination of only gap filled and head
>>>>> masked with 'mask' provides the wrapping required.
>>>>
>>>> If I understand correctly, this works only if eth_memif_rx_zc always processes
>>>> a number of packets which is a power of 2, so that the ring's head always wraps
>>>> around at the end of a refill loop, never in the middle of it.
>>>> Is there any reason this should be the case?
>>>> Maybe the tests don't trigger the crash because this condition holds true for them?
>>>
>>> Here's how to reproduce the crash on DPDK stable 23.11.1, using testpmd:
>>>
>>> Server:
>>> # ./dpdk-testpmd --vdev=net_memif0,id=1,role=server,bsize=1024,rsize=8 --single-file-segments -l2,3 --file-prefix test1 -- -i
>>>
>>> Client:
>>> # ./dpdk-testpmd --vdev=net_memif0,id=1,role=client,bsize=1024,rsize=8,zero-copy=yes --single-file-segments -l4,5 --file-prefix test2 -- -i
>>> testpmd> start
>>>
>>> Server:
>>> testpmd> start tx_first
>>> testpmt> set burst 15
>>>
>>> At this point, the client crashes with a segmentation fault.
>>> Before the burst is set to 15, its default value is 32.
>>> If the receiver processes packets in bursts of size 2^N, the crash does not occur.
>>> Setting the burst size to any power of 2 works, anything else crashes.
>>> After applying this patch, the crashes are completely gone.
>>
>> Sorry, this might not crash with a segmentation fault. To confirm the mempool is
>> corrupted, please compile DPDK with debug=true and the c_args -DRTE_LIBRTE_MEMPOOL_DEBUG.
>> You should see the client panic when changing the burst size to not be a power of 2.
>> This also works on the latest main branch.
>>
> 
> Hi Mihai,
> 
> Right, if the buffer size is not multiple of burst size, issue is valid.
> And as there is a requirement to have buffer size power of two, burst
> should have the same.
> I assume this issue is not caught before because default burst size is 32.
> 
> Can you please share some performance impact of the change, with two
> possible solutions we discussed above?
> 
> Other option is to add this as a limitation to the memif zero copy, but
> this won't be good for usability.
> 
> We can decide based on performance numbers.
> 
> 

Hi Jakub,

Do you have any comment on this?

I think we should either document this as a limitation of the driver, or
fix it, and if so we need to decide on the fix.
  

Patch

diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 16da22b5c6..3491c53cf1 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -600,6 +600,10 @@  eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
 	if (unlikely(ret < 0))
 		goto no_free_mbufs;
+	if (unlikely(n_slots > ring_size - (head & mask))) {
+		rte_memcpy(mq->buffers, &mq->buffers[ring_size],
+			(n_slots + (head & mask) - ring_size) * sizeof(struct rte_mbuf *));
+	}
 
 	while (n_slots--) {
 		s0 = head++ & mask;
@@ -1245,8 +1249,12 @@  memif_init_queues(struct rte_eth_dev *dev)
 		}
 		mq->buffers = NULL;
 		if (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) {
+			/*
+			 * Allocate 2x ring_size to reserve a contiguous array for
+			 * rte_pktmbuf_alloc_bulk (to store allocated mbufs).
+			 */
 			mq->buffers = rte_zmalloc("bufs", sizeof(struct rte_mbuf *) *
-						  (1 << mq->log2_ring_size), 0);
+						  (1 << (mq->log2_ring_size + 1)), 0);
 			if (mq->buffers == NULL)
 				return -ENOMEM;
 		}