[v2,3/4] vhost: allocate and free packets in bulk

Message ID 87be741c28c8d93a4cf93dc1af556cd4ead687ed.1617790501.git.bnemeth@redhat.com (mailing list archive)
State Superseded, archived
Delegated to: Maxime Coquelin
Series: [v2,1/4] vhost: move allocation of mbuf outside of packet enqueue

Checks

ci/checkpatch: success (coding style OK)

Commit Message

Balazs Nemeth April 7, 2021, 10:17 a.m. UTC
Now that all allocation and freeing has been moved together, use the
faster bulk versions instead of handling packets one by one.

Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

Comments

Maxime Coquelin April 15, 2021, 3:32 p.m. UTC | #1
On 4/7/21 12:17 PM, Balazs Nemeth wrote:
> Now that all allocation and freeing has been moved together, use the
> faster bulk versions instead of handling packets one by one.
> 
> Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
> ---
>  lib/librte_vhost/virtio_net.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 496f750e3..2f0c97b91 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -2469,10 +2469,10 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>  {
>  	uint32_t pkt_idx = 0;
>  	uint32_t remained = count;
> -	uint16_t i;
>  
> -	for (i = 0; i < count; ++i)
> -		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
> +	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) {
> +		return 0;
> +	}

OK, get it now :)
Maybe just add a comment in previous patches that it is going to be
handled when moving to bulk allocation.

>  
>  	do {
>  		rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
> @@ -2497,8 +2497,9 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>  
>  	} while (remained);
>  
> -	for (i = pkt_idx; i < count; ++i)
> -		rte_pktmbuf_free(pkts[i]);
> +	if (pkt_idx != count) {
> +		rte_pktmbuf_free_bulk(&pkts[pkt_idx], count - pkt_idx);
> +	}
>  
>  	if (vq->shadow_used_idx) {
>  		do_data_copy_dequeue(vq);
>
David Marchand April 15, 2021, 3:45 p.m. UTC | #2
On Thu, Apr 15, 2021 at 5:33 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
> On 4/7/21 12:17 PM, Balazs Nemeth wrote:
> > Now that all allocation and freeing has been moved together, use the
> > faster bulk versions instead of handling packets one by one.
> > [...]
>
> OK, get it now :)
> Maybe just add a comment in previous patches that it is going to be
> handled when moving to bulk allocation.

Looking at the whole series, it is easy to read and understand as a
single patch.
And it avoids this intermediate issue.
Maxime Coquelin April 15, 2021, 3:50 p.m. UTC | #3
On 4/15/21 5:45 PM, David Marchand wrote:
> On Thu, Apr 15, 2021 at 5:33 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>> On 4/7/21 12:17 PM, Balazs Nemeth wrote:
>>> Now that all allocation and freeing has been moved together, use the
>>> faster bulk versions instead of handling packets one by one.
>>> [...]
>>
>> OK, get it now :)
>> Maybe just add a comment in previous patches that it is going to be
>> handled when moving to bulk allocation.
> 
> Looking at the whole series, it is easy to read and understand as a
> single patch.
> And it avoids this intermediate issue.

I agree with David, better to squash the first three patches.

Thanks,
Maxime
Maxime Coquelin April 15, 2021, 3:58 p.m. UTC | #4
On 4/15/21 5:50 PM, Maxime Coquelin wrote:
> 
> 
> On 4/15/21 5:45 PM, David Marchand wrote:
>> On Thu, Apr 15, 2021 at 5:33 PM Maxime Coquelin
>> <maxime.coquelin@redhat.com> wrote:
>>> On 4/7/21 12:17 PM, Balazs Nemeth wrote:
>>>> [...]
>>>
>>> OK, get it now :)
>>> Maybe just add a comment in previous patches that it is going to be
>>> handled when moving to bulk allocation.
>>
>> Looking at the whole series, it is easy to read and understand as a
>> single patch.
>> And it avoids this intermediate issue.
> 
> I agree with David, better to squash the first three patches.

As David told me off-list, the whole series can be squashed, since the last
patch is a consequence of doing bulk allocation.

Cheers,
Maxime

> Thanks,
> Maxime
>
Patch

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 496f750e3..2f0c97b91 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -2469,10 +2469,10 @@  virtio_dev_tx_packed(struct virtio_net *dev,
 {
 	uint32_t pkt_idx = 0;
 	uint32_t remained = count;
-	uint16_t i;
 
-	for (i = 0; i < count; ++i)
-		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
+	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) {
+		return 0;
+	}
 
 	do {
 		rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
@@ -2497,8 +2497,9 @@  virtio_dev_tx_packed(struct virtio_net *dev,
 
 	} while (remained);
 
-	for (i = pkt_idx; i < count; ++i)
-		rte_pktmbuf_free(pkts[i]);
+	if (pkt_idx != count) {
+		rte_pktmbuf_free_bulk(&pkts[pkt_idx], count - pkt_idx);
+	}
 
 	if (vq->shadow_used_idx) {
 		do_data_copy_dequeue(vq);
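
The pattern the patch adopts has two parts: `rte_pktmbuf_alloc_bulk()` is all-or-nothing (it returns non-zero and allocates nothing if the pool cannot satisfy the whole request, hence the early `return 0`), and after the dequeue loop only the untouched tail `[pkt_idx, count)` is handed to `rte_pktmbuf_free_bulk()`. A minimal self-contained sketch of that control flow, using simplified stand-in `pool`/`mbuf` types rather than the real `rte_mempool`/`rte_mbuf` API:

```c
#include <stdlib.h>

/* Stand-in for rte_mbuf: an opaque packet buffer. */
struct mbuf { char payload[64]; };

/* Stand-in pool: a fixed budget of buffers, like a mempool. */
struct pool { unsigned avail; };

/* All-or-nothing bulk allocation, mirroring rte_pktmbuf_alloc_bulk():
 * returns 0 on success; on shortage returns non-zero and allocates
 * nothing, so the caller never has to clean up a partial array. */
static int alloc_bulk(struct pool *p, struct mbuf **pkts, unsigned count)
{
	if (p->avail < count)
		return -1;
	for (unsigned i = 0; i < count; i++)
		pkts[i] = malloc(sizeof(struct mbuf));
	p->avail -= count;
	return 0;
}

/* Bulk free of a contiguous array, mirroring rte_pktmbuf_free_bulk(). */
static void free_bulk(struct pool *p, struct mbuf **pkts, unsigned count)
{
	for (unsigned i = 0; i < count; i++)
		free(pkts[i]);
	p->avail += count;
}

/* The shape of virtio_dev_tx_packed() after the patch: allocate the whole
 * batch up front, consume some of it, then free only the unused tail. */
static unsigned tx_packed(struct pool *p, struct mbuf **pkts,
			  unsigned count, unsigned nb_ready)
{
	unsigned pkt_idx = 0;

	if (alloc_bulk(p, pkts, count))
		return 0;	/* mirrors the early "return 0" in the patch */

	while (pkt_idx < count && pkt_idx < nb_ready)
		pkt_idx++;	/* stand-in for filling one mbuf per descriptor */

	if (pkt_idx != count)
		free_bulk(p, &pkts[pkt_idx], count - pkt_idx);

	return pkt_idx;
}
```

The helper names here are hypothetical stand-ins; in the real code the single mempool get/put per batch is what makes the bulk variants cheaper than `count` individual `rte_pktmbuf_alloc()`/`rte_pktmbuf_free()` calls.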