fix repopulation of tx_mbufs table
diff mbox series

Message ID 1542961382-5234-1-git-send-email-rk@semihalf.com
State Not Applicable, archived
Series
  • fix repopulation of tx_mbufs table

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Rafal Kozik Nov. 23, 2018, 8:23 a.m. UTC
If in one TX cycle the NIC does not send any packets, pktgen tries
to allocate 0 mbufs from the pool. In that case DPDK returns an error
and the packets will not be sent. As no packets will be sent in the
next iteration either, the situation repeats indefinitely.

Checking whether more mbufs are actually needed before allocating prevents this situation.

Fixes: f034b381d19f ("cleanup and fix for FVL NIC performance")

Signed-off-by: Rafal Kozik <rk@semihalf.com>
---
 app/pktgen.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Comments

Rafal Kozik Dec. 14, 2018, 11:26 a.m. UTC | #1
Hello Keith,

as from my last post passed about three weeks I would kindly ask
if you could provide any comments about this patch?

Best regards,
Rafal Kozik

On Fri, Nov 23, 2018 at 09:23 Rafal Kozik <rk@semihalf.com> wrote:
>
> If in one TX cycle NIC does not send any packet, pktgen tries
> to allocate 0 mbufs from pool. In such case DPDK return error
> and packets will not be send. As no packet will be send in
> next iteration this situation will repeat.
>
> Checking if taking more mbufs is needed will prevent this situation.
>
> Fixes: f034b381d19f ("cleanup and fix for FVL NIC performance")
>
> Signed-off-by: Rafal Kozik <rk@semihalf.com>
> ---
>  app/pktgen.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/app/pktgen.c b/app/pktgen.c
> index 2d9ff59..b4d3dfe 100644
> --- a/app/pktgen.c
> +++ b/app/pktgen.c
> @@ -1054,7 +1054,8 @@ pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
>                 uint16_t saved = info->q[qid].tx_mbufs.len;
>                 uint16_t nb_pkts = info->tx_burst - saved;
>
> -               rc = pg_pktmbuf_alloc_bulk(mp,
> +               if (likely(nb_pkts > 0))
> +                       rc = pg_pktmbuf_alloc_bulk(mp,
>                                            &info->q[qid].tx_mbufs.m_table[saved],
>                                            nb_pkts);
>                 if (rc == 0) {
> @@ -1070,7 +1071,8 @@ pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
>                         uint16_t saved = info->q[qid].tx_mbufs.len;
>                         uint16_t nb_pkts = txCnt - saved;
>
> -                       rc = pg_pktmbuf_alloc_bulk(mp,
> +                       if (likely(nb_pkts > 0))
> +                               rc = pg_pktmbuf_alloc_bulk(mp,
>                                                    &info->q[qid].tx_mbufs.m_table[saved],
>                                                    nb_pkts);
>                         if (rc == 0) {
> --
> 2.7.4
>
Wiles, Keith Dec. 14, 2018, 3:29 p.m. UTC | #2
> On Dec 14, 2018, at 5:26 AM, Rafał Kozik <rk@semihalf.com> wrote:
> 
> Hello Keith,
> 
> as from my last post passed about three weeks I would kindly ask
> if you could provide any comments about this patch?

Missed this patch originally as it did not have pktgen on the status line.

Can you give more details as to how to reproduce the failure? 
> 
> Best regards,
> Rafal Kozik
> 
> On Fri, Nov 23, 2018 at 09:23 Rafal Kozik <rk@semihalf.com> wrote:
>> 
>> If in one TX cycle NIC does not send any packet, pktgen tries
>> to allocate 0 mbufs from pool. In such case DPDK return error
>> and packets will not be send. As no packet will be send in
>> next iteration this situation will repeat.
>> 
>> Checking if taking more mbufs is needed will prevent this situation.
>> 
>> Fixes: f034b381d19f ("cleanup and fix for FVL NIC performance")
>> 
>> Signed-off-by: Rafal Kozik <rk@semihalf.com>
>> ---
>> app/pktgen.c | 6 ++++--
>> 1 file changed, 4 insertions(+), 2 deletions(-)
>> 
>> diff --git a/app/pktgen.c b/app/pktgen.c
>> index 2d9ff59..b4d3dfe 100644
>> --- a/app/pktgen.c
>> +++ b/app/pktgen.c
>> @@ -1054,7 +1054,8 @@ pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
>>                uint16_t saved = info->q[qid].tx_mbufs.len;
>>                uint16_t nb_pkts = info->tx_burst - saved;
>> 
>> -               rc = pg_pktmbuf_alloc_bulk(mp,
>> +               if (likely(nb_pkts > 0))
>> +                       rc = pg_pktmbuf_alloc_bulk(mp,
>>                                           &info->q[qid].tx_mbufs.m_table[saved],
>>                                           nb_pkts);
>>                if (rc == 0) {
>> @@ -1070,7 +1071,8 @@ pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
>>                        uint16_t saved = info->q[qid].tx_mbufs.len;
>>                        uint16_t nb_pkts = txCnt - saved;
>> 
>> -                       rc = pg_pktmbuf_alloc_bulk(mp,
>> +                       if (likely(nb_pkts > 0))
>> +                               rc = pg_pktmbuf_alloc_bulk(mp,
>>                                                   &info->q[qid].tx_mbufs.m_table[saved],
>>                                                   nb_pkts);
>>                        if (rc == 0) {
>> --
>> 2.7.4
>> 

Regards,
Keith
Wiles, Keith Dec. 14, 2018, 3:45 p.m. UTC | #3
> On Dec 14, 2018, at 9:29 AM, Wiles, Keith <keith.wiles@intel.com> wrote:
> 
> 
> 
>> On Dec 14, 2018, at 5:26 AM, Rafał Kozik <rk@semihalf.com> wrote:
>> 
>> Hello Keith,
>> 
>> as from my last post passed about three weeks I would kindly ask
>> if you could provide any comments about this patch?
> 
> Missed this patch originally as it did not have pktgen on the status line.
> 
> Can you give more details as to how to reproduce the failure? 

I can not find your patch in patchwork, does anyone know why?

>> 
>> Best regards,
>> Rafal Kozik
>> 
>> On Fri, Nov 23, 2018 at 09:23 Rafal Kozik <rk@semihalf.com> wrote:
>>> 
>>> If in one TX cycle NIC does not send any packet, pktgen tries
>>> to allocate 0 mbufs from pool. In such case DPDK return error
>>> and packets will not be send. As no packet will be send in
>>> next iteration this situation will repeat.
>>> 
>>> Checking if taking more mbufs is needed will prevent this situation.
>>> 
>>> Fixes: f034b381d19f ("cleanup and fix for FVL NIC performance")
>>> 
>>> Signed-off-by: Rafal Kozik <rk@semihalf.com>
>>> ---
>>> app/pktgen.c | 6 ++++--
>>> 1 file changed, 4 insertions(+), 2 deletions(-)
>>> 
>>> diff --git a/app/pktgen.c b/app/pktgen.c
>>> index 2d9ff59..b4d3dfe 100644
>>> --- a/app/pktgen.c
>>> +++ b/app/pktgen.c
>>> @@ -1054,7 +1054,8 @@ pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
>>>               uint16_t saved = info->q[qid].tx_mbufs.len;
>>>               uint16_t nb_pkts = info->tx_burst - saved;
>>> 
>>> -               rc = pg_pktmbuf_alloc_bulk(mp,
>>> +               if (likely(nb_pkts > 0))
>>> +                       rc = pg_pktmbuf_alloc_bulk(mp,
>>>                                          &info->q[qid].tx_mbufs.m_table[saved],
>>>                                          nb_pkts);
>>>               if (rc == 0) {
>>> @@ -1070,7 +1071,8 @@ pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
>>>                       uint16_t saved = info->q[qid].tx_mbufs.len;
>>>                       uint16_t nb_pkts = txCnt - saved;
>>> 
>>> -                       rc = pg_pktmbuf_alloc_bulk(mp,
>>> +                       if (likely(nb_pkts > 0))
>>> +                               rc = pg_pktmbuf_alloc_bulk(mp,
>>>                                                  &info->q[qid].tx_mbufs.m_table[saved],
>>>                                                  nb_pkts);
>>>                       if (rc == 0) {
>>> --
>>> 2.7.4
>>> 
> 
> Regards,
> Keith

Regards,
Keith
Rafal Kozik Dec. 14, 2018, 3:48 p.m. UTC | #4
On Fri, Dec 14, 2018 at 16:45 Wiles, Keith <keith.wiles@intel.com> wrote:
>
>
>
> > On Dec 14, 2018, at 9:29 AM, Wiles, Keith <keith.wiles@intel.com> wrote:
> >
> >
> >
> >> On Dec 14, 2018, at 5:26 AM, Rafał Kozik <rk@semihalf.com> wrote:
> >>
> >> Hello Keith,
> >>
> >> as from my last post passed about three weeks I would kindly ask
> >> if you could provide any comments about this patch?
> >
> > Missed this patch originally as it did not have pktgen on the status line.

Thank you for the response. I apologize for the wrong status line.

> >
> > Can you give more details as to how to reproduce the failure?

It occurs when pktgen is set to send more packets than the NIC is able to transfer.
In such cases there can be a cycle in which no packets are sent, as the Tx ring is full.

>
> I can not find your patch in patchwork, does anyone know why?

It was set as Not Applicable: http://patchwork.dpdk.org/patch/48305/
>
> >>
> >> Best regards,
> >> Rafal Kozik
> >>
> >> On Fri, Nov 23, 2018 at 09:23 Rafal Kozik <rk@semihalf.com> wrote:
> >>>
> >>> If in one TX cycle NIC does not send any packet, pktgen tries
> >>> to allocate 0 mbufs from pool. In such case DPDK return error
> >>> and packets will not be send. As no packet will be send in
> >>> next iteration this situation will repeat.
> >>>
> >>> Checking if taking more mbufs is needed will prevent this situation.
> >>>
> >>> Fixes: f034b381d19f ("cleanup and fix for FVL NIC performance")
> >>>
> >>> Signed-off-by: Rafal Kozik <rk@semihalf.com>
> >>> ---
> >>> app/pktgen.c | 6 ++++--
> >>> 1 file changed, 4 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/app/pktgen.c b/app/pktgen.c
> >>> index 2d9ff59..b4d3dfe 100644
> >>> --- a/app/pktgen.c
> >>> +++ b/app/pktgen.c
> >>> @@ -1054,7 +1054,8 @@ pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
> >>>               uint16_t saved = info->q[qid].tx_mbufs.len;
> >>>               uint16_t nb_pkts = info->tx_burst - saved;
> >>>
> >>> -               rc = pg_pktmbuf_alloc_bulk(mp,
> >>> +               if (likely(nb_pkts > 0))
> >>> +                       rc = pg_pktmbuf_alloc_bulk(mp,
> >>>                                          &info->q[qid].tx_mbufs.m_table[saved],
> >>>                                          nb_pkts);
> >>>               if (rc == 0) {
> >>> @@ -1070,7 +1071,8 @@ pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
> >>>                       uint16_t saved = info->q[qid].tx_mbufs.len;
> >>>                       uint16_t nb_pkts = txCnt - saved;
> >>>
> >>> -                       rc = pg_pktmbuf_alloc_bulk(mp,
> >>> +                       if (likely(nb_pkts > 0))
> >>> +                               rc = pg_pktmbuf_alloc_bulk(mp,
> >>>                                                  &info->q[qid].tx_mbufs.m_table[saved],
> >>>                                                  nb_pkts);
> >>>                       if (rc == 0) {
> >>> --
> >>> 2.7.4
> >>>
> >
> > Regards,
> > Keith
>
> Regards,
> Keith
>
Wiles, Keith Dec. 14, 2018, 3:52 p.m. UTC | #5
> On Dec 14, 2018, at 9:48 AM, Rafał Kozik <rk@semihalf.com> wrote:
> 
> On Fri, Dec 14, 2018 at 16:45 Wiles, Keith <keith.wiles@intel.com> wrote:
>> 
>> 
>> 
>>> On Dec 14, 2018, at 9:29 AM, Wiles, Keith <keith.wiles@intel.com> wrote:
>>> 
>>> 
>>> 
>>>> On Dec 14, 2018, at 5:26 AM, Rafał Kozik <rk@semihalf.com> wrote:
>>>> 
>>>> Hello Keith,
>>>> 
>>>> as from my last post passed about three weeks I would kindly ask
>>>> if you could provide any comments about this patch?
>>> 
>>> Missed this patch originally as it did not have pktgen on the status line.
> 
> Thank you for response. I apologies for wrong status line.
> 
>>> 
>>> Can you give more details as to how to reproduce the failure?
> 
> It occurred when pktgen is set to send more packets the NIC is able to transfer.
> I such cases there could by cycle, when no packets are send, as Tx ring is full.
> 
>> 
>> I can not find your patch in patchwork, does anyone know why?
> 
> It was set as  Not Applicable: http://patchwork.dpdk.org/patch/48305/

Strange, I searched using your name with 'any' patches and it did not show up, or I mistyped something.

Looking at the code and your patch it looks fine, so I will integrate the patch with a small formatting fix for indents.
>> 
>>>> 
>>>> Best regards,
>>>> Rafal Kozik
>>>> 
>>>> On Fri, Nov 23, 2018 at 09:23 Rafal Kozik <rk@semihalf.com> wrote:
>>>>> 
>>>>> If in one TX cycle NIC does not send any packet, pktgen tries
>>>>> to allocate 0 mbufs from pool. In such case DPDK return error
>>>>> and packets will not be send. As no packet will be send in
>>>>> next iteration this situation will repeat.
>>>>> 
>>>>> Checking if taking more mbufs is needed will prevent this situation.
>>>>> 
>>>>> Fixes: f034b381d19f ("cleanup and fix for FVL NIC performance")
>>>>> 
>>>>> Signed-off-by: Rafal Kozik <rk@semihalf.com>
>>>>> ---
>>>>> app/pktgen.c | 6 ++++--
>>>>> 1 file changed, 4 insertions(+), 2 deletions(-)
>>>>> 
>>>>> diff --git a/app/pktgen.c b/app/pktgen.c
>>>>> index 2d9ff59..b4d3dfe 100644
>>>>> --- a/app/pktgen.c
>>>>> +++ b/app/pktgen.c
>>>>> @@ -1054,7 +1054,8 @@ pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
>>>>>              uint16_t saved = info->q[qid].tx_mbufs.len;
>>>>>              uint16_t nb_pkts = info->tx_burst - saved;
>>>>> 
>>>>> -               rc = pg_pktmbuf_alloc_bulk(mp,
>>>>> +               if (likely(nb_pkts > 0))
>>>>> +                       rc = pg_pktmbuf_alloc_bulk(mp,
>>>>>                                         &info->q[qid].tx_mbufs.m_table[saved],
>>>>>                                         nb_pkts);
>>>>>              if (rc == 0) {
>>>>> @@ -1070,7 +1071,8 @@ pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
>>>>>                      uint16_t saved = info->q[qid].tx_mbufs.len;
>>>>>                      uint16_t nb_pkts = txCnt - saved;
>>>>> 
>>>>> -                       rc = pg_pktmbuf_alloc_bulk(mp,
>>>>> +                       if (likely(nb_pkts > 0))
>>>>> +                               rc = pg_pktmbuf_alloc_bulk(mp,
>>>>>                                                 &info->q[qid].tx_mbufs.m_table[saved],
>>>>>                                                 nb_pkts);
>>>>>                      if (rc == 0) {
>>>>> --
>>>>> 2.7.4
>>>>> 
>>> 
>>> Regards,
>>> Keith
>> 
>> Regards,
>> Keith

Regards,
Keith

Patch
diff mbox series

diff --git a/app/pktgen.c b/app/pktgen.c
index 2d9ff59..b4d3dfe 100644
--- a/app/pktgen.c
+++ b/app/pktgen.c
@@ -1054,7 +1054,8 @@  pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
 		uint16_t saved = info->q[qid].tx_mbufs.len;
 		uint16_t nb_pkts = info->tx_burst - saved;
 
-		rc = pg_pktmbuf_alloc_bulk(mp,
+		if (likely(nb_pkts > 0))
+			rc = pg_pktmbuf_alloc_bulk(mp,
 		                           &info->q[qid].tx_mbufs.m_table[saved],
 		                           nb_pkts);
 		if (rc == 0) {
@@ -1070,7 +1071,8 @@  pktgen_send_pkts(port_info_t *info, uint16_t qid, struct rte_mempool *mp)
 			uint16_t saved = info->q[qid].tx_mbufs.len;
 			uint16_t nb_pkts = txCnt - saved;
 
-			rc = pg_pktmbuf_alloc_bulk(mp,
+			if (likely(nb_pkts > 0))
+				rc = pg_pktmbuf_alloc_bulk(mp,
 			                           &info->q[qid].tx_mbufs.m_table[saved],
 			                           nb_pkts);
 			if (rc == 0) {