[v11,2/6] app/testpmd: add multiple pools per core creation

Message ID 1602855582-15847-3-git-send-email-viacheslavo@nvidia.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series: ethdev: introduce Rx buffer split

Checks

Context Check Description
ci/checkpatch success coding style OK

Commit Message

Slava Ovsiienko Oct. 16, 2020, 1:39 p.m. UTC
  The command line parameter --mbuf-size is updated to accept multiple
values, like the following:

--mbuf-size=2176,512,768,4096

specifying the creation of extra memory pools with the requested
mbuf data buffer sizes. If some buffer split feature is engaged,
the extra memory pools can be used to configure the Rx queues
with rte_the_dev_rx_queue_setup_ex().

The extra pools are created with the requested sizes, and the pool names
are assigned with an appended index: mbuf_pool_socket_%socket_%index.
Index zero is used to specify the first mandatory pool, to maintain
compatibility with existing code.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 app/test-pmd/bpf_cmd.c                |  4 +--
 app/test-pmd/cmdline.c                |  2 +-
 app/test-pmd/config.c                 |  6 ++--
 app/test-pmd/parameters.c             | 24 +++++++++----
 app/test-pmd/testpmd.c                | 63 +++++++++++++++++++----------------
 app/test-pmd/testpmd.h                | 24 ++++++++++---
 doc/guides/testpmd_app_ug/run_app.rst |  7 ++--
 7 files changed, 83 insertions(+), 47 deletions(-)
  

Comments

Ferruh Yigit Oct. 16, 2020, 3:05 p.m. UTC | #1
On 10/16/2020 2:39 PM, Viacheslav Ovsiienko wrote:
> The command line parameter --mbuf-size is updated, it can handle
> the multiple values like the following:
> 
> --mbuf-size=2176,512,768,4096
> 
<...>
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

<...>

>   /* Mbuf Pools */
>   static inline void
> -mbuf_poolname_build(unsigned int sock_id, char* mp_name, int name_size)
> +mbuf_poolname_build(unsigned int sock_id, char *mp_name,
> +		    int name_size, unsigned int idx)
>   {
> -	snprintf(mp_name, name_size, "mbuf_pool_socket_%u", sock_id);
> +	if (!idx)
> +		snprintf(mp_name, name_size, "mbuf_pool_socket_%u", sock_id);
> +	else
> +		snprintf(mp_name, name_size, "mbuf_pool_socket_%u_%u",
> +			 sock_id, idx);

'mp_name' can theoretically overflow, and this gives a compiler warning, although I'm not 
sure the truncation is a problem in practice.

../app/test-pmd/testpmd.c: In function ‘rx_queue_setup’:
../app/test-pmd/testpmd.h:666:53: error: ‘%u’ directive output may be truncated 
writing between 1 and 10 bytes into a region of size between 0 and 7 
[-Werror=format-truncation=]
  666 |   snprintf(mp_name, name_size, "mbuf_pool_socket_%u_%u",
      |                                                     ^~
../app/test-pmd/testpmd.h:666:32: note: directive argument in the range [1, 
4294967295]
   666 |   snprintf(mp_name, name_size, "mbuf_pool_socket_%u_%u",
       |                                ^~~~~~~~~~~~~~~~~~~~~~~~
../app/test-pmd/testpmd.h:666:3: note: ‘snprintf’ output between 21 and 39 bytes 
into a destination of size 26
   666 |   snprintf(mp_name, name_size, "mbuf_pool_socket_%u_%u",
       |   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   667 |     sock_id, idx);
       |     ~~~~~~~~~~~~~

Any suggestion for fix? Can we shorten the string above, is it used somewhere 
else? Or casting variables to a smaller size may work too..
  
Ferruh Yigit Oct. 16, 2020, 3:38 p.m. UTC | #2
On 10/16/2020 4:05 PM, Ferruh Yigit wrote:
> On 10/16/2020 2:39 PM, Viacheslav Ovsiienko wrote:
<...>
> 
> 'mp_name' can theoretically overflow and gives a compiler warning, although not 
> sure if this truncation is a problem in practice.
> 
<...>
> 
> Any suggestion for fix? Can we shorten the string above, is it used somewhere 
> else? Or casting variables to a smaller size may work too..

What do you think to following:

  diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
  index 4cd0f967f0..4ac1c1f86e 100644
  --- a/app/test-pmd/testpmd.h
  +++ b/app/test-pmd/testpmd.h
  @@ -661,10 +661,11 @@ mbuf_poolname_build(unsigned int sock_id, char *mp_name,
                      int name_size, unsigned int idx)
   {
          if (!idx)
  -               snprintf(mp_name, name_size, "mbuf_pool_socket_%u", sock_id);
  +               snprintf(mp_name, name_size, "mbuf_pool_s%u",
  +                       (uint16_t)sock_id);
          else
  -               snprintf(mp_name, name_size, "mbuf_pool_socket_%u_%u",
  -                        sock_id, idx);
  +               snprintf(mp_name, name_size, "mbuf_pool_s%u_%u",
  +                       (uint16_t)sock_id, (uint16_t)idx);
   }

   static inline struct rte_mempool *
  
Slava Ovsiienko Oct. 16, 2020, 3:48 p.m. UTC | #3
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday, October 16, 2020 18:39
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; dev@dpdk.org
> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> stephen@networkplumber.org; olivier.matz@6wind.com;
> jerinjacobk@gmail.com; maxime.coquelin@redhat.com;
> david.marchand@redhat.com; arybchenko@solarflare.com
> Subject: Re: [dpdk-dev] [PATCH v11 2/6] app/testpmd: add multiple pools per
> core creation
> 
> On 10/16/2020 4:05 PM, Ferruh Yigit wrote:
<...>
> 
> What do you think to following:
> 
>   diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
<...>
>   +               snprintf(mp_name, name_size, "mbuf_pool_s%u_%u",
>   +                       (uint16_t)sock_id, (uint16_t)idx);
>    }

Testpmd builds mbuf pool names with the mbuf_poolname_build() routine only
(invoked from mbuf_pool_find/mbuf_pool_create). I suppose it is safe
to replace the "mbuf_pool_socket_" prefix with a shorter one.

Say something like this: #define TESTPMD_MBUF_POOL_PFX "mbuf"

Truncating idx to uint16_t is OK, let me update the routine types.
Truncating sock_id seems to be OK as well. If you agree on both, please let me update the patch.

With best regards, Slava
  
Ferruh Yigit Oct. 16, 2020, 3:52 p.m. UTC | #4
On 10/16/2020 4:48 PM, Slava Ovsiienko wrote:
<...>
> Testpmd builds mbuf pool names with the mbuf_poolname_build() routine only
> (invoked from mbuf_pool_find/mbuf_pool_create). I suppose it is safe
> to replace the "mbuf_pool_socket_" prefix with a shorter one.
> 
> Say something like this: #define TESTPMD_MBUF_POOL_PFX "mbuf"
> 

Just 'mbuf' may not give enough context in the logs, what about "mbuf_pool_%u_%u"?

> Truncating idx to uint16_t is OK, let me update the routine types.
> Truncating sock_id seems to be OK as well. If you agree on both, please let me update the patch.
> 

If the variables are not truncated, very little room is left for the prefix, 3 chars 
if I calculate correctly, so it seems we need to truncate them anyway.
  
Slava Ovsiienko Oct. 16, 2020, 3:55 p.m. UTC | #5
<...>
> > Say something like this: #define TESTPMD_MBUF_POOL_PFX "mbuf"
> >
> 
> Just 'mbuf' may not give enough context in the logs, what about
> "mbuf_pool_%u_%u"?
It is placed in the pool structure, so it is an object name
tightly coupled with the pool. OK, let's try "mb_pool"?
  
Ferruh Yigit Oct. 16, 2020, 3:57 p.m. UTC | #6
On 10/16/2020 4:55 PM, Slava Ovsiienko wrote:
<...>
>> Just 'mbuf' may not give enough context in the logs, what about
>> "mbuf_pool_%u_%u"?
> It is placed in the pool structure, so it is an object name
> tightly coupled with the pool. OK, let's try "mb_pool"?
> 

sounds good
  

Patch

diff --git a/app/test-pmd/bpf_cmd.c b/app/test-pmd/bpf_cmd.c
index 16e3c3b..0a1a178 100644
--- a/app/test-pmd/bpf_cmd.c
+++ b/app/test-pmd/bpf_cmd.c
@@ -69,7 +69,7 @@  struct cmd_bpf_ld_result {
 
 	*flags = RTE_BPF_ETH_F_NONE;
 	arg->type = RTE_BPF_ARG_PTR;
-	arg->size = mbuf_data_size;
+	arg->size = mbuf_data_size[0];
 
 	for (i = 0; str[i] != 0; i++) {
 		v = toupper(str[i]);
@@ -78,7 +78,7 @@  struct cmd_bpf_ld_result {
 		else if (v == 'M') {
 			arg->type = RTE_BPF_ARG_PTR_MBUF;
 			arg->size = sizeof(struct rte_mbuf);
-			arg->buf_size = mbuf_data_size;
+			arg->buf_size = mbuf_data_size[0];
 		} else if (v == '-')
 			continue;
 		else
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 078f063..19b1896 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2925,7 +2925,7 @@  struct cmd_setup_rxtx_queue {
 		if (!numa_support || socket_id == NUMA_NO_CONFIG)
 			socket_id = port->socket_id;
 
-		mp = mbuf_pool_find(socket_id);
+		mp = mbuf_pool_find(socket_id, 0);
 		if (mp == NULL) {
 			printf("Failed to setup RX queue: "
 				"No mempool allocation"
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2c00b55..c554dd0 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -690,7 +690,7 @@  static int bus_match_all(const struct rte_bus *bus, const void *data)
 	printf("\nConnect to socket: %u", port->socket_id);
 
 	if (port_numa[port_id] != NUMA_NO_CONFIG) {
-		mp = mbuf_pool_find(port_numa[port_id]);
+		mp = mbuf_pool_find(port_numa[port_id], 0);
 		if (mp)
 			printf("\nmemory allocation on the socket: %d",
 							port_numa[port_id]);
@@ -3580,9 +3580,9 @@  struct igb_ring_desc_16_bytes {
 	 */
 	tx_pkt_len = 0;
 	for (i = 0; i < nb_segs; i++) {
-		if (seg_lengths[i] > (unsigned) mbuf_data_size) {
+		if (seg_lengths[i] > mbuf_data_size[0]) {
 			printf("length[%u]=%u > mbuf_data_size=%u - give up\n",
-			       i, seg_lengths[i], (unsigned) mbuf_data_size);
+			       i, seg_lengths[i], mbuf_data_size[0]);
 			return;
 		}
 		tx_pkt_len = (uint16_t)(tx_pkt_len + seg_lengths[i]);
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 15ce8c1..4db4987 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -106,7 +106,9 @@ 
 	       "(flag: 1 for RX; 2 for TX; 3 for RX and TX).\n");
 	printf("  --socket-num=N: set socket from which all memory is allocated "
 	       "in NUMA mode.\n");
-	printf("  --mbuf-size=N: set the data size of mbuf to N bytes.\n");
+	printf("  --mbuf-size=N,[N1[,..Nn]: set the data size of mbuf to "
+	       "N bytes. If multiple numbers are specified the extra pools "
+	       "will be created to receive with packet split features\n");
 	printf("  --total-num-mbufs=N: set the number of mbufs to be allocated "
 	       "in mbuf pools.\n");
 	printf("  --max-pkt-len=N: set the maximum size of packet to N bytes.\n");
@@ -892,12 +894,22 @@ 
 				}
 			}
 			if (!strcmp(lgopts[opt_idx].name, "mbuf-size")) {
-				n = atoi(optarg);
-				if (n > 0 && n <= 0xFFFF)
-					mbuf_data_size = (uint16_t) n;
-				else
+				unsigned int mb_sz[MAX_SEGS_BUFFER_SPLIT];
+				unsigned int nb_segs, i;
+
+				nb_segs = parse_item_list(optarg, "mbuf-size",
+					MAX_SEGS_BUFFER_SPLIT, mb_sz, 0);
+				if (nb_segs <= 0)
 					rte_exit(EXIT_FAILURE,
-						 "mbuf-size should be > 0 and < 65536\n");
+						 "bad mbuf-size\n");
+				for (i = 0; i < nb_segs; i++) {
+					if (mb_sz[i] <= 0 || mb_sz[i] > 0xFFFF)
+						rte_exit(EXIT_FAILURE,
+							 "mbuf-size should be "
+							 "> 0 and < 65536\n");
+					mbuf_data_size[i] = (uint16_t) mb_sz[i];
+				}
+				mbuf_data_size_n = nb_segs;
 			}
 			if (!strcmp(lgopts[opt_idx].name, "total-num-mbufs")) {
 				n = atoi(optarg);
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index ccba71c..7e6ef80 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -186,7 +186,7 @@  struct fwd_engine * fwd_engines[] = {
 	NULL,
 };
 
-struct rte_mempool *mempools[RTE_MAX_NUMA_NODES];
+struct rte_mempool *mempools[RTE_MAX_NUMA_NODES * MAX_SEGS_BUFFER_SPLIT];
 uint16_t mempool_flags;
 
 struct fwd_config cur_fwd_config;
@@ -195,7 +195,10 @@  struct fwd_engine * fwd_engines[] = {
 uint32_t burst_tx_delay_time = BURST_TX_WAIT_US;
 uint32_t burst_tx_retry_num = BURST_TX_RETRIES;
 
-uint16_t mbuf_data_size = DEFAULT_MBUF_DATA_SIZE; /**< Mbuf data space size. */
+uint32_t mbuf_data_size_n = 1; /* Number of specified mbuf sizes. */
+uint16_t mbuf_data_size[MAX_SEGS_BUFFER_SPLIT] = {
+	DEFAULT_MBUF_DATA_SIZE
+}; /**< Mbuf data space size. */
 uint32_t param_total_num_mbufs = 0;  /**< number of mbufs in all pools - if
                                       * specified on command-line. */
 uint16_t stats_period; /**< Period to show statistics (disabled by default) */
@@ -955,14 +958,14 @@  struct extmem_param {
  */
 static struct rte_mempool *
 mbuf_pool_create(uint16_t mbuf_seg_size, unsigned nb_mbuf,
-		 unsigned int socket_id)
+		 unsigned int socket_id, unsigned int size_idx)
 {
 	char pool_name[RTE_MEMPOOL_NAMESIZE];
 	struct rte_mempool *rte_mp = NULL;
 	uint32_t mb_size;
 
 	mb_size = sizeof(struct rte_mbuf) + mbuf_seg_size;
-	mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name));
+	mbuf_poolname_build(socket_id, pool_name, sizeof(pool_name), size_idx);
 
 	TESTPMD_LOG(INFO,
 		"create a new mbuf pool <%s>: n=%u, size=%u, socket=%u\n",
@@ -1485,8 +1488,8 @@  struct extmem_param {
 				port->dev_info.rx_desc_lim.nb_mtu_seg_max;
 
 			if ((data_size + RTE_PKTMBUF_HEADROOM) >
-							mbuf_data_size) {
-				mbuf_data_size = data_size +
+							mbuf_data_size[0]) {
+				mbuf_data_size[0] = data_size +
 						 RTE_PKTMBUF_HEADROOM;
 				warning = 1;
 			}
@@ -1494,9 +1497,9 @@  struct extmem_param {
 	}
 
 	if (warning)
-		TESTPMD_LOG(WARNING, "Configured mbuf size %hu\n",
-			    mbuf_data_size);
-
+		TESTPMD_LOG(WARNING,
+			    "Configured mbuf size of the first segment %hu\n",
+			    mbuf_data_size[0]);
 	/*
 	 * Create pools of mbuf.
 	 * If NUMA support is disabled, create a single pool of mbuf in
@@ -1516,21 +1519,23 @@  struct extmem_param {
 	}
 
 	if (numa_support) {
-		uint8_t i;
+		uint8_t i, j;
 
 		for (i = 0; i < num_sockets; i++)
-			mempools[i] = mbuf_pool_create(mbuf_data_size,
-						       nb_mbuf_per_pool,
-						       socket_ids[i]);
+			for (j = 0; j < mbuf_data_size_n; j++)
+				mempools[i * MAX_SEGS_BUFFER_SPLIT + j] =
+					mbuf_pool_create(mbuf_data_size[j],
+							  nb_mbuf_per_pool,
+							  socket_ids[i], j);
 	} else {
-		if (socket_num == UMA_NO_CONFIG)
-			mempools[0] = mbuf_pool_create(mbuf_data_size,
-						       nb_mbuf_per_pool, 0);
-		else
-			mempools[socket_num] = mbuf_pool_create
-							(mbuf_data_size,
-							 nb_mbuf_per_pool,
-							 socket_num);
+		uint8_t i;
+
+		for (i = 0; i < mbuf_data_size_n; i++)
+			mempools[i] = mbuf_pool_create
+					(mbuf_data_size[i],
+					 nb_mbuf_per_pool,
+					 socket_num == UMA_NO_CONFIG ?
+					 0 : socket_num, i);
 	}
 
 	init_port_config();
@@ -1542,10 +1547,10 @@  struct extmem_param {
 	 */
 	for (lc_id = 0; lc_id < nb_lcores; lc_id++) {
 		mbp = mbuf_pool_find(
-			rte_lcore_to_socket_id(fwd_lcores_cpuids[lc_id]));
+			rte_lcore_to_socket_id(fwd_lcores_cpuids[lc_id]), 0);
 
 		if (mbp == NULL)
-			mbp = mbuf_pool_find(0);
+			mbp = mbuf_pool_find(0, 0);
 		fwd_lcores[lc_id]->mbp = mbp;
 		/* initialize GSO context */
 		fwd_lcores[lc_id]->gso_ctx.direct_pool = mbp;
@@ -2498,7 +2503,8 @@  struct extmem_param {
 				if ((numa_support) &&
 					(rxring_numa[pi] != NUMA_NO_CONFIG)) {
 					struct rte_mempool * mp =
-						mbuf_pool_find(rxring_numa[pi]);
+						mbuf_pool_find
+							(rxring_numa[pi], 0);
 					if (mp == NULL) {
 						printf("Failed to setup RX queue:"
 							"No mempool allocation"
@@ -2514,7 +2520,8 @@  struct extmem_param {
 					     mp);
 				} else {
 					struct rte_mempool *mp =
-						mbuf_pool_find(port->socket_id);
+						mbuf_pool_find
+							(port->socket_id, 0);
 					if (mp == NULL) {
 						printf("Failed to setup RX queue:"
 							"No mempool allocation"
@@ -2909,13 +2916,13 @@  struct extmem_param {
 pmd_test_exit(void)
 {
 	portid_t pt_id;
+	unsigned int i;
 	int ret;
-	int i;
 
 	if (test_done == 0)
 		stop_packet_forwarding();
 
-	for (i = 0 ; i < RTE_MAX_NUMA_NODES ; i++) {
+	for (i = 0 ; i < RTE_DIM(mempools) ; i++) {
 		if (mempools[i]) {
 			if (mp_alloc_type == MP_ALLOC_ANON)
 				rte_mempool_mem_iter(mempools[i], dma_unmap_cb,
@@ -2959,7 +2966,7 @@  struct extmem_param {
 			return;
 		}
 	}
-	for (i = 0 ; i < RTE_MAX_NUMA_NODES ; i++) {
+	for (i = 0 ; i < RTE_DIM(mempools) ; i++) {
 		if (mempools[i])
 			rte_mempool_free(mempools[i]);
 	}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index a4dfd4f..0975305 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -42,6 +42,13 @@ 
  */
 #define RTE_MAX_SEGS_PER_PKT 255 /**< nb_segs is a 8-bit unsigned char. */
 
+/*
+ * The maximum number of segments per packet is used to configure
+ * the buffer split feature, and also specifies the maximum number
+ * of optional Rx pools created to allocate the mbufs to split into.
+ */
+#define MAX_SEGS_BUFFER_SPLIT 8
+
 #define MAX_PKT_BURST 512
 #define DEF_PKT_BURST 32
 
@@ -403,7 +410,9 @@  struct queue_stats_mappings {
 extern uint8_t dcb_config;
 extern uint8_t dcb_test;
 
-extern uint16_t mbuf_data_size; /**< Mbuf data space size. */
+extern uint32_t mbuf_data_size_n;
+extern uint16_t mbuf_data_size[MAX_SEGS_BUFFER_SPLIT];
+/**< Mbuf data space sizes. */
 extern uint32_t param_total_num_mbufs;
 
 extern uint16_t stats_period;
@@ -615,17 +624,22 @@  struct mplsoudp_decap_conf {
 
 /* Mbuf Pools */
 static inline void
-mbuf_poolname_build(unsigned int sock_id, char* mp_name, int name_size)
+mbuf_poolname_build(unsigned int sock_id, char *mp_name,
+		    int name_size, unsigned int idx)
 {
-	snprintf(mp_name, name_size, "mbuf_pool_socket_%u", sock_id);
+	if (!idx)
+		snprintf(mp_name, name_size, "mbuf_pool_socket_%u", sock_id);
+	else
+		snprintf(mp_name, name_size, "mbuf_pool_socket_%u_%u",
+			 sock_id, idx);
 }
 
 static inline struct rte_mempool *
-mbuf_pool_find(unsigned int sock_id)
+mbuf_pool_find(unsigned int sock_id, unsigned int idx)
 {
 	char pool_name[RTE_MEMPOOL_NAMESIZE];
 
-	mbuf_poolname_build(sock_id, pool_name, sizeof(pool_name));
+	mbuf_poolname_build(sock_id, pool_name, sizeof(pool_name), idx);
 	return rte_mempool_lookup((const char *)pool_name);
 }
 
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index ec085c2..1eb0a10 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -107,9 +107,12 @@  The command line options are:
     Set the socket from which all memory is allocated in NUMA mode,
     where 0 <= N < number of sockets on the board.
 
-*   ``--mbuf-size=N``
+*   ``--mbuf-size=N[,N1[,...Nn]]``
 
-    Set the data size of the mbufs used to N bytes, where N < 65536. The default value is 2048.
+    Set the data size of the mbufs used to N bytes, where N < 65536.
+    The default value is 2048. If multiple mbuf-size values are specified,
+    extra memory pools will be created for allocating mbufs to receive
+    packets with the buffer split feature.
 
 *   ``--total-num-mbufs=N``