[dpdk-dev,v2,1/2] eal: allow user to override default pool handle

Message ID 20170720070613.18211-2-santosh.shukla@caviumnetworks.com (mailing list archive)
State Superseded, archived
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation fail Compilation issues

Commit Message

Santosh Shukla July 20, 2017, 7:06 a.m. UTC
  DPDK has support for both SW and HW mempools, but
currently the user is limited to the ring_mp_mc pool handle.
To use another pool handle, the user must update the
RTE_MEMPOOL_OPS_DEFAULT config option, then rebuild and run
with the desired pool handle.

Introduce an EAL option to override the default pool handle.

Now the user can override RTE_MEMPOOL_OPS_DEFAULT by passing a
pool handle to the EAL option `--mbuf-pool-ops="<pool-handle>"`.
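
For example, an application could then be launched as follows
(illustrative invocation only; the ops name must match a pool handler
actually built into the target, "octeontx_fpavf" is just an example):

    ./testpmd -c 0x3 -n 4 --mbuf-pool-ops="octeontx_fpavf" -- -i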

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
v1 --> v2:
(Changes per review feedback from Olivier)
- Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops().
- Added support in bsdapp too. 

A few changes were considered while working on v2:
- Chose the EAL arg name 'mbuf-pool-ops' out of the 3 names proposed by
  Olivier. The initial thought was to use 'mbuf-default-mempool-ops', but
  since that is way too long and users may find it difficult to use,
  'mbuf-pool-ops' was chosen, as that name is very close to the API name
  and the #define.
- Added a new RTE_ NAMESIZE define in rte_eal.h,
  'RTE_MBUF_POOL_OPS_NAMESIZE' (see the patch below).


 lib/librte_eal/bsdapp/eal/eal.c                 | 17 +++++++++++++++++
 lib/librte_eal/bsdapp/eal/rte_eal_version.map   |  1 +
 lib/librte_eal/common/eal_common_options.c      |  5 +++++
 lib/librte_eal/common/eal_internal_cfg.h        |  1 +
 lib/librte_eal/common/eal_options.h             |  2 ++
 lib/librte_eal/common/include/rte_eal.h         | 11 +++++++++++
 lib/librte_eal/linuxapp/eal/eal.c               | 18 ++++++++++++++++++
 lib/librte_eal/linuxapp/eal/rte_eal_version.map |  1 +
 lib/librte_mbuf/rte_mbuf.c                      |  5 +++--
 9 files changed, 59 insertions(+), 2 deletions(-)
  

Comments

Santosh Shukla Aug. 15, 2017, 8:07 a.m. UTC | #1
v3:
 - Rebased on top of v17.11-rc0
 - Updated version.map entry to v17.11.

v2:

DPDK has support for HW and SW mempools, and a given mempool
can be optimal for a specific PMD.
Example:
SW ring-based mempool for Intel NIC PMDs.
HW mempool manager dpaa2 for the dpaa2 PMD.
HW mempool manager fpa for the octeontx PMD.

There could be a use case where NICs from different vendors are used
on the same platform, and the user would like to configure mempools such
that each NIC uses its preferred mempool (for performance reasons).

The current mempool infrastructure does not support such a use case.

This patchset tries to address that problem in 2 steps:

0) Allow the user to dynamically configure the mempool handle by
passing the pool handle as an EAL arg: `--mbuf-pool-ops=<pool-handle>`.

1) Allow PMDs to advertise their preferred pool to the application.
From an application point of view:
- The application must ask the PMD about its preferred pool.
- The PMD responds with its preferred pool; otherwise
  CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD.

* The application programming sequence would be:
    char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
    rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
    rte_mempool_create_empty();
    rte_mempool_set_ops_byname( , pref_mempool, );
    rte_mempool_populate_default();
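
A fuller sketch of that sequence in C (illustrative only: the pool and
cache sizes are invented, error handling is minimal, and
rte_eth_dev_get_preferred_pool_ops() is the API proposed in patch 2/2):

    char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
    struct rte_mempool *mp;
    uint8_t ethdev_port_id = 0;   /* example port */

    /* ask the PMD driving this port for its preferred pool ops name */
    rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);

    /* create an empty pool, bind the preferred ops, then populate it */
    mp = rte_mempool_create_empty("mbuf_pool", 8192,
            sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
            256, sizeof(struct rte_pktmbuf_pool_private),
            rte_socket_id(), 0);
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "cannot create mempool\n");
    if (rte_mempool_set_ops_byname(mp, pref_mempool, NULL) != 0)
        rte_exit(EXIT_FAILURE, "cannot set mempool ops\n");
    rte_pktmbuf_pool_init(mp, NULL);   /* default data room size */
    rte_mempool_populate_default(mp);
    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);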

Change History:
v2 --> v3:
 - Changed version.map from DPDK_17.08 to DPDK_17.11.

v1 --> v2:
 - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops().
   (suggested by Olivier)
 - Renamed _get_preferred_pool to _get_preferred_pool_ops().
 - Updated the API description and changed the return value from -EINVAL
   to -ENOTSUP. (suggested by Olivier)
* Specific details of the v1-->v2 changes are described in each patch.

Checkpatch status:
- None.

Work History:
* Refer [1] for v1.

Thanks.

[1] http://dpdk.org/ml/archives/dev/2017-June/067022.html


Santosh Shukla (2):
  eal: allow user to override default pool handle
  ethdev: allow pmd to advertise pool handle

 lib/librte_eal/bsdapp/eal/eal.c                 | 17 +++++++++++++++++
 lib/librte_eal/bsdapp/eal/rte_eal_version.map   |  7 +++++++
 lib/librte_eal/common/eal_common_options.c      |  5 +++++
 lib/librte_eal/common/eal_internal_cfg.h        |  1 +
 lib/librte_eal/common/eal_options.h             |  2 ++
 lib/librte_eal/common/include/rte_eal.h         | 11 +++++++++++
 lib/librte_eal/linuxapp/eal/eal.c               | 18 ++++++++++++++++++
 lib/librte_eal/linuxapp/eal/rte_eal_version.map |  7 +++++++
 lib/librte_ether/rte_ethdev.c                   | 18 ++++++++++++++++++
 lib/librte_ether/rte_ethdev.h                   | 21 +++++++++++++++++++++
 lib/librte_ether/rte_ether_version.map          |  7 +++++++
 lib/librte_mbuf/rte_mbuf.c                      |  5 +++--
 12 files changed, 117 insertions(+), 2 deletions(-)
  
Sergio Gonzalez Monroy Sept. 4, 2017, 9:41 a.m. UTC | #2
On 15/08/2017 09:07, Santosh Shukla wrote:
> v3:
>   - Rebased on top of v17.11-rc0
>   - Updated version.map entry to v17.11.
>
> v2:
>
> DPDK has support for HW and SW mempools, and a given mempool
> can be optimal for a specific PMD.
> Example:
> SW ring-based mempool for Intel NIC PMDs.
> HW mempool manager dpaa2 for the dpaa2 PMD.
> HW mempool manager fpa for the octeontx PMD.
>
> There could be a use case where NICs from different vendors are used
> on the same platform, and the user would like to configure mempools such
> that each NIC uses its preferred mempool (for performance reasons).
>
> The current mempool infrastructure does not support such a use case.
>
> This patchset tries to address that problem in 2 steps:
>
> 0) Allowing user to dynamically configure mempool handle by
> passing pool handle as eal arg to `--mbuf-pool-ops=<pool-handle>`.
>
> 1) Allowing PMD's to advertise their preferred pool to an application.
>  From an application point of view:
> - The application must ask PMD about their preferred pool.
> - PMD to respond back with preferred pool otherwise
>    CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD.
>
> * Application programming sequencing would be
>      char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
>      rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
>      rte_mempool_create_empty();
>      rte_mempool_set_ops_byname( , pref_mempool, );
>      rte_mempool_populate_default();

What about introducing an API like:
rte_pktmbuf_pool_create_with_ops(..., ops_name, config_pool);

I think that API would help for the case where the application wants an
mbuf pool with, e.g., the stack handler.
Sure, we can do the empty/set_ops/populate sequence, but the only thing
we want to change from the default pktmbuf_pool_create API is the pool
handler.

The application just needs to decide which ops handler to use, either the
default or one suggested by the PMD.

I think ideally we would have similar APIs (see the sketch below):
- rte_mempool_create_with_ops (...)
- rte_mempool_xmem_create_with_ops (...)
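
A possible prototype for the pktmbuf variant, purely to illustrate the
proposal (this API does not exist in DPDK; the argument list mirrors
rte_pktmbuf_pool_create() plus the ops name and its config):

    struct rte_mempool *
    rte_pktmbuf_pool_create_with_ops(const char *name, unsigned int n,
            unsigned int cache_size, uint16_t priv_size,
            uint16_t data_room_size, int socket_id,
            const char *ops_name, void *ops_cfg);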


Thanks,
Sergio

> Change History:
> v2 --> v3:
>   - Changed version.map from DPDK_17.08 to DPDK_17.11.
>
> v1 --> v2:
>   - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops().
> 	(suggested by Olivier)
>   - Renamed _get_preferred_pool to _get_preferred_pool_ops().
>   - Updated API description and changes return val from -EINVAL to -ENOTSUP.
>     (Suggested by Olivier)
> * Specific details on v1-->v2 change summary described in each patch.
>
> Checkpatch status:
> - None.
>
> Work History:
> * Refer [1] for v1.
>
> Thanks.
>
> [1] http://dpdk.org/ml/archives/dev/2017-June/067022.html
>
>
> Santosh Shukla (2):
>    eal: allow user to override default pool handle
>    ethdev: allow pmd to advertise pool handle
>
>   lib/librte_eal/bsdapp/eal/eal.c                 | 17 +++++++++++++++++
>   lib/librte_eal/bsdapp/eal/rte_eal_version.map   |  7 +++++++
>   lib/librte_eal/common/eal_common_options.c      |  5 +++++
>   lib/librte_eal/common/eal_internal_cfg.h        |  1 +
>   lib/librte_eal/common/eal_options.h             |  2 ++
>   lib/librte_eal/common/include/rte_eal.h         | 11 +++++++++++
>   lib/librte_eal/linuxapp/eal/eal.c               | 18 ++++++++++++++++++
>   lib/librte_eal/linuxapp/eal/rte_eal_version.map |  7 +++++++
>   lib/librte_ether/rte_ethdev.c                   | 18 ++++++++++++++++++
>   lib/librte_ether/rte_ethdev.h                   | 21 +++++++++++++++++++++
>   lib/librte_ether/rte_ether_version.map          |  7 +++++++
>   lib/librte_mbuf/rte_mbuf.c                      |  5 +++--
>   12 files changed, 117 insertions(+), 2 deletions(-)
>
  
Santosh Shukla Sept. 4, 2017, 1:20 p.m. UTC | #3
Hi Sergio,


On Monday 04 September 2017 03:11 PM, Sergio Gonzalez Monroy wrote:
> On 15/08/2017 09:07, Santosh Shukla wrote:
>> v3:
>>   - Rebased on top of v17.11-rc0
>>   - Updated version.map entry to v17.11.
>>
>> v2:
>>
>> DPDK has support for HW and SW mempools, and a given mempool
>> can be optimal for a specific PMD.
>> Example:
>> SW ring-based mempool for Intel NIC PMDs.
>> HW mempool manager dpaa2 for the dpaa2 PMD.
>> HW mempool manager fpa for the octeontx PMD.
>>
>> There could be a use case where NICs from different vendors are used
>> on the same platform, and the user would like to configure mempools such
>> that each NIC uses its preferred mempool (for performance reasons).
>>
>> The current mempool infrastructure does not support such a use case.
>>
>> This patchset tries to address that problem in 2 steps:
>>
>> 0) Allowing user to dynamically configure mempool handle by
>> passing pool handle as eal arg to `--mbuf-pool-ops=<pool-handle>`.
>>
>> 1) Allowing PMD's to advertise their preferred pool to an application.
>>  From an application point of view:
>> - The application must ask PMD about their preferred pool.
>> - PMD to respond back with preferred pool otherwise
>>    CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD.
>>
>> * Application programming sequencing would be
>>      char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
>>      rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
>>      rte_mempool_create_empty();
>>      rte_mempool_set_ops_byname( , pref_mempool, );
>>      rte_mempool_populate_default();
>
> What about introducing an API like:
> rte_pktmbuf_pool_create_with_ops (..., ops_name, config_pool);
>
> I think that API would help for the case the application wants an mbuf pool with ie. stack handler.
> Sure we can do the empty/set_ops/populate sequence, but the only thing we want to change from default pktmbuf_pool_create API is the pool handler.
>
> Application just needs to decide the ops handler to use, either default or one suggested by PMD?
>
> I think ideally we would have similar APIs:
> - rte_mempool_create_with_ops (...)
> - rte_mempool_xmem_create_with_ops (...)
>
Olivier, what do you think?

No strong opinion, but can we please close on/agree to the API? I don't
want to keep rebasing and pushing a new variant in every other version.
This whole series and its dependent series, i.e. the octeontx PMD/FPA PMD,
are targeted for the v17.11-rc1 release, so if we can close on the API and
get it reviewed this week, I would really appreciate it.

Thanks.

>
> Thanks,
> Sergio
>
>> Change History:
>> v2 --> v3:
>>   - Changed version.map from DPDK_17.08 to DPDK_17.11.
>>
>> v1 --> v2:
>>   - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops().
>>     (suggested by Olivier)
>>   - Renamed _get_preferred_pool to _get_preferred_pool_ops().
>>   - Updated API description and changes return val from -EINVAL to -ENOTSUP.
>>     (Suggested by Olivier)
>> * Specific details on v1-->v2 change summary described in each patch.
>>
>> Checkpatch status:
>> - None.
>>
>> Work History:
>> * Refer [1] for v1.
>>
>> Thanks.
>>
>> [1] http://dpdk.org/ml/archives/dev/2017-June/067022.html
>>
>>
>> Santosh Shukla (2):
>>    eal: allow user to override default pool handle
>>    ethdev: allow pmd to advertise pool handle
>>
>>   lib/librte_eal/bsdapp/eal/eal.c                 | 17 +++++++++++++++++
>>   lib/librte_eal/bsdapp/eal/rte_eal_version.map   |  7 +++++++
>>   lib/librte_eal/common/eal_common_options.c      |  5 +++++
>>   lib/librte_eal/common/eal_internal_cfg.h        |  1 +
>>   lib/librte_eal/common/eal_options.h             |  2 ++
>>   lib/librte_eal/common/include/rte_eal.h         | 11 +++++++++++
>>   lib/librte_eal/linuxapp/eal/eal.c               | 18 ++++++++++++++++++
>>   lib/librte_eal/linuxapp/eal/rte_eal_version.map |  7 +++++++
>>   lib/librte_ether/rte_ethdev.c                   | 18 ++++++++++++++++++
>>   lib/librte_ether/rte_ethdev.h                   | 21 +++++++++++++++++++++
>>   lib/librte_ether/rte_ether_version.map          |  7 +++++++
>>   lib/librte_mbuf/rte_mbuf.c                      |  5 +++--
>>   12 files changed, 117 insertions(+), 2 deletions(-)
>>
>
  
Olivier Matz Sept. 4, 2017, 1:34 p.m. UTC | #4
Hi Sergio,

On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote:
> On 15/08/2017 09:07, Santosh Shukla wrote:
> > v3:
> >   - Rebased on top of v17.11-rc0
> >   - Updated version.map entry to v17.11.
> > 
> > v2:
> > 
> > DPDK has support for HW and SW mempools, and a given mempool
> > can be optimal for a specific PMD.
> > Example:
> > SW ring-based mempool for Intel NIC PMDs.
> > HW mempool manager dpaa2 for the dpaa2 PMD.
> > HW mempool manager fpa for the octeontx PMD.
> >
> > There could be a use case where NICs from different vendors are used
> > on the same platform, and the user would like to configure mempools such
> > that each NIC uses its preferred mempool (for performance reasons).
> >
> > The current mempool infrastructure does not support such a use case.
> > 
> > This patchset tries to address that problem in 2 steps:
> > 
> > 0) Allowing user to dynamically configure mempool handle by
> > passing pool handle as eal arg to `--mbuf-pool-ops=<pool-handle>`.
> > 
> > 1) Allowing PMD's to advertise their preferred pool to an application.
> >  From an application point of view:
> > - The application must ask PMD about their preferred pool.
> > - PMD to respond back with preferred pool otherwise
> >    CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD.
> > 
> > * Application programming sequencing would be
> >      char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
> >      rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
> >      rte_mempool_create_empty();
> >      rte_mempool_set_ops_byname( , pref_mempool, );
> >      rte_mempool_populate_default();
> 
> What about introducing an API like:
> rte_pktmbuf_pool_create_with_ops (..., ops_name, config_pool);
> 
> I think that API would help for the case the application wants an mbuf pool
> with ie. stack handler.
> Sure we can do the empty/set_ops/populate sequence, but the only thing we
> want to change from default pktmbuf_pool_create API is the pool handler.
> 
> Application just needs to decide the ops handler to use, either default or
> one suggested by PMD?
> 
> I think ideally we would have similar APIs:
> - rte_mempool_create_with_ops (...)
> - rte_mempool_xmem_create_with_ops (...)

Today, we may only want to change the mempool handler, but if we
need to change something else tomorrow, we would need to add another
parameter again, breaking the ABI.

If we pass a config structure, adding a new field in it would also break
the ABI, unless the structure is opaque, with accessors. These accessors
would be functions (ex: mempool_cfg_new, mempool_cfg_set_pool_ops, ...).
This is not much different from what we have now.
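
A sketch of that opaque-config idea (all names below are invented here
for illustration; none of them exist in DPDK):

    struct rte_mempool_cfg;   /* opaque to applications */

    struct rte_mempool_cfg *rte_mempool_cfg_new(void);
    int rte_mempool_cfg_set_pool_ops(struct rte_mempool_cfg *cfg,
                                     const char *ops_name);
    struct rte_mempool *rte_pktmbuf_pool_create_with_cfg(const char *name,
            unsigned int n, int socket_id,
            const struct rte_mempool_cfg *cfg);
    void rte_mempool_cfg_free(struct rte_mempool_cfg *cfg);

New knobs would then be added as new accessors, without changing the
create function's signature.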

The advantage I can see of working on a config structure instead of
directly on a mempool is that the API can be reused to build a default
config.

That said, I think it's quite orthogonal to this patch since we still
require the ethdev API.

Olivier
  
Sergio Gonzalez Monroy Sept. 4, 2017, 2:24 p.m. UTC | #5
Hi Olivier,

On 04/09/2017 14:34, Olivier MATZ wrote:
> Hi Sergio,
>
> On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote:
>> On 15/08/2017 09:07, Santosh Shukla wrote:
>>> v3:
>>>    - Rebased on top of v17.11-rc0
>>>    - Updated version.map entry to v17.11.
>>>
>>> v2:
>>>
>>> DPDK has support for HW and SW mempools, and a given mempool
>>> can be optimal for a specific PMD.
>>> Example:
>>> SW ring-based mempool for Intel NIC PMDs.
>>> HW mempool manager dpaa2 for the dpaa2 PMD.
>>> HW mempool manager fpa for the octeontx PMD.
>>>
>>> There could be a use case where NICs from different vendors are used
>>> on the same platform, and the user would like to configure mempools such
>>> that each NIC uses its preferred mempool (for performance reasons).
>>>
>>> The current mempool infrastructure does not support such a use case.
>>>
>>> This patchset tries to address that problem in 2 steps:
>>>
>>> 0) Allowing user to dynamically configure mempool handle by
>>> passing pool handle as eal arg to `--mbuf-pool-ops=<pool-handle>`.
>>>
>>> 1) Allowing PMD's to advertise their preferred pool to an application.
>>>   From an application point of view:
>>> - The application must ask PMD about their preferred pool.
>>> - PMD to respond back with preferred pool otherwise
>>>     CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD.
>>>
>>> * Application programming sequencing would be
>>>       char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
>>>       rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
>>>       rte_mempool_create_empty();
>>>       rte_mempool_set_ops_byname( , pref_mempool, );
>>>       rte_mempool_populate_default();
>> What about introducing an API like:
>> rte_pktmbuf_pool_create_with_ops (..., ops_name, config_pool);
>>
>> I think that API would help for the case the application wants an mbuf pool
>> with ie. stack handler.
>> Sure we can do the empty/set_ops/populate sequence, but the only thing we
>> want to change from default pktmbuf_pool_create API is the pool handler.
>>
>> Application just needs to decide the ops handler to use, either default or
>> one suggested by PMD?
>>
>> I think ideally we would have similar APIs:
>> - rte_mempool_create_with_ops (...)
>> - rte_mempool_xmem_create_with_ops (...)
> Today, we may only want to change the mempool handler, but if we
> need to change something else tomorrow, we would need to add another
> parameter again, breaking the ABI.
>
> If we pass a config structure, adding a new field in it would also break the
> ABI, except if the structure is opaque, with accessors. These accessors would be
> functions (ex: mempool_cfg_new, mempool_cfg_set_pool_ops, ...). This is not so
> much different than what we have now.
>
> The advantage I can see of working on a config structure instead of directly on
> a mempool is that the api can be reused to build a default config.
>
> That said, I think it's quite orthogonal to this patch since we still require
> the ethdev api.
>
> Olivier

Fair enough.

Just to double check that we are on the same page:
- rte_mempool_create is just there for backwards compatibility, and a
sequence of create_empty -> set_ops (optional) -> init ->
populate_default -> obj_iter (optional) is the recommended way of
creating mempools.
- if an application wants to use rte_mempool_xmem_create with a different
pool handler, it needs to replicate the function and add the set_ops step.

Now if rte_pktmbuf_pool_create is still the preferred mechanism for
applications to create mbuf pools, wouldn't it make sense to offer the
option of either passing the ops_name as suggested before, or letting the
application just set a different pool handler? I understand that it means
either breaking the API or introducing a new API, but is the solution to
basically "copy" the whole function in the application and add an
optional step (set_ops)?

With these patches we can:
- change the default pool handler with an EAL option, which does *not*
allow pktmbuf_pool_create with different handlers.
- check the PMD's preferred/supported handlers, but then we need to
implement a function with the whole sequence; we cannot use
pktmbuf_pool_create.

It looks to me then that any application that wants to use a pool handler
other than the default/ring needs to implement its own create_pool with
set_ops.

Thanks,
Sergio
  
Olivier Matz Sept. 5, 2017, 7:47 a.m. UTC | #6
On Mon, Sep 04, 2017 at 03:24:38PM +0100, Sergio Gonzalez Monroy wrote:
> Hi Olivier,
> 
> On 04/09/2017 14:34, Olivier MATZ wrote:
> > Hi Sergio,
> > 
> > On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote:
> > > On 15/08/2017 09:07, Santosh Shukla wrote:
> > > > * Application programming sequencing would be
> > > >       char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
> > > >       rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
> > > >       rte_mempool_create_empty();
> > > >       rte_mempool_set_ops_byname( , pref_mempool, );
> > > >       rte_mempool_populate_default();
> > > What about introducing an API like:
> > > rte_pktmbuf_pool_create_with_ops (..., ops_name, config_pool);
> > > 
> > > I think that API would help for the case the application wants an mbuf pool
> > > with ie. stack handler.
> > > Sure we can do the empty/set_ops/populate sequence, but the only thing we
> > > want to change from default pktmbuf_pool_create API is the pool handler.
> > > 
> > > Application just needs to decide the ops handler to use, either default or
> > > one suggested by PMD?
> > > 
> > > I think ideally we would have similar APIs:
> > > - rte_mempool_create_with_ops (...)
> > > - rte_mempool_xmem_create_with_ops (...)
> > Today, we may only want to change the mempool handler, but if we
> > need to change something else tomorrow, we would need to add another
> > parameter again, breaking the ABI.
> > 
> > If we pass a config structure, adding a new field in it would also break the
> > ABI, except if the structure is opaque, with accessors. These accessors would be
> > functions (ex: mempool_cfg_new, mempool_cfg_set_pool_ops, ...). This is not so
> > much different than what we have now.
> > 
> > The advantage I can see of working on a config structure instead of directly on
> > a mempool is that the api can be reused to build a default config.
> > 
> > That said, I think it's quite orthogonal to this patch since we still require
> > the ethdev api.
> 
> Fair enough.
> 
> Just to double check that we are on the same page:
> - rte_mempool_create is just there for backwards compatibility and a
> sequence of create_empty -> set_ops (optional) ->init -> populate_default ->
> obj_iter (optional) is the recommended way of creating mempools.

Yes, I think rte_mempool_create() has too many arguments.

> - if application wants to use rte_mempool_xmem_create with different pool
> handler needs to replicate function and add set_ops step.

Yes. And now that xen support is going to be removed, I'm wondering if
this function is still required, given the API is not that easy to
use. Calling rte_mempool_populate_phys() several times looks more
flexible. But this is also another topic.

> Now if rte_pktmbuf_pool_create is still the preferred mechanism for
> applications to create mbuf pools, wouldn't it make sense to offer the
> option of either pass the ops_name as suggested before or for the
> application to just set a different pool handler? I understand the it is
> either breaking API or introducing a new API, but is the solution to
> basically "copy" the whole function in the application and add an optional
> step (set_ops)?

I was quite reticent about introducing
rte_pktmbuf_pool_create_with_ops() because, for the same reasons, we
would also want to introduce functions to create a mempool that uses a
different pktmbuf_init() or pool_init() callback, or to create the pool
in external memory, ... and we would end up with functions that have too
many arguments.

I like the approach of having several simple functions, because it's
easier to read (even if the code is longer), and it's easily extensible.

Now if we feel that the mempool ops is more important than the other
parameters, we can consider adding it to rte_pktmbuf_pool_create().

> With these patches we can:
> - change the default pool handler with EAL option, which does *not* allow
> for pktmbuf_pool_create with different handlers.
> - We can check the PMDs preferred/supported handlers but then we need to
> implement function with whole sequence, cannot use pktmbuf_pool_create.
> 
> It looks to me then that any application that wants to use different pool
> handlers than default/ring need to implement their own create_pool with
> set_ops.
  
Jerin Jacob Sept. 5, 2017, 8:11 a.m. UTC | #7
-----Original Message-----
> Date: Tue, 5 Sep 2017 09:47:26 +0200
> From: Olivier MATZ <olivier.matz@6wind.com>
> To: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
> CC: Santosh Shukla <santosh.shukla@caviumnetworks.com>, dev@dpdk.org,
>  thomas@monjalon.net, jerin.jacob@caviumnetworks.com,
>  hemant.agrawal@nxp.com
> Subject: Re: [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle
> User-Agent: NeoMutt/20170113 (1.7.2)
> 
> On Mon, Sep 04, 2017 at 03:24:38PM +0100, Sergio Gonzalez Monroy wrote:
> > Hi Olivier,
> > 
> > On 04/09/2017 14:34, Olivier MATZ wrote:
> > > Hi Sergio,
> > > 
> > > On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote:
> > > > On 15/08/2017 09:07, Santosh Shukla wrote:
> > > > > * Application programming sequencing would be
> > > > >       char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
> > > > >       rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
> > > > >       rte_mempool_create_empty();
> > > > >       rte_mempool_set_ops_byname( , pref_mempool, );
> > > > >       rte_mempool_populate_default();
> > > > What about introducing an API like:
> > > > rte_pktmbuf_pool_create_with_ops (..., ops_name, config_pool);
> > > > 
> > > > I think that API would help for the case the application wants an mbuf pool
> > > > with ie. stack handler.
> > > > Sure we can do the empty/set_ops/populate sequence, but the only thing we
> > > > want to change from default pktmbuf_pool_create API is the pool handler.
> > > > 
> > > > Application just needs to decide the ops handler to use, either default or
> > > > one suggested by PMD?
> > > > 
> > > > I think ideally we would have similar APIs:
> > > > - rte_mempool_create_with_ops (...)
> > > > - rte_mempool_xmem_create_with_ops (...)
> > > Today, we may only want to change the mempool handler, but if we
> > > need to change something else tomorrow, we would need to add another
> > > parameter again, breaking the ABI.
> > > 
> > > If we pass a config structure, adding a new field in it would also break the
> > > ABI, except if the structure is opaque, with accessors. These accessors would be
> > > functions (ex: mempool_cfg_new, mempool_cfg_set_pool_ops, ...). This is not so
> > > much different than what we have now.
> > > 
> > > The advantage I can see of working on a config structure instead of directly on
> > > a mempool is that the api can be reused to build a default config.
> > > 
> > > That said, I think it's quite orthogonal to this patch since we still require
> > > the ethdev api.
> > 
> > Fair enough.
> > 
> > Just to double check that we are on the same page:
> > - rte_mempool_create is just there for backwards compatibility and a
> > sequence of create_empty -> set_ops (optional) ->init -> populate_default ->
> > obj_iter (optional) is the recommended way of creating mempools.
> 
> Yes, I think rte_mempool_create() has too many arguments.
> 
> > - if application wants to use rte_mempool_xmem_create with different pool
> > handler needs to replicate function and add set_ops step.
> 
> Yes. And now that xen support is going to be removed, I'm wondering if
> this function is still required, given the API is not that easy to
> use. Calling rte_mempool_populate_phys() several times looks more
> flexible. But this is also another topic.
> 
> > Now if rte_pktmbuf_pool_create is still the preferred mechanism for
> > applications to create mbuf pools, wouldn't it make sense to offer the
> > option of either pass the ops_name as suggested before or for the
> > application to just set a different pool handler? I understand that it is
> > either breaking API or introducing a new API, but is the solution to
> > basically "copy" the whole function in the application and add an optional
> > step (set_ops)?
> 
> I was quite reticent about introducing
> rte_pktmbuf_pool_create_with_ops() because for the same reasons, we
> would also want to introduce functions to create a mempool that use a
> different pktmbuf_init() or pool_init() callback, or to create the pool
> in external memory, ... and we would end up with functions with too
> many arguments.
> 
> I like the approach of having several simple functions, because it's
> easier to read (even if the code is longer), and it's easily extensible.
> 
> Now if we feel that the mempool ops is more important than the other
> parameters, we can consider adding it to rte_pktmbuf_pool_create().

Yes. I agree with Sergio here.

Something we could target for v18.02.
  
Santosh Shukla Sept. 11, 2017, 3:18 p.m. UTC | #8
v4:
- Includes v3 review comment changes.
  Patches rebased on 06791a4bce: ethdev: get the supported pools for a port

v3:
 - Rebased on top of v17.11-rc0
 - Updated version.map entry to v17.11.

v2:

DPDK has support for HW and SW mempools, and a given mempool
can be optimal for a specific PMD.
Example:
SW ring-based mempool for Intel NIC PMDs.
HW mempool manager dpaa2 for the dpaa2 PMD.
HW mempool manager fpa for the octeontx PMD.

There could be a use case where NICs from different vendors are used
on the same platform, and the user would like to configure mempools such
that each NIC uses its preferred mempool (for performance reasons).

The current mempool infrastructure does not support such a use case.

This patchset tries to address that problem in 2 steps:

0) Allow the user to dynamically configure the mempool handle by
passing the pool handle as an EAL arg: `--mbuf-pool-ops=<pool-handle>`.

1) Allow PMDs to advertise their pool capability to the application.
From an application point of view:
- The application must ask the PMD about supported pools.
- The PMD advertises its pool capability in priority order:
  '0' - best-match mempool ops for _that_ port (highest priority)
  '1' - the PMD supports this mempool ops.
- Based on that input from the PMD, the application chooses a pool handle
  and creates the pool (see the sketch below).
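
A sketch of how an application might probe a handler with the
rte_eth_dev_pools_ops_supported() API proposed in patch 2/2 (the exact
signature is defined in that patch; the return values follow the priority
order above, and "stack" is just an example ops name):

    uint16_t port_id = 0;              /* example port */
    const char *ops_name = "stack";    /* example candidate handler */
    int prio = rte_eth_dev_pools_ops_supported(port_id, ops_name);

    if (prio == 0)
        printf("%s is the best pool ops for port %d\n", ops_name, port_id);
    else if (prio == 1)
        printf("%s is supported on port %d\n", ops_name, port_id);
    else
        printf("%s not usable on port %d (ret %d)\n", ops_name, port_id, prio);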

Change History:
v3 --> v4:
- Removed RTE_MBUF_XXX_POOL_NAMESIZE macro and replaced mbuf_pool_name
  with const. (Suggested by Olivier)
- Added eal arg info in doc guide.
- Removed the _preferred_pool() API and replaced it with the
  rte_eth_dev_pools_ops_supported() API (suggested by Olivier)


v2 --> v3:
 - Changed version.map from DPDK_17.08 to DPDK_17.11.

v1 --> v2:
 - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops().
   (suggested by Olivier)
 - Renamed _get_preferred_pool to _get_preferred_pool_ops().
 - Updated the API description and changed the return value from -EINVAL
   to -ENOTSUP. (suggested by Olivier)
* Specific details of the v1-->v2 changes are described in each patch.

Checkpatch status:
- None.

Work History:
* Refer [1] for v1.

Thanks.

[1] http://dpdk.org/ml/archives/dev/2017-June/067022.html

Santosh Shukla (2):
  eal: allow user to override default pool handle
  ethdev: get the supported pools for a port

 doc/guides/freebsd_gsg/build_sample_apps.rst    |  3 +++
 doc/guides/linux_gsg/build_sample_apps.rst      |  3 +++
 doc/guides/testpmd_app_ug/run_app.rst           |  4 ++++
 lib/librte_eal/bsdapp/eal/eal.c                 | 10 ++++++++++
 lib/librte_eal/bsdapp/eal/rte_eal_version.map   |  7 +++++++
 lib/librte_eal/common/eal_common_options.c      |  3 +++
 lib/librte_eal/common/eal_internal_cfg.h        |  2 +-
 lib/librte_eal/common/eal_options.h             |  2 ++
 lib/librte_eal/common/include/rte_eal.h         |  9 +++++++++
 lib/librte_eal/linuxapp/eal/eal.c               | 11 +++++++++++
 lib/librte_eal/linuxapp/eal/rte_eal_version.map |  7 +++++++
 lib/librte_ether/rte_ethdev.c                   | 24 ++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev.h                   | 24 ++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev_version.map         |  7 +++++++
 lib/librte_mbuf/rte_mbuf.c                      |  5 +++--
 15 files changed, 118 insertions(+), 3 deletions(-)
  
Santosh Shukla Sept. 13, 2017, 10 a.m. UTC | #9
Hi Olivier,


On Monday 11 September 2017 08:48 PM, Santosh Shukla wrote:
> v4:
> - Includes v3 review comment changes.
>   Patches rebased on 06791a4bce: ethdev: get the supported pools for a port 

are you fine with v4? Thanks.
  
Santosh Shukla Sept. 19, 2017, 8:28 a.m. UTC | #10
Hi Olivier,


On Wednesday 13 September 2017 03:30 PM, santosh wrote:
> Hi Olivier,
>
>
> On Monday 11 September 2017 08:48 PM, Santosh Shukla wrote:
>> v4:
>> - Includes v3 review comment changes.
>>   Patches rebased on 06791a4bce: ethdev: get the supported pools for a port 
> are you fine with v4? Thanks.
>
Ping? Thanks.
  
Olivier Matz Sept. 25, 2017, 7:24 a.m. UTC | #11
Hi Santosh,

On Tue, Sep 19, 2017 at 01:58:14PM +0530, santosh wrote:
> Hi Olivier,
> 
> 
> On Wednesday 13 September 2017 03:30 PM, santosh wrote:
> > Hi Olivier,
> >
> >
> > On Monday 11 September 2017 08:48 PM, Santosh Shukla wrote:
> >> v4:
> >> - Includes v3 review comment changes.
> >>   Patches rebased on 06791a4bce: ethdev: get the supported pools for a port 
> > are you fine with v4? Thanks.
> >
> Ping? Thanks.

Really sorry for the late review.
I'm sending some minor comments, hoping the patches can be
integrated in RC1.

Olivier
  
Santosh Shukla Sept. 25, 2017, 9:58 p.m. UTC | #12
On Monday 25 September 2017 08:24 AM, Olivier MATZ wrote:
> Hi Santosh,
>
> On Tue, Sep 19, 2017 at 01:58:14PM +0530, santosh wrote:
>> Hi Olivier,
>>
>>
>> On Wednesday 13 September 2017 03:30 PM, santosh wrote:
>>> Hi Olivier,
>>>
>>>
>>> On Monday 11 September 2017 08:48 PM, Santosh Shukla wrote:
>>>> v4:
>>>> - Includes v3 review comment changes.
>>>>   Patches rebased on 06791a4bce: ethdev: get the supported pools for a port 
>>> are you fine with v4? Thanks.
>>>
>> Ping? Thanks.
> Really sorry for the late review.
> I'm sending some minor comments, hoping the patches can be
> integrated in RC1.

The mempool nat_align series and this series are blocking the octeontx
PMD; they must get into -rc1.

I would appreciate it if you could spend a little time reviewing. The
design part of those series has been addressed; only style and renaming
are under discussion, and style comments have been coming in over weeks,
which is blocking patch progress. A quick review would be much
appreciated.

Thanks.

> Olivier
  
Santosh Shukla Oct. 1, 2017, 9:14 a.m. UTC | #13
v5:
- Includes minor v4 review comment changes.
  Patches rebased on upstream tip / commit id: 5dce9fcdb2

v4:
- Includes v3 review comment changes.
  Patches rebased on 06791a4bce: ethdev: get the supported pools for a port 

v3:
 - Rebased on top of v17.11-rc0
 - Updated version.map entry to v17.11.

v2:

DPDK has support for HW and SW mempools, and a given mempool
can be optimal for a specific PMD.
Example:
SW ring-based mempool for Intel NIC PMDs.
HW mempool manager dpaa2 for the dpaa2 PMD.
HW mempool manager fpa for the octeontx PMD.

There could be a use case where NICs from different vendors are used
on the same platform, and the user would like to configure mempools such
that each NIC uses its preferred mempool (for performance reasons).

The current mempool infrastructure does not support such a use case.

This patchset tries to address that problem in 2 steps:

0) Allow the user to dynamically configure the mempool handle by
passing the pool handle as an EAL arg: `--mbuf-pool-ops-name=<pool-handle>`.

1) Allow PMDs to advertise their pool capability to the application.
From an application point of view:
- The application must ask the PMD about supported pools.
- The PMD advertises its pool capability in priority order:
  '0' - best-match mempool ops for _that_ port (highest priority)
  '1' - the PMD supports this mempool ops.
- Based on that input from the PMD, the application chooses a pool handle
  and creates the pool.

Change History:
v4 --> v5:
- Renamed mbuf_pool_name to mbuf_pool_ops_name in [01/02]. (suggested by
  Olivier)
- Incorporated review comments from Olivier for [02/02].
  Refer to the specific patch for v5 change history.

v3 --> v4:
- Removed RTE_MBUF_XXX_POOL_NAMESIZE macro and replaced mbuf_pool_name
  with const. (Suggested by Olivier)
- Added eal arg info in doc guide.
- Removed the _preferred_pool() API and replaced it with the
  rte_eth_dev_pools_ops_supported() API (suggested by Olivier)


v2 --> v3:
 - Changed version.map from DPDK_17.08 to DPDK_17.11.

v1 --> v2:
 - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops().
   (suggested by Olivier)
 - Renamed _get_preferred_pool to _get_preferred_pool_ops().
 - Updated the API description and changed the return value from -EINVAL
   to -ENOTSUP. (suggested by Olivier)
* Specific details of the v1-->v2 changes are described in each patch.

Checkpatch status:
- None.

Work History:
* Refer [1] for v1.

Thanks.

[1] http://dpdk.org/ml/archives/dev/2017-June/067022.html

Santosh Shukla (2):
  eal: allow user to override default pool handle
  ethdev: get the supported pool for a port

 doc/guides/freebsd_gsg/build_sample_apps.rst    |  3 +++
 doc/guides/linux_gsg/build_sample_apps.rst      |  3 +++
 doc/guides/testpmd_app_ug/run_app.rst           |  4 ++++
 lib/librte_eal/bsdapp/eal/eal.c                 | 10 ++++++++++
 lib/librte_eal/bsdapp/eal/rte_eal_version.map   |  7 +++++++
 lib/librte_eal/common/eal_common_options.c      |  3 +++
 lib/librte_eal/common/eal_internal_cfg.h        |  2 +-
 lib/librte_eal/common/eal_options.h             |  2 ++
 lib/librte_eal/common/include/rte_eal.h         |  9 +++++++++
 lib/librte_eal/linuxapp/eal/eal.c               | 11 +++++++++++
 lib/librte_eal/linuxapp/eal/rte_eal_version.map |  7 +++++++
 lib/librte_ether/rte_ethdev.c                   | 18 ++++++++++++++++++
 lib/librte_ether/rte_ethdev.h                   | 24 ++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev_version.map         |  1 +
 lib/librte_mbuf/rte_mbuf.c                      |  5 +++--
 15 files changed, 106 insertions(+), 3 deletions(-)
  
Santosh Shukla Oct. 2, 2017, 8:37 a.m. UTC | #14
On Sunday 01 October 2017 02:44 PM, Santosh Shukla wrote:
> v5:
> - Includes v4 minor review comment.
>   Patches rebased on upstream tip / commit id:5dce9fcdb2

ping, Thanks.
  

Patch

diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c
index 80fe21de3..461d81bde 100644
--- a/lib/librte_eal/bsdapp/eal/eal.c
+++ b/lib/librte_eal/bsdapp/eal/eal.c
@@ -112,6 +112,13 @@  struct internal_config internal_config;
 /* used by rte_rdtsc() */
 int rte_cycles_vmware_tsc_map;
 
+/* Return mbuf pool name */
+const char *
+rte_eal_mbuf_default_mempool_ops(void)
+{
+	return internal_config.mbuf_pool_name;
+}
+
 /* Return a pointer to the configuration structure */
 struct rte_config *
 rte_eal_get_configuration(void)
@@ -385,6 +392,16 @@  eal_parse_args(int argc, char **argv)
 			continue;
 
 		switch (opt) {
+		case OPT_MBUF_POOL_OPS_NUM:
+			ret = snprintf(internal_config.mbuf_pool_name,
+					sizeof(internal_config.mbuf_pool_name),
+					"%s", optarg);
+			if (ret < 0 || (uint32_t)ret >=
+			    sizeof(internal_config.mbuf_pool_name)) {
+				ret = -1;
+				goto out;
+			}
+			break;
 		case 'h':
 			eal_usage(prgname);
 			exit(EXIT_SUCCESS);
diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
index f689f0c8f..ed5f329cd 100644
--- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map
+++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
@@ -200,6 +200,7 @@  DPDK_17.08 {
 	rte_bus_find;
 	rte_bus_find_by_device;
 	rte_bus_find_by_name;
+	rte_eal_mbuf_default_mempool_ops;
 
 } DPDK_17.05;
 
diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c
index 56c368cde..9b0e73504 100644
--- a/lib/librte_eal/common/eal_common_options.c
+++ b/lib/librte_eal/common/eal_common_options.c
@@ -97,6 +97,7 @@  eal_long_options[] = {
 	{OPT_VFIO_INTR,         1, NULL, OPT_VFIO_INTR_NUM        },
 	{OPT_VMWARE_TSC_MAP,    0, NULL, OPT_VMWARE_TSC_MAP_NUM   },
 	{OPT_XEN_DOM0,          0, NULL, OPT_XEN_DOM0_NUM         },
+	{OPT_MBUF_POOL_OPS,     1, NULL, OPT_MBUF_POOL_OPS_NUM},
 	{0,                     0, NULL, 0                        }
 };
 
@@ -163,6 +164,9 @@  eal_reset_internal_config(struct internal_config *internal_cfg)
 #endif
 	internal_cfg->vmware_tsc_map = 0;
 	internal_cfg->create_uio_dev = 0;
+	snprintf(internal_cfg->mbuf_pool_name,
+		 sizeof(internal_cfg->mbuf_pool_name), "%s",
+		 RTE_MBUF_DEFAULT_MEMPOOL_OPS);
 }
 
 static int
@@ -1246,5 +1250,6 @@  eal_common_usage(void)
 	       "  --"OPT_NO_PCI"            Disable PCI\n"
 	       "  --"OPT_NO_HPET"           Disable HPET\n"
 	       "  --"OPT_NO_SHCONF"         No shared config (mmap'd files)\n"
+	       "  --"OPT_MBUF_POOL_OPS"     Pool ops name for mbuf to use\n"
 	       "\n", RTE_MAX_LCORE);
 }
diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h
index 7b7e8c887..e8a770a42 100644
--- a/lib/librte_eal/common/eal_internal_cfg.h
+++ b/lib/librte_eal/common/eal_internal_cfg.h
@@ -83,6 +83,7 @@  struct internal_config {
 	const char *hugefile_prefix;      /**< the base filename of hugetlbfs files */
 	const char *hugepage_dir;         /**< specific hugetlbfs directory to use */
 
+	char mbuf_pool_name[RTE_MBUF_POOL_OPS_NAMESIZE]; /**< mbuf pool name */
 	unsigned num_hugepage_sizes;      /**< how many sizes on this system */
 	struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES];
 };
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index a881c62e2..34e6417e4 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -83,6 +83,8 @@  enum {
 	OPT_VMWARE_TSC_MAP_NUM,
 #define OPT_XEN_DOM0          "xen-dom0"
 	OPT_XEN_DOM0_NUM,
+#define OPT_MBUF_POOL_OPS       "mbuf-pool-ops"
+	OPT_MBUF_POOL_OPS_NUM,
 	OPT_LONG_MAX_NUM
 };
 
diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h
index 0e7363d77..fcd212843 100644
--- a/lib/librte_eal/common/include/rte_eal.h
+++ b/lib/librte_eal/common/include/rte_eal.h
@@ -54,6 +54,8 @@  extern "C" {
 
 /* Maximum thread_name length. */
 #define RTE_MAX_THREAD_NAME_LEN 16
+/* Maximum length of mbuf pool ops name. */
+#define RTE_MBUF_POOL_OPS_NAMESIZE 32
 
 /**
  * The lcore role (used in RTE or not).
@@ -287,6 +289,15 @@  static inline int rte_gettid(void)
 	return RTE_PER_LCORE(_thread_id);
 }
 
+/**
+ * Ops to get default pool name for mbuf
+ *
+ * @return
+ *   returns default pool name.
+ */
+const char *
+rte_eal_mbuf_default_mempool_ops(void);
+
 #define RTE_INIT(func) \
 static void __attribute__((constructor, used)) func(void)
 
diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c
index b28bbab54..1bf4e882f 100644
--- a/lib/librte_eal/linuxapp/eal/eal.c
+++ b/lib/librte_eal/linuxapp/eal/eal.c
@@ -121,6 +121,13 @@  struct internal_config internal_config;
 /* used by rte_rdtsc() */
 int rte_cycles_vmware_tsc_map;
 
+/* Return mbuf pool name */
+const char *
+rte_eal_mbuf_default_mempool_ops(void)
+{
+	return internal_config.mbuf_pool_name;
+}
+
 /* Return a pointer to the configuration structure */
 struct rte_config *
 rte_eal_get_configuration(void)
@@ -610,6 +617,17 @@  eal_parse_args(int argc, char **argv)
 			internal_config.create_uio_dev = 1;
 			break;
 
+		case OPT_MBUF_POOL_OPS_NUM:
+			ret = snprintf(internal_config.mbuf_pool_name,
+					sizeof(internal_config.mbuf_pool_name),
+					"%s", optarg);
+			if (ret < 0 || (uint32_t)ret >=
+			    sizeof(internal_config.mbuf_pool_name)) {
+				ret = -1;
+				goto out;
+			}
+			break;
+
 		default:
 			if (opt < OPT_LONG_MIN_NUM && isprint(opt)) {
 				RTE_LOG(ERR, EAL, "Option %c is not supported "
diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map
index 202072189..e90e0e062 100644
--- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map
+++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map
@@ -205,6 +205,7 @@  DPDK_17.08 {
 	rte_bus_find;
 	rte_bus_find_by_device;
 	rte_bus_find_by_name;
+	rte_eal_mbuf_default_mempool_ops;
 
 } DPDK_17.05;
 
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 26a62b8e1..e1fc90ef3 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -157,6 +157,7 @@  rte_pktmbuf_pool_create(const char *name, unsigned n,
 {
 	struct rte_mempool *mp;
 	struct rte_pktmbuf_pool_private mbp_priv;
+	const char *mp_name;
 	unsigned elt_size;
 	int ret;
 
@@ -176,8 +177,8 @@  rte_pktmbuf_pool_create(const char *name, unsigned n,
 	if (mp == NULL)
 		return NULL;
 
-	ret = rte_mempool_set_ops_byname(mp,
-		RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
+	mp_name = rte_eal_mbuf_default_mempool_ops();
+	ret = rte_mempool_set_ops_byname(mp, mp_name, NULL);
 	if (ret != 0) {
 		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
 		rte_mempool_free(mp);