dumpcap: fix mbuf pool ring type

Message ID 20230804161604.61050-1-stephen@networkplumber.org (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Headers
Series dumpcap: fix mbuf pool ring type

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/intel-Functional success Functional PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/github-robot: build success github build: passed
ci/iol-testing success Testing PASS
ci/iol-aarch-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-unit-testing success Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS

Commit Message

Stephen Hemminger Aug. 4, 2023, 4:16 p.m. UTC
  The ring used to store mbufs needs to be multiple producer,
multiple consumer because multiple queues on multiple cores
might be allocating at the same time (consume), and in case of
ring full, the mbufs will be returned (multiple producer).

Bugzilla ID: 1271
Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 app/dumpcap/main.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
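
To make the concurrency requirement concrete, here is an illustrative sketch (not part of the patch, and simplified relative to the real pdump/dumpcap code paths; the function and variable names are made up):

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

/* Runs in capture callbacks on every capturing lcore of the primary
 * process, so several cores may allocate from the pool at the same
 * time (multi consumer on the pool's ring). */
static void
capture_one(const struct rte_mbuf *pkt, struct rte_mempool *mp,
            struct rte_ring *xfer)
{
        struct rte_mbuf *copy = rte_pktmbuf_copy(pkt, mp, 0, UINT32_MAX);

        if (copy == NULL)
                return;
        /* Transfer ring full: the copy is freed right here, on this lcore,
         * while dumpcap frees on its own lcore in drain() below
         * (multi producer on the pool's ring). */
        if (rte_ring_enqueue(xfer, copy) != 0)
                rte_pktmbuf_free(copy);
}

/* Runs in the dumpcap secondary process. */
static void
drain(struct rte_ring *xfer)
{
        struct rte_mbuf *m;

        while (rte_ring_dequeue(xfer, (void **)&m) == 0) {
                /* ... write the packet to the capture file ... */
                rte_pktmbuf_free(m);
        }
}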
  

Comments

Morten Brørup Aug. 5, 2023, 9:05 a.m. UTC | #1
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Friday, 4 August 2023 18.16
> 
> The ring used to store mbufs needs to be multiple producer,
> multiple consumer because multiple queues on multiple cores
> might be allocating at the same time (consume), and in case of
> ring full, the mbufs will be returned (multiple producer).
> 
> Bugzilla ID: 1271
> Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---

Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
  
David Marchand Oct. 2, 2023, 7:33 a.m. UTC | #2
On Fri, Aug 4, 2023 at 6:16 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> The ring used to store mbufs needs to be multiple producer,
> multiple consumer because multiple queues on multiple cores
> might be allocating at the same time (consume), and in case of
> ring full, the mbufs will be returned (multiple producer).

I think I get the idea, but can you rephrase please?


>
> Bugzilla ID: 1271
> Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")

This Fixes: tag looks wrong.


> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
>  app/dumpcap/main.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
> index 64294bbfb3e6..991174e95022 100644
> --- a/app/dumpcap/main.c
> +++ b/app/dumpcap/main.c
> @@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
>                         data_size = mbuf_size;
>         }
>
> -       mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
> -                                           MBUF_POOL_CACHE_SIZE, 0,
> -                                           data_size,
> -                                           rte_socket_id(), "ring_mp_sc");
> +       mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
> +                                    MBUF_POOL_CACHE_SIZE, 0,
> +                                    data_size, rte_socket_id());

Switching to rte_pktmbuf_pool_create() still leaves the user with the
possibility to shoot himself in the foot (I was thinking of setting
some --mbuf-pool-ops-name EAL option).

This application has explicit requirements in terms of concurrent
access (and I don't think the mempool library exposes per driver
capabilities in that regard).
The application was enforcing the use of mempool/ring so far.

I think it is safer to go with an explicit
rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
WDYT?


>         if (mp == NULL)
>                 rte_exit(EXIT_FAILURE,
>                          "Mempool (%s) creation failed: %s\n", pool_name,
> --
> 2.39.2
>

Thanks.
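
For reference, the alternative suggested above would keep the explicit ops name and only switch it to the thread-safe ring variant, i.e. a sketch of the hunk quoted above along these lines:

-	mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
-					    MBUF_POOL_CACHE_SIZE, 0,
-					    data_size,
-					    rte_socket_id(), "ring_mp_sc");
+	mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
+					    MBUF_POOL_CACHE_SIZE, 0,
+					    data_size,
+					    rte_socket_id(), "ring_mp_mc");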
  
Morten Brørup Oct. 2, 2023, 8:42 a.m. UTC | #3
> From: David Marchand [mailto:david.marchand@redhat.com]
> Sent: Monday, 2 October 2023 09.34
> 
> On Fri, Aug 4, 2023 at 6:16 PM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> >
> > The ring used to store mbufs needs to be multiple producer,
> > multiple consumer because multiple queues on multiple cores
> > might be allocating at the same time (consume), and in case of
> > ring full, the mbufs will be returned (multiple producer).
> 
> I think I get the idea, but can you rephrase please?
> 
> 
> >
> > Bugzilla ID: 1271
> > Fixes: cb2440fd77af ("dumpcap: fix mbuf pool ring type")
> 
> This Fixes: tag looks wrong.
> 
> 
> > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> > ---
> >  app/dumpcap/main.c | 7 +++----
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> >
> > diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
> > index 64294bbfb3e6..991174e95022 100644
> > --- a/app/dumpcap/main.c
> > +++ b/app/dumpcap/main.c
> > @@ -691,10 +691,9 @@ static struct rte_mempool *create_mempool(void)
> >                         data_size = mbuf_size;
> >         }
> >
> > -       mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
> > -                                           MBUF_POOL_CACHE_SIZE, 0,
> > -                                           data_size,
> > -                                           rte_socket_id(), "ring_mp_sc");
> > +       mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
> > +                                    MBUF_POOL_CACHE_SIZE, 0,
> > +                                    data_size, rte_socket_id());
> 
> Switching to rte_pktmbuf_pool_create() still leaves the user with the
> possibility to shoot himself in the foot (I was thinking of setting
> some --mbuf-pool-ops-name EAL option).
> 
> This application has explicit requirements in terms of concurrent
> access (and I don't think the mempool library exposes per driver
> capabilities in that regard).
> The application was enforcing the use of mempool/ring so far.
> 
> I think it is safer to go with an explicit
> rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> WDYT?

<feature creep>
Or perhaps one of "ring_mt_rts" or "ring_mt_hts", if any of those mbuf pool drivers are specified on the command line; otherwise fall back to "ring_mp_mc".

Actually, I prefer Stephen's suggestion of using the default mbuf pool driver. The option is there for a reason.

However, David is right: We want to prevent the user from using a thread-unsafe mempool driver in this use case.

And I guess there might be other use cases than this one, where a thread-safe mempool driver is required. So adding a generalized function to get the "upgraded" (i.e. thread safe) variant of a mempool driver would be nice.
</feature creep>

Feel free to ignore my suggested feature creep, and go ahead with David's suggestion instead.
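
Purely to illustrate the feature-creep idea above, such a helper could look like the sketch below; it is hypothetical (no function like this exists in DPDK, and the name is made up):

#include <string.h>

/* Hypothetical helper, not a DPDK API: map a ring-based mempool ops name
 * to its fully thread-safe counterpart; anything already thread safe
 * (e.g. "ring_mp_mc", "ring_mt_rts", "ring_mt_hts") or unknown is
 * returned unchanged. */
static const char *
thread_safe_pool_ops(const char *ops_name)
{
        if (strcmp(ops_name, "ring_sp_sc") == 0 ||
            strcmp(ops_name, "ring_mp_sc") == 0 ||
            strcmp(ops_name, "ring_sp_mc") == 0)
                return "ring_mp_mc";
        return ops_name;
}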

> 
> 
> >         if (mp == NULL)
> >                 rte_exit(EXIT_FAILURE,
> >                          "Mempool (%s) creation failed: %s\n", pool_name,
> > --
> > 2.39.2
> >
> 
> Thanks.
> 
> --
> David Marchand
  
Stephen Hemminger Nov. 6, 2023, 7:23 p.m. UTC | #4
On Mon, 2 Oct 2023 10:42:53 +0200
Morten Brørup <mb@smartsharesystems.com> wrote:

> > Switching to rte_pktmbuf_pool_create() still leaves the user with the
> > possibility to shoot himself in the foot (I was thinking of setting
> > some --mbuf-pool-ops-name EAL option).
> > 
> > This application has explicit requirements in terms of concurrent
> > access (and I don't think the mempool library exposes per driver
> > capabilities in that regard).
> > The application was enforcing the use of mempool/ring so far.
> > 
> > I think it is safer to go with an explicit
> > rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> > WDYT?  
> 
> <feature creep>
> Or perhaps one of "ring_mt_rts" or "ring_mt_hts", if any of those mbuf pool drivers are specified on the command line; otherwise fall back to "ring_mp_mc".
> 
> Actually, I prefer Stephen's suggestion of using the default mbuf pool driver. The option is there for a reason.
> 
> However, David is right: We want to prevent the user from using a thread-unsafe mempool driver in this use case.
> 
> And I guess there might be other use cases than this one, where a thread-safe mempool driver is required. So adding a generalized function to get the "upgraded" (i.e. thread safe) variant of a mempool driver would be nice.
> </feature creep>

If the user overrides the default mbuf pool type, then it will need to be thread safe
for the general driver case as well (or they are running on a single CPU).


I think the patch should be applied as is.
  
Stephen Hemminger Nov. 6, 2023, 7:24 p.m. UTC | #5
On Mon, 2 Oct 2023 09:33:50 +0200
David Marchand <david.marchand@redhat.com> wrote:

> Switching to rte_pktmbuf_pool_create() still leaves the user with the
> possibility to shoot himself in the foot (I was thinking of setting
> some --mbuf-pool-ops-name EAL option).
> 
> This application has explicit requirements in terms of concurrent
> access (and I don't think the mempool library exposes per driver
> capabilities in that regard).
> The application was enforcing the use of mempool/ring so far.
> 
> I think it is safer to go with an explicit
> rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> WDYT?

The dumpcap command does not have EAL command line arguments that can
be changed by the user.
  
Morten Brørup Nov. 6, 2023, 9:50 p.m. UTC | #6
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Monday, 6 November 2023 20.24
> 
> On Mon, 2 Oct 2023 10:42:53 +0200
> Morten Brørup <mb@smartsharesystems.com> wrote:
> 
> > > Switching to rte_pktmbuf_pool_create() still leaves the user with the
> > > possibility to shoot himself in the foot (I was thinking of setting
> > > some --mbuf-pool-ops-name EAL option).
> > >
> > > This application has explicit requirements in terms of concurrent
> > > access (and I don't think the mempool library exposes per driver
> > > capabilities in that regard).
> > > The application was enforcing the use of mempool/ring so far.
> > >
> > > I think it is safer to go with an explicit
> > > rte_pktmbuf_pool_create_by_ops(... "ring_mp_mc").
> > > WDYT?
> >
> > <feature creep>
> > Or perhaps one of "ring_mt_rts" or "ring_mt_hts", if any of those
> > mbuf pool drivers are specified on the command line; otherwise fall
> > back to "ring_mp_mc".
> >
> > Actually, I prefer Stephen's suggestion of using the default mbuf
> > pool driver. The option is there for a reason.
> >
> > However, David is right: We want to prevent the user from using a
> > thread-unsafe mempool driver in this use case.
> >
> > And I guess there might be other use cases than this one, where a
> > thread-safe mempool driver is required. So adding a generalized
> > function to get the "upgraded" (i.e. thread safe) variant of a mempool
> > driver would be nice.
> > </feature creep>
> 
> If the user overrides the default mbuf pool type, then it will need to
> be thread safe for the general driver case as well (or they are running
> on a single CPU).

If they have chosen a thread safe pool type, using the chosen type will work for dumpcap too. I think we all agree on that.

I'm not sure I understand your single-cpu argument, so let me try to paraphrase:

If their application only uses one EAL thread, and they have chosen "ring_sp_sc", dumpcap also works, because mempool accesses are still single-threaded. Is this correctly understood?

Or are you saying that if they want to use dumpcap, they must choose a thread safe pool type for their application (regardless if the application is single-threaded or not)?

> 
> 
> I think the patch should be applied as is.
  
Stephen Hemminger Nov. 7, 2023, 2:36 a.m. UTC | #7
On Mon, 6 Nov 2023 22:50:50 +0100
Morten Brørup <mb@smartsharesystems.com> wrote:

> > > And I guess there might be other use cases than this one, where a
> > > thread-safe mempool driver is required. So adding a generalized
> > > function to get the "upgraded" (i.e. thread safe) variant of a mempool
> > > driver would be nice.
> > > </feature creep>
> > 
> > If the user overrides the default mbuf pool type, then it will need to
> > be thread safe for the general driver case as well (or they are running
> > on a single CPU).
> 
> If they have chosen a thread safe pool type, using the chosen type will work for dumpcap too. I think we all agree on that.
> 
> I'm not sure I understand your single-cpu argument, so let me try to paraphrase:
> 
> If their application only uses one EAL thread, and they have chosen "ring_sp_sc", dumpcap also works, because mempool accesses are still single-threaded. Is this correctly understood?
> 
> Or are you saying that if they want to use dumpcap, they must choose a thread safe pool type for their application (regardless if the application is single-threaded or not)?

There is no EAL-style command line in dumpcap.
This is intentional.
QED: overriding the default pool type is not going to be possible.
  
Morten Brørup Nov. 7, 2023, 7:22 a.m. UTC | #8
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Tuesday, 7 November 2023 03.36
> 
> On Mon, 6 Nov 2023 22:50:50 +0100
> Morten Brørup <mb@smartsharesystems.com> wrote:
> 
> > > > And I guess there might be other use cases than this one, where a
> > > > thread-safe mempool driver is required. So adding a generalized
> > > > function to get the "upgraded" (i.e. thread safe) variant of a
> > > > mempool driver would be nice.
> > > > </feature creep>
> > >
> > > If the user overrides the default mbuf pool type, then it will need to
> > > be thread safe for the general driver case as well (or they are running
> > > on a single CPU).
> >
> > If they have chosen a thread safe pool type, using the chosen type will
> > work for dumpcap too. I think we all agree on that.
> >
> > I'm not sure I understand your single-cpu argument, so let me try to
> > paraphrase:
> >
> > If their application only uses one EAL thread, and they have chosen
> > "ring_sp_sc", dumpcap also works, because mempool accesses are still
> > single-threaded. Is this correctly understood?
> >
> > Or are you saying that if they want to use dumpcap, they must choose
> > a thread safe pool type for their application (regardless if the
> > application is single-threaded or not)?
>
> There is no EAL-style command line in dumpcap.
> This is intentional.
> QED: overriding the default pool type is not going to be possible.

The preferred mbuf pool type can be configured in the primary process by EAL params. If so configured, it is stored in a memzone named "mbuf_user_pool_ops".
And if it is set there, the secondary process will also use it as its preferred mbuf pool type.
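
As a minimal sketch of how a secondary process can inspect that setting with the existing mbuf pool-ops helpers (the function name and printout are only for illustration):

#include <stdio.h>
#include <rte_mbuf_pool_ops.h>

static void
show_pool_ops(void)
{
        /* Ops name stored in the "mbuf_user_pool_ops" memzone by the
         * primary (via --mbuf-pool-ops-name), or NULL if never set. */
        const char *user = rte_mbuf_user_mempool_ops();
        /* What rte_pktmbuf_pool_create() will actually use. */
        const char *best = rte_mbuf_best_mempool_ops();

        printf("user ops: %s, effective ops: %s\n",
               user ? user : "(none)", best);
}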
  
Stephen Hemminger Nov. 7, 2023, 4:41 p.m. UTC | #9
On Tue, 7 Nov 2023 08:22:37 +0100
Morten Brørup <mb@smartsharesystems.com> wrote:

> > >
> > > Or are you saying that if they want to use dumpcap, they must choose
> > > a thread safe pool type for their application (regardless if the
> > > application is single-threaded or not)?
> >
> > There is no EAL-style command line in dumpcap.
> > This is intentional.
> > QED: overriding the default pool type is not going to be possible.
> 
> The preferred mbuf pool type can be configured in the primary process by EAL params. If so configured, it is stored in a memzone named "mbuf_user_pool_ops".
> And if it is set there, the secondary process will also use it as its preferred mbuf pool type.

If the user decides to use a thread-unsafe mempool, wouldn't it break in every PMD as well?
  
Stephen Hemminger Nov. 7, 2023, 5 p.m. UTC | #10
On Tue, 7 Nov 2023 08:22:37 +0100
Morten Brørup <mb@smartsharesystems.com> wrote:

> > > Or are you saying that if they want to use dumpcap, they must choose
> > > a thread safe pool type for their application (regardless if the
> > > application is single-threaded or not)?
> >
> > There is no EAL-style command line in dumpcap.
> > This is intentional.
> > QED: overriding the default pool type is not going to be possible.
> 
> The preferred mbuf pool type can be configured in the primary process by EAL params. If so configured, it is stored in a memzone named "mbuf_user_pool_ops".
> And if it is set there, the secondary process will also use it as its preferred mbuf pool type.

I notice that no other app or example is using the create_by_ops except pdump/pcapng/dumpcap.

~/DPDK/main/examples $ git grep rte_pktmbuf_pool_create_by_ops


~/DPDK/main/app $ git grep rte_pktmbuf_pool_create_by_ops
dumpcap/main.c: mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
pdump/main.c:                   mbuf_pool = rte_pktmbuf_pool_create_by_ops(mempool_name,
test/test_pcapng.c:     mp = rte_pktmbuf_pool_create_by_ops("pcapng_test_pool", IOV_MAX + NUM_PACKETS,
  
Morten Brørup Nov. 7, 2023, 5:38 p.m. UTC | #11
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Tuesday, 7 November 2023 17.41
> 
> On Tue, 7 Nov 2023 08:22:37 +0100
> Morten Brørup <mb@smartsharesystems.com> wrote:
> 
> > > >
> > > > Or are you saying that if they want to use dumpcap, they must choose
> > > > a thread safe pool type for their application (regardless if the
> > > > application is single-threaded or not)?
> > >
> > > There is no EAL-style command line in dumpcap.
> > > This is intentional.
> > > QED: overriding the default pool type is not going to be possible.
> >
> > The preferred mbuf pool type can be configured in the primary process by
> > EAL params. If so configured, it is stored in a memzone named
> > "mbuf_user_pool_ops".
> > And if it is set there, the secondary process will also use it as its
> > preferred mbuf pool type.
> 
> If the user decides to use a thread-unsafe mempool, wouldn't it break in
> every PMD as well?

Yes, except if the application only uses one thread for packet processing. Then thread safety is not necessary.
  
Stephen Hemminger Nov. 8, 2023, 4:50 p.m. UTC | #12
On Tue, 7 Nov 2023 18:38:32 +0100
Morten Brørup <mb@smartsharesystems.com> wrote:

> > If the user decides to use a thread-unsafe mempool, wouldn't it break in
> > every PMD as well?
> 
> Yes, except if the application only uses one thread for packet processing. Then thread safety is not necessary.
> 


If the application only used a single thread, then a single-consumer pool would
work fine for dumpcap. Only the primary thread would ever allocate. But both the
secondary and the primary could be freeing mbufs.
  
Morten Brørup Nov. 8, 2023, 5:43 p.m. UTC | #13
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Wednesday, 8 November 2023 17.51
> 
> On Tue, 7 Nov 2023 18:38:32 +0100
> Morten Brørup <mb@smartsharesystems.com> wrote:
> 
> > > If the user decides to use a thread-unsafe mempool, wouldn't it break
> > > in every PMD as well?
> >
> > Yes, except if the application only uses one thread for packet
> > processing. Then thread safety is not necessary.
> >
> 
> 
> If the application only used a single thread, then a single-consumer pool
> would work fine for dumpcap. Only the primary thread would ever allocate.
> But both the secondary and the primary could be freeing mbufs.

I think it is an acceptable risk. Perhaps mention somewhere that dumpcap breaks the application if it uses non-thread-safe mbuf ops.

Acked-by: Morten Brørup <mb@smartsharesystems.com>

Optionally, dumpcap could compare the return value of rte_mbuf_best_mempool_ops() [1] with the few known unsafe ops. However, this is probably going to be forgotten if new non-thread-safe mempool ops are ever added to DPDK.

[1]: https://elixir.bootlin.com/dpdk/latest/source/lib/mbuf/rte_mbuf_pool_ops.c#L88
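
A sketch of that optional check, assuming the only non-thread-safe ops of interest are the ring variants that lack multi-producer or multi-consumer support (the function name is made up):

#include <stdbool.h>
#include <string.h>
#include <rte_mbuf_pool_ops.h>

static bool
pool_ops_thread_safe(void)
{
        const char *ops = rte_mbuf_best_mempool_ops();

        /* "ring_mp_mc", "ring_mt_rts", "ring_mt_hts" and HW mempools pass. */
        return strcmp(ops, "ring_sp_sc") != 0 &&
               strcmp(ops, "ring_mp_sc") != 0 &&
               strcmp(ops, "ring_sp_mc") != 0;
}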
  

Patch

diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index 64294bbfb3e6..991174e95022 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -691,10 +691,9 @@  static struct rte_mempool *create_mempool(void)
 			data_size = mbuf_size;
 	}
 
-	mp = rte_pktmbuf_pool_create_by_ops(pool_name, num_mbufs,
-					    MBUF_POOL_CACHE_SIZE, 0,
-					    data_size,
-					    rte_socket_id(), "ring_mp_sc");
+	mp = rte_pktmbuf_pool_create(pool_name, num_mbufs,
+				     MBUF_POOL_CACHE_SIZE, 0,
+				     data_size, rte_socket_id());
 	if (mp == NULL)
 		rte_exit(EXIT_FAILURE,
 			 "Mempool (%s) creation failed: %s\n", pool_name,