app/testpmd: support unequal number of RXQ and TXQ

Message ID 20191211053009.14906-1-hemant.agrawal@nxp.com
State Changes Requested
Delegated to: Ferruh Yigit
Series
  • app/testpmd: support unequal number of RXQ and TXQ

Checks

Context Check Description
ci/Intel-compilation fail Compilation issues
ci/travis-robot warning Travis build: failed
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-testing success Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/checkpatch success coding style OK

Commit Message

Hemant Agrawal Dec. 11, 2019, 5:30 a.m. UTC
From: Jun Yang <jun.yang@nxp.com>

The existing forwarding mode uses the minimum of rxq and txq
as the total number of queues.
It uses the same index for txq as for rxq.
However, in some scenarios, especially for flow control,
the number of rxq and txq can be different.
This patch makes the txq a function of the rxq for all such
scenarios instead of keeping a 1:1 relationship between the two.

Now packets from all RXQs can be forwarded to TXQs.

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 app/test-pmd/config.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

Comments

Ananyev, Konstantin Dec. 11, 2019, 9:59 a.m. UTC | #1
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Hemant Agrawal
> Sent: Wednesday, December 11, 2019 5:30 AM
> To: dev@dpdk.org
> Cc: Jun Yang <jun.yang@nxp.com>
> Subject: [dpdk-dev] [PATCH] app/testpmd: support unequal number of RXQ and TXQ
> 
> From: Jun Yang <jun.yang@nxp.com>
> 
> The existing forwarding mode uses the minimum of rxq and txq
> as the total number of queues.
> It uses the same index for txq as for rxq.
> However, in some scenarios, especially for flow control,
> the number of rxq and txq can be different.
> This patch makes the txq a function of the rxq for all such
> scenarios instead of keeping a 1:1 relationship between the two.
> 
> Now packets from all RXQs can be forwarded to TXQs.
> 
> Signed-off-by: Jun Yang <jun.yang@nxp.com>
> ---
>  app/test-pmd/config.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index d59968278..efa409453 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -2130,8 +2130,6 @@ rss_fwd_config_setup(void)
>  	streamid_t  sm_id;
> 
>  	nb_q = nb_rxq;
> -	if (nb_q > nb_txq)
> -		nb_q = nb_txq;
>  	cur_fwd_config.nb_fwd_lcores = (lcoreid_t) nb_fwd_lcores;
>  	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
>  	cur_fwd_config.nb_fwd_streams =
> @@ -2154,7 +2152,7 @@ rss_fwd_config_setup(void)
>  		fs->rx_port = fwd_ports_ids[rxp];
>  		fs->rx_queue = rxq;
>  		fs->tx_port = fwd_ports_ids[txp];
> -		fs->tx_queue = rxq;
> +		fs->tx_queue = (rxq % nb_txq);

But does it mean that now 2 lcores can use the same TX queue?
If so, then how is it supposed to work?

>  		fs->peer_addr = fs->tx_port;
>  		fs->retry_enabled = retry_enabled;
>  		rxp++;
> --
> 2.17.1
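To make the concern above concrete: with more RX queues than TX queues,
two forwarding streams, potentially polled by different lcores, now
resolve to the same TX queue. A hedged sketch of the resulting call
pattern (port and queue numbers are hypothetical, and fwd_lcore is a
made-up stand-in for testpmd's forwarding loop, not testpmd code):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SZ 32

/*
 * With nb_rxq = 4 and nb_txq = 2, streams map as 0->0, 1->1, 2->0, 3->1,
 * so the lcores polling RX queues 0 and 2 both transmit on TX queue 0.
 */
static int
fwd_lcore(void *arg)
{
	uint16_t rx_queue = *(uint16_t *)arg; /* 0 on one lcore, 2 on another */
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb_rx;

	for (;;) {
		nb_rx = rte_eth_rx_burst(0, rx_queue, pkts, BURST_SZ);
		/*
		 * Both lcores call tx_burst on (port 0, queue 0). The
		 * ethdev API only permits concurrent TX on a single queue
		 * when the PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE;
		 * otherwise this is a data race.
		 */
		rte_eth_tx_burst(0, rx_queue % 2, pkts, nb_rx);
	}
	return 0;
}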
Jerin Jacob Dec. 11, 2019, 10:26 a.m. UTC | #2
On Wed, Dec 11, 2019 at 3:29 PM Ananyev, Konstantin
<konstantin.ananyev@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Hemant Agrawal
> > Sent: Wednesday, December 11, 2019 5:30 AM
> > To: dev@dpdk.org
> > Cc: Jun Yang <jun.yang@nxp.com>
> > Subject: [dpdk-dev] [PATCH] app/testpmd: support unequal number of RXQ and TXQ
> >
> > From: Jun Yang <jun.yang@nxp.com>
> >
> > The existing forwarding mode uses the minimum of rxq and txq
> > as the total number of queues.
> > It uses the same index for txq as for rxq.
> > However, in some scenarios, especially for flow control,
> > the number of rxq and txq can be different.
> > This patch makes the txq a function of the rxq for all such
> > scenarios instead of keeping a 1:1 relationship between the two.
> >
> > Now packets from all RXQs can be forwarded to TXQs.

Allow this feature only for DEV_TX_OFFLOAD_MT_LOCKFREE devices.
Please probe the DEV_TX_OFFLOAD_MT_LOCKFREE capability first to
avoid breaking the contract on other devices.

> >
> > Signed-off-by: Jun Yang <jun.yang@nxp.com>
> > ---
> >  app/test-pmd/config.c | 4 +---
> >  1 file changed, 1 insertion(+), 3 deletions(-)
> >
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > index d59968278..efa409453 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -2130,8 +2130,6 @@ rss_fwd_config_setup(void)
> >       streamid_t  sm_id;
> >
> >       nb_q = nb_rxq;
> > -     if (nb_q > nb_txq)
> > -             nb_q = nb_txq;
> >       cur_fwd_config.nb_fwd_lcores = (lcoreid_t) nb_fwd_lcores;
> >       cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
> >       cur_fwd_config.nb_fwd_streams =
> > @@ -2154,7 +2152,7 @@ rss_fwd_config_setup(void)
> >               fs->rx_port = fwd_ports_ids[rxp];
> >               fs->rx_queue = rxq;
> >               fs->tx_port = fwd_ports_ids[txp];
> > -             fs->tx_queue = rxq;
> > +             fs->tx_queue = (rxq % nb_txq);
>
> But does it mean that now 2 lcores can use the same TX queue?
> If so, then how is it supposed to work?

See above.


>
> >               fs->peer_addr = fs->tx_port;
> >               fs->retry_enabled = retry_enabled;
> >               rxp++;
> > --
> > 2.17.1
>
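Jerin's suggested gating could look roughly like the following. This is a
sketch against the 19.11-era ethdev API; port_supports_shared_txq is a
made-up helper name, not an existing testpmd function.

#include <rte_ethdev.h>

/*
 * Return non-zero if several forwarding lcores may safely share one
 * TX queue on this port, i.e. the PMD advertises lock-free
 * multi-thread TX.
 */
static int
port_supports_shared_txq(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;

	return !!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MT_LOCKFREE);
}

rss_fwd_config_setup() could then keep the old min(nb_rxq, nb_txq)
behaviour whenever this returns 0 for a forwarding port.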
Hemant Agrawal Dec. 12, 2019, 11:20 a.m. UTC | #3
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>

> On Wed, Dec 11, 2019 at 3:29 PM Ananyev, Konstantin
> <konstantin.ananyev@intel.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: dev <dev-bounces@dpdk.org> On Behalf Of Hemant Agrawal
> > > Sent: Wednesday, December 11, 2019 5:30 AM
> > > To: dev@dpdk.org
> > > Cc: Jun Yang <jun.yang@nxp.com>
> > > Subject: [dpdk-dev] [PATCH] app/testpmd: support unequal number of
> > > RXQ and TXQ
> > >
> > > From: Jun Yang <jun.yang@nxp.com>
> > >
> > > The existing forwarding mode uses the minimum of rxq and txq
> > > as the total number of queues.
> > > It uses the same index for txq as for rxq.
> > > However, in some scenarios, especially for flow control,
> > > the number of rxq and txq can be different.
> > > This patch makes the txq a function of the rxq for all such
> > > scenarios instead of keeping a 1:1 relationship between the two.
> > >
> > > Now packets from all RXQs can be forwarded to TXQs.
> 
> Allow this feature only for DEV_TX_OFFLOAD_MT_LOCKFREE devices.
> Please probe the DEV_TX_OFFLOAD_MT_LOCKFREE capability first to
> avoid breaking the contract on other devices.

[Hemant] Agree with the suggestion.

> 
> > >
> > > Signed-off-by: Jun Yang <jun.yang@nxp.com>
> > > ---
> > >  app/test-pmd/config.c | 4 +---
> > >  1 file changed, 1 insertion(+), 3 deletions(-)
> > >
> > > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
> > > d59968278..efa409453 100644
> > > --- a/app/test-pmd/config.c
> > > +++ b/app/test-pmd/config.c
> > > @@ -2130,8 +2130,6 @@ rss_fwd_config_setup(void)
> > >       streamid_t  sm_id;
> > >
> > >       nb_q = nb_rxq;
> > > -     if (nb_q > nb_txq)
> > > -             nb_q = nb_txq;
> > >       cur_fwd_config.nb_fwd_lcores = (lcoreid_t) nb_fwd_lcores;
> > >       cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
> > >       cur_fwd_config.nb_fwd_streams = @@ -2154,7 +2152,7 @@
> > > rss_fwd_config_setup(void)
> > >               fs->rx_port = fwd_ports_ids[rxp];
> > >               fs->rx_queue = rxq;
> > >               fs->tx_port = fwd_ports_ids[txp];
> > > -             fs->tx_queue = rxq;
> > > +             fs->tx_queue = (rxq % nb_txq);
> >
> > But does it mean that now 2 lcores can use the same TX queue?
> > If so, then how is it supposed to work?
> 
> See above.
> 
> 
> >
> > >               fs->peer_addr = fs->tx_port;
> > >               fs->retry_enabled = retry_enabled;
> > >               rxp++;
> > > --
> > > 2.17.1
> >

Patch

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index d59968278..efa409453 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2130,8 +2130,6 @@  rss_fwd_config_setup(void)
 	streamid_t  sm_id;
 
 	nb_q = nb_rxq;
-	if (nb_q > nb_txq)
-		nb_q = nb_txq;
 	cur_fwd_config.nb_fwd_lcores = (lcoreid_t) nb_fwd_lcores;
 	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
 	cur_fwd_config.nb_fwd_streams =
@@ -2154,7 +2152,7 @@  rss_fwd_config_setup(void)
 		fs->rx_port = fwd_ports_ids[rxp];
 		fs->rx_queue = rxq;
 		fs->tx_port = fwd_ports_ids[txp];
-		fs->tx_queue = rxq;
+		fs->tx_queue = (rxq % nb_txq);
 		fs->peer_addr = fs->tx_port;
 		fs->retry_enabled = retry_enabled;
 		rxp++;