[00/13] add hairpin feature

Message ID: 1569479349-36962-1-git-send-email-orika@mellanox.com (mailing list archive)

Ori Kam Sept. 26, 2019, 6:28 a.m. UTC
This patch set implements the hairpin feature.
The hairpin feature was introduced in RFC [1].

The hairpin feature (an alternative name could be "forward") acts as a
"bump on the wire", meaning that a packet received from the wire can be
modified using offloaded actions and then sent back to the wire without
application intervention, which saves CPU cycles.

The hairpin is the inverse of loopback, in which the application
sends a packet and then receives it again
without it being sent to the wire.

The hairpin can be used by a number of different VNFs, for example a
load balancer, a gateway and so on.

As can be seen from the description above, a hairpin is basically an Rx
queue connected to a Tx queue.

During the design phase I considered two ways to implement this
feature: the first is adding a new rte_flow action, and the second
is creating a special kind of queue.

The advantages of using the queue approach:
1. More control for the application, e.g. over the queue depth (the
amount of memory that should be used).
2. Enables QoS. QoS is normally a parameter of a queue, so with this
approach it is easy to integrate with such systems.
3. Native integration with the rte_flow API. Simply setting the target
queue/RSS to a hairpin queue results in the traffic being routed
to the hairpin queue.
4. Enables queue offloads.

Each hairpin Rxq can be connected to one Txq or to a number of Txqs,
which can belong to different ports if the PMD supports it. The same
goes the other way: each hairpin Txq can be connected to one or more
Rxqs. This is the reason that both the Txq setup and the Rxq setup take
the hairpin configuration structure.

From the PMD perspective, the number of Rxqs/Txqs is the total of
standard queues plus hairpin queues.

To configure a hairpin queue, the user should call
rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup instead
of the normal queue setup functions.

The hairpin queues are not part of the normal RSS functions.

To use the queues, the user simply creates a flow that points to
queue/RSS actions whose queues are hairpin queues.
The reasons for adding two new functions for hairpin queue setup are:
1. Avoid an API break.
2. Avoid extra and unused parameters.
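
As an illustration, here is a minimal setup sketch. It is hedged: the
struct rte_eth_hairpin_conf field names (peer_count plus a peers[]
array of {port, queue} pairs) follow the ethdev patch of this series,
error handling is abbreviated, and the queue indices are only an
example.

#include <rte_ethdev.h>

/* Example: set up one hairpin Rx/Tx queue pair on 'port_id'.
 * From the PMD perspective the queue counts passed earlier to
 * rte_eth_dev_configure() must already include the hairpin queues,
 * so the pair here should use indices after the standard queues.
 */
static int
setup_hairpin_pair(uint16_t port_id, uint16_t hairpin_rxq,
		   uint16_t hairpin_txq, uint16_t nb_desc)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		/* The peer of the hairpin Rxq is the hairpin Txq; a
		 * different port is possible if the PMD supports it. */
		.peers[0] = { .port = port_id, .queue = hairpin_txq },
	};
	int ret;

	ret = rte_eth_rx_hairpin_queue_setup(port_id, hairpin_rxq,
					     nb_desc, &conf);
	if (ret != 0)
		return ret;

	/* The Tx side peer points back at the hairpin Rx queue. */
	conf.peers[0].queue = hairpin_rxq;
	return rte_eth_tx_hairpin_queue_setup(port_id, hairpin_txq,
					      nb_desc, &conf);
}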


This series must be applied after series [2].

[1] https://inbox.dpdk.org/dev/1565703468-55617-1-git-send-email-orika@mellanox.com/
[2] https://inbox.dpdk.org/dev/1569398015-6027-1-git-send-email-viacheslavo@mellanox.com/

Cc: wenzhuo.lu@intel.com
Cc: bernard.iremonger@intel.com
Cc: thomas@monjalon.net
Cc: ferruh.yigit@intel.com
Cc: arybchenko@solarflare.com
Cc: viacheslavo@mellanox.com


Ori Kam (13):
  ethdev: support setup function for hairpin queue
  net/mlx5: query hca hairpin capabilities
  net/mlx5: support Rx hairpin queues
  net/mlx5: prepare txq to work with different types
  net/mlx5: support Tx hairpin queues
  app/testpmd: add hairpin support
  net/mlx5: add hairpin binding function
  net/mlx5: add support for hairpin hrxq
  net/mlx5: add internal tag item and action
  net/mlx5: add id generation function
  net/mlx5: add default flows for hairpin
  net/mlx5: split hairpin flows
  doc: add hairpin feature

 app/test-pmd/parameters.c                |  12 +
 app/test-pmd/testpmd.c                   |  59 ++++-
 app/test-pmd/testpmd.h                   |   1 +
 doc/guides/rel_notes/release_19_11.rst   |   5 +
 drivers/net/mlx5/mlx5.c                  | 160 ++++++++++++-
 drivers/net/mlx5/mlx5.h                  |  65 ++++-
 drivers/net/mlx5/mlx5_devx_cmds.c        | 194 +++++++++++++++
 drivers/net/mlx5/mlx5_flow.c             | 393 ++++++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_flow.h             |  73 +++++-
 drivers/net/mlx5/mlx5_flow_dv.c          | 231 +++++++++++++++++-
 drivers/net/mlx5/mlx5_flow_verbs.c       |  11 +-
 drivers/net/mlx5/mlx5_prm.h              | 127 +++++++++-
 drivers/net/mlx5/mlx5_rxq.c              | 323 ++++++++++++++++++++++---
 drivers/net/mlx5/mlx5_rxtx.c             |   2 +-
 drivers/net/mlx5/mlx5_rxtx.h             |  72 +++++-
 drivers/net/mlx5/mlx5_trigger.c          | 134 ++++++++++-
 drivers/net/mlx5/mlx5_txq.c              | 309 ++++++++++++++++++++----
 lib/librte_ethdev/rte_ethdev.c           | 213 +++++++++++++++++
 lib/librte_ethdev/rte_ethdev.h           | 145 ++++++++++++
 lib/librte_ethdev/rte_ethdev_core.h      |  18 ++
 lib/librte_ethdev/rte_ethdev_version.map |   4 +
 21 files changed, 2424 insertions(+), 127 deletions(-)
  

Comments

Andrew Rybchenko Sept. 26, 2019, 12:32 p.m. UTC | #1
On 9/26/19 9:28 AM, Ori Kam wrote:
> This patch set implements the hairpin feature.
> The hairpin feature was introduced in RFC [1].
>
> The hairpin feature (an alternative name could be "forward") acts as a
> "bump on the wire", meaning that a packet received from the wire can be
> modified using offloaded actions and then sent back to the wire without
> application intervention, which saves CPU cycles.
>
> The hairpin is the inverse of loopback, in which the application
> sends a packet and then receives it again
> without it being sent to the wire.
>
> The hairpin can be used by a number of different VNFs, for example a
> load balancer, a gateway and so on.
>
> As can be seen from the description above, a hairpin is basically an Rx
> queue connected to a Tx queue.

Is it just a pipe, or are RTE flow API rules required?
If it is just a pipe, what about transformations which could be useful
in this case (encaps/decaps, NAT, etc.)? How to achieve them?
If it is not a pipe and flow API rules are required, why is peer
information required?

> During the design phase I considered two ways to implement this
> feature: the first is adding a new rte_flow action, and the second
> is creating a special kind of queue.
>
> The advantages of using the queue approach:
> 1. More control for the application, e.g. over the queue depth (the
> amount of memory that should be used).

But it inherits many parameters which are not really applicable to hairpin
queues. If all parameters are applicable, it should be explained in the
context of the hairpin queues.

> 2. Enables QoS. QoS is normally a parameter of a queue, so with this
> approach it is easy to integrate with such systems.

Could you elaborate?

> 3. Native integration with the rte_flow API. Simply setting the target
> queue/RSS to a hairpin queue results in the traffic being routed
> to the hairpin queue.

It sounds like queues are not required for the flow API at all.
If the goal is to send traffic outside to a specified physical port,
just specify it as a flow API action. That's it.

> 4. Enables queue offloads.

Which offloads are applicable to hairpin queues?

> Each hairpin Rxq can be connected to one Txq or to a number of Txqs,
> which can belong to different ports if the PMD supports it. The same
> goes the other way: each hairpin Txq can be connected to one or more
> Rxqs. This is the reason that both the Txq setup and the Rxq setup take
> the hairpin configuration structure.
>
> From the PMD perspective, the number of Rxqs/Txqs is the total of
> standard queues plus hairpin queues.
>
> To configure a hairpin queue, the user should call
> rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup instead
> of the normal queue setup functions.
>
> The hairpin queues are not part of the normal RSS functions.
>
> To use the queues, the user simply creates a flow that points to
> queue/RSS actions whose queues are hairpin queues.
> The reasons for adding two new functions for hairpin queue setup are:
> 1. Avoid an API break.
> 2. Avoid extra and unused parameters.
>
>
> This series must be applied after series [2].
>
> [1] https://inbox.dpdk.org/dev/1565703468-55617-1-git-send-email-orika@mellanox.com/
> [2] https://inbox.dpdk.org/dev/1569398015-6027-1-git-send-email-viacheslavo@mellanox.com/

[snip]
  
Ori Kam Sept. 26, 2019, 3:22 p.m. UTC | #2
Hi Andrew,
Thanks for your comments, please see below.

> -----Original Message-----
> From: Andrew Rybchenko <arybchenko@solarflare.com>
> 
> On 9/26/19 9:28 AM, Ori Kam wrote:
> > This patch set implements the hairpin feature.
> > The hairpin feature was introduced in RFC [1].
> >
> > The hairpin feature (an alternative name could be "forward") acts as a
> > "bump on the wire", meaning that a packet received from the wire can be
> > modified using offloaded actions and then sent back to the wire without
> > application intervention, which saves CPU cycles.
> >
> > The hairpin is the inverse of loopback, in which the application
> > sends a packet and then receives it again
> > without it being sent to the wire.
> >
> > The hairpin can be used by a number of different VNFs, for example a
> > load balancer, a gateway and so on.
> >
> > As can be seen from the description above, a hairpin is basically an Rx
> > queue connected to a Tx queue.
>
> Is it just a pipe, or are RTE flow API rules required?
> If it is just a pipe, what about transformations which could be useful
> in this case (encaps/decaps, NAT, etc.)? How to achieve them?
> If it is not a pipe and flow API rules are required, why is peer
> information required?
> 

RTE flow rules are required, and the peer information is needed in order to connect
the Rx queue to the Tx queue. From the application side, it simply sets an ingress
RTE flow rule that has queue or RSS actions, where the queues are hairpin queues.
It may be possible to have one Rx queue connected to a number of Tx queues in order
to distribute the sending.
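
To make that concrete, here is a hedged sketch of such a rule using the
standard rte_flow API. The helper name and the IPv4 pattern are only an
example; 'hairpin_rxq' is assumed to be a queue index previously
configured with rte_eth_rx_hairpin_queue_setup().

#include <rte_flow.h>

/* Steer all ingress IPv4 traffic to the hairpin Rx queue. */
static struct rte_flow *
steer_to_hairpin(uint16_t port_id, uint16_t hairpin_rxq,
		 struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = hairpin_rxq };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}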
 
> > During the design phase I considered two ways to implement this
> > feature: the first is adding a new rte_flow action, and the second
> > is creating a special kind of queue.
> >
> > The advantages of using the queue approach:
> > 1. More control for the application, e.g. over the queue depth (the
> > amount of memory that should be used).
> 
> But it inherits many parameters which are not really applicable to hairpin
> queues. If all parameters are applicable, it should be explained in the
> context of the hairpin queues.
> 
Most if not all parameters are applicable to hairpin queues as well,
and the ones that were not, for example the mempool, were removed.

> > 2. Enables QoS. QoS is normally a parameter of a queue, so with this
> > approach it is easy to integrate with such systems.
> 
> Could you elaborate?
> 
I will try.
If you are asking about use cases, we can assume a cloud provider that has a
number of customers, each with a different bandwidth. We can configure a Tx
queue with a higher priority, which will result in that queue getting more
bandwidth.
This is true for both hairpin and non-hairpin queues.
We are working on a more detailed API for using it, but the HW can support it.

> > 3. Native integration with the rte_flow API. Simply setting the target
> > queue/RSS to a hairpin queue results in the traffic being routed
> > to the hairpin queue.
> 
> It sounds like queues are not required for the flow API at all.
> If the goal is to send traffic outside to a specified physical port,
> just specify it as a flow API action. That's it.
> 
This was one of the possible options, but as stated above we think there is
more meaning in looking at it as a queue, which gives the application better
control, for example selecting which queues to connect to which queues. If it
were done as an RTE flow action, the PMD would create the queues and do the
binding internally, and the application would lose control.

> > 4. Enables queue offloads.
> 
> Which offloads are applicable to hairpin queues?
> 
VLAN stripping, for example, and all of the rte_flow actions that target a queue.

> > Each hairpin Rxq can be connected to one Txq or to a number of Txqs,
> > which can belong to different ports if the PMD supports it. The same
> > goes the other way: each hairpin Txq can be connected to one or more
> > Rxqs. This is the reason that both the Txq setup and the Rxq setup take
> > the hairpin configuration structure.
> >
> > From the PMD perspective, the number of Rxqs/Txqs is the total of
> > standard queues plus hairpin queues.
> >
> > To configure a hairpin queue, the user should call
> > rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup instead
> > of the normal queue setup functions.
> >
> > The hairpin queues are not part of the normal RSS functions.
> >
> > To use the queues, the user simply creates a flow that points to
> > queue/RSS actions whose queues are hairpin queues.
> > The reasons for adding two new functions for hairpin queue setup are:
> > 1. Avoid an API break.
> > 2. Avoid extra and unused parameters.
> >
> >
> > This series must be applied after series [2].
> >
> > [1] https://inbox.dpdk.org/dev/1565703468-55617-1-git-send-email-orika@mellanox.com/
> > [2] https://inbox.dpdk.org/dev/1569398015-6027-1-git-send-email-viacheslavo@mellanox.com/
> 
> [snip]

Thanks
Ori
  
Andrew Rybchenko Sept. 26, 2019, 3:48 p.m. UTC | #3
Hi Ori,

On 9/26/19 6:22 PM, Ori Kam wrote:
> Hi Andrew,
> Thanks for your comments, please see below.
>
>> -----Original Message-----
>> From: Andrew Rybchenko <arybchenko@solarflare.com>
>>
>> On 9/26/19 9:28 AM, Ori Kam wrote:
>>> This patch set implements the hairpin feature.
>>> The hairpin feature was introduced in RFC [1].
>>>
>>> The hairpin feature (an alternative name could be "forward") acts as a
>>> "bump on the wire", meaning that a packet received from the wire can be
>>> modified using offloaded actions and then sent back to the wire without
>>> application intervention, which saves CPU cycles.
>>>
>>> The hairpin is the inverse of loopback, in which the application
>>> sends a packet and then receives it again
>>> without it being sent to the wire.
>>>
>>> The hairpin can be used by a number of different VNFs, for example a
>>> load balancer, a gateway and so on.
>>>
>>> As can be seen from the description above, a hairpin is basically an Rx
>>> queue connected to a Tx queue.
>> Is it just a pipe, or are RTE flow API rules required?
>> If it is just a pipe, what about transformations which could be useful
>> in this case (encaps/decaps, NAT, etc.)? How to achieve them?
>> If it is not a pipe and flow API rules are required, why is peer
>> information required?
>>
> RTE flow rules are required, and the peer information is needed in order to connect
> the Rx queue to the Tx queue. From the application side, it simply sets an ingress
> RTE flow rule that has queue or RSS actions, where the queues are hairpin queues.
> It may be possible to have one Rx queue connected to a number of Tx queues in order
> to distribute the sending.

It looks like I am starting to understand. First, RTE flow does its job and
redirects some packets to the hairpin Rx queue(s). Then, the connection
of hairpin Rx queues to Tx queues does its job. What happens if
an Rx queue is connected to many Tx queues? Are packets duplicated?

>>> During the design phase I considered two ways to implement this
>>> feature: the first is adding a new rte_flow action, and the second
>>> is creating a special kind of queue.
>>>
>>> The advantages of using the queue approach:
>>> 1. More control for the application, e.g. over the queue depth (the
>>> amount of memory that should be used).
>> But it inherits many parameters which are not really applicable to hairpin
>> queues. If all parameters are applicable, it should be explained in the
>> context of the hairpin queues.
>>
> Most if not all parameters are applicable to hairpin queues as well,
> and the ones that were not, for example the mempool, were removed.

I would really like to understand the meaning of each Rx/Tx queue
configuration parameter for the hairpin case. So, I hope to see it in the
documentation.

>>> 2. Enables QoS. QoS is normally a parameter of a queue, so with this
>>> approach it is easy to integrate with such systems.
>> Could you elaborate?
>>
> I will try.
> If you are asking about use cases, we can assume a cloud provider that has a
> number of customers, each with a different bandwidth. We can configure a Tx
> queue with a higher priority, which will result in that queue getting more
> bandwidth.
> This is true for both hairpin and non-hairpin queues.
> We are working on a more detailed API for using it, but the HW can support it.

OK, a bit abstract still, but makes sense.

>>> 3. Native integration with the rte_flow API. Simply setting the target
>>> queue/RSS to a hairpin queue results in the traffic being routed
>>> to the hairpin queue.
>> It sounds like queues are not required for the flow API at all.
>> If the goal is to send traffic outside to a specified physical port,
>> just specify it as a flow API action. That's it.
>>
> This was one of the possible options, but as stated above we think there is more meaning
> in looking at it as a queue, which gives the application better control, for example
> selecting which queues to connect to which queues. If it were done as an RTE flow action,
> the PMD would create the queues and do the binding internally, and the application would
> lose control.
>
>>> 4. Enables queue offloads.
>> Which offloads are applicable to hairpin queues?
>>
> VLAN stripping, for example, and all of the rte_flow actions that target a queue.

Can it be done with the VLAN_POP action at the RTE flow level?
The question is why we need it here as an Rx queue offload.
Who will get and process the stripped VLAN?
I don't understand what you mean by the rte_flow actions here.
Sorry, but I still think that many Rx and Tx offloads are not applicable.
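
(For reference, the flow-level alternative asked about above would look
roughly like the sketch below. OF_POP_VLAN is an existing rte_flow
action; whether a given PMD accepts it combined with a hairpin queue is
an assumption here, and 'port_id'/'hairpin_rxq' follow the earlier
sketches.)

	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* Pop the outer VLAN at the flow level, then hand the packet to
	 * the hairpin queue -- no Rx queue offload involved. */
	struct rte_flow_action_queue queue = { .index = hairpin_rxq };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_OF_POP_VLAN },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;
	struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
						actions, &error);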

>>> Each hairpin Rxq can be connected to one Txq or to a number of Txqs,
>>> which can belong to different ports if the PMD supports it. The same
>>> goes the other way: each hairpin Txq can be connected to one or more
>>> Rxqs. This is the reason that both the Txq setup and the Rxq setup take
>>> the hairpin configuration structure.
>>>
>>> From the PMD perspective, the number of Rxqs/Txqs is the total of
>>> standard queues plus hairpin queues.
>>>
>>> To configure a hairpin queue, the user should call
>>> rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup instead
>>> of the normal queue setup functions.
>>>
>>> The hairpin queues are not part of the normal RSS functions.
>>>
>>> To use the queues, the user simply creates a flow that points to
>>> queue/RSS actions whose queues are hairpin queues.
>>> The reasons for adding two new functions for hairpin queue setup are:
>>> 1. Avoid an API break.
>>> 2. Avoid extra and unused parameters.
>>>
>>>
>>> This series must be applied after series [2].
>>>
>>> [1] https://inbox.dpdk.org/dev/1565703468-55617-1-git-send-email-orika@mellanox.com/
>>> [2] https://inbox.dpdk.org/dev/1569398015-6027-1-git-send-email-viacheslavo@mellanox.com/
>>
>> [snip]
> Thanks
> Ori
  
Ori Kam Sept. 26, 2019, 4:11 p.m. UTC | #4
Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <arybchenko@solarflare.com>
> 
> Hi Ori,
> 
> On 9/26/19 6:22 PM, Ori Kam wrote:
> > Hi Andrew,
> > Thanks for your comments, please see below.
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <arybchenko@solarflare.com>
> >>
> >> On 9/26/19 9:28 AM, Ori Kam wrote:
> >>> This patch set implements the hairpin feature.
> >>> The hairpin feature was introduced in RFC [1].
> >>>
> >>> The hairpin feature (an alternative name could be "forward") acts as a
> >>> "bump on the wire", meaning that a packet received from the wire can be
> >>> modified using offloaded actions and then sent back to the wire without
> >>> application intervention, which saves CPU cycles.
> >>>
> >>> The hairpin is the inverse of loopback, in which the application
> >>> sends a packet and then receives it again
> >>> without it being sent to the wire.
> >>>
> >>> The hairpin can be used by a number of different VNFs, for example a
> >>> load balancer, a gateway and so on.
> >>>
> >>> As can be seen from the description above, a hairpin is basically an Rx
> >>> queue connected to a Tx queue.
> >> Is it just a pipe, or are RTE flow API rules required?
> >> If it is just a pipe, what about transformations which could be useful
> >> in this case (encaps/decaps, NAT, etc.)? How to achieve them?
> >> If it is not a pipe and flow API rules are required, why is peer
> >> information required?
> >>
> > RTE flow rules are required, and the peer information is needed in order to connect
> > the Rx queue to the Tx queue. From the application side, it simply sets an ingress
> > RTE flow rule that has queue or RSS actions, where the queues are hairpin queues.
> > It may be possible to have one Rx queue connected to a number of Tx queues in order
> > to distribute the sending.
> 
> It looks like I am starting to understand. First, RTE flow does its job and
> redirects some packets to the hairpin Rx queue(s). Then, the connection
> of hairpin Rx queues to Tx queues does its job. What happens if
> an Rx queue is connected to many Tx queues? Are packets duplicated?
> 


Yes, your understanding is correct.
Regarding the number of Tx queues connected to a single Rx queue, that is an
answer I can't give you; it depends on the NIC. It could duplicate the packets
or it could RSS them. In Mellanox we currently support only a 1 to 1 connection.

> >>> During the design phase I considered two ways to implement this
> >>> feature: the first is adding a new rte_flow action, and the second
> >>> is creating a special kind of queue.
> >>>
> >>> The advantages of using the queue approach:
> >>> 1. More control for the application, e.g. over the queue depth (the
> >>> amount of memory that should be used).
> >> But it inherits many parameters which are not really applicable to hairpin
> >> queues. If all parameters are applicable, it should be explained in the
> >> context of the hairpin queues.
> >>
> > Most if not all parameters are applicable to hairpin queues as well,
> > and the ones that were not, for example the mempool, were removed.
> 
> I would really like to understand the meaning of each Rx/Tx queue
> configuration parameter for the hairpin case. So, I hope to see it in the
> documentation.
> 

Those are just like for a normal queue; some NICs may need this information.

> >>> 2. Enables QoS. QoS is normally a parameter of a queue, so with this
> >>> approach it is easy to integrate with such systems.
> >> Could you elaborate?
> >>
> > I will try.
> > If you are asking about use cases, we can assume a cloud provider that has a
> > number of customers, each with a different bandwidth. We can configure a Tx
> > queue with a higher priority, which will result in that queue getting more
> > bandwidth.
> > This is true for both hairpin and non-hairpin queues.
> > We are working on a more detailed API for using it, but the HW can support it.
> 
> OK, a bit abstract still, but makes sense.
> 
😊 
> >>> 3. Native integration with the rte_flow API. Simply setting the target
> >>> queue/RSS to a hairpin queue results in the traffic being routed
> >>> to the hairpin queue.
> >> It sounds like queues are not required for the flow API at all.
> >> If the goal is to send traffic outside to a specified physical port,
> >> just specify it as a flow API action. That's it.
> >>
> > This was one of the possible options, but as stated above we think there is
> > more meaning in looking at it as a queue, which gives the application better
> > control, for example selecting which queues to connect to which queues. If it
> > were done as an RTE flow action, the PMD would create the queues and do the
> > binding internally, and the application would lose control.
> >
> >>> 4. Enables queue offloads.
> >> Which offloads are applicable to hairpin queues?
> >>
> > VLAN stripping, for example, and all of the rte_flow actions that target a
> > queue.
> 
> Can it be done with the VLAN_POP action at the RTE flow level?
> The question is why we need it here as an Rx queue offload.
> Who will get and process the stripped VLAN?
> I don't understand what you mean by the rte_flow actions here.
> Sorry, but I still think that many Rx and Tx offloads are not applicable.
> 

I agree with you: first, all important actions can be done using RTE flow.
But some NICs may not use RTE flow, and then this is good for them.
The most important reason is that I think that in the future we will have
shared offloads.

> >>> Each hairpin Rxq can be connected to one Txq or to a number of Txqs,
> >>> which can belong to different ports if the PMD supports it. The same
> >>> goes the other way: each hairpin Txq can be connected to one or more
> >>> Rxqs. This is the reason that both the Txq setup and the Rxq setup take
> >>> the hairpin configuration structure.
> >>>
> >>> From the PMD perspective, the number of Rxqs/Txqs is the total of
> >>> standard queues plus hairpin queues.
> >>>
> >>> To configure a hairpin queue, the user should call
> >>> rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup instead
> >>> of the normal queue setup functions.
> >>>
> >>> The hairpin queues are not part of the normal RSS functions.
> >>>
> >>> To use the queues, the user simply creates a flow that points to
> >>> queue/RSS actions whose queues are hairpin queues.
> >>> The reasons for adding two new functions for hairpin queue setup are:
> >>> 1. Avoid an API break.
> >>> 2. Avoid extra and unused parameters.
> >>>
> >>>
> >>> This series must be applied after series [2].
> >>>
> >>> [1] https://inbox.dpdk.org/dev/1565703468-55617-1-git-send-email-orika@mellanox.com/
> >>> [2] https://inbox.dpdk.org/dev/1569398015-6027-1-git-send-email-viacheslavo@mellanox.com/
> >>
> >> [snip]
> > Thanks
> > Ori

Thanks,
Ori