[v4] ethdev: add send to kernel action

Message ID 20220929145445.181369-1-michaelsav@nvidia.com (mailing list archive)
State Superseded, archived
Delegated to: Andrew Rybchenko
Series [v4] ethdev: add send to kernel action

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/github-robot: build success github build: passed
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-aarch64-unit-testing success Testing PASS
ci/intel-Testing success Testing PASS

Commit Message

Michael Savisko Sept. 29, 2022, 2:54 p.m. UTC
In some cases an application may receive a packet that should have been
received by the kernel. In such cases the application uses KNI or another
mechanism to transfer the packet to the kernel.

With a bifurcated driver we can have a rule that routes packets matching
a pattern (for example, IPv4 packets) to the DPDK application, while the rest
of the traffic is received by the kernel.
But if we want to receive most of the traffic in DPDK except for a specific
pattern (for example, ICMP packets) that should be processed by the kernel,
it is easier to re-route those packets with a single rule.

This commit introduces a new rte_flow action which allows the application to
re-route packets directly to the kernel without software involvement.

Add a new testpmd rte_flow action 'send_to_kernel'. The application
may use this action to route a packet to the kernel while it is still
in the HW.

Example with testpmd command:

flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
type mask 0xffff / end actions send_to_kernel / end

Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
v4:
- improve description comment above RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL

v3:
http://patches.dpdk.org/project/dpdk/patch/20220919155013.61473-1-michaelsav@nvidia.com/

v2:
http://patches.dpdk.org/project/dpdk/patch/20220914093219.11728-1-michaelsav@nvidia.com/

---
 app/test-pmd/cmdline_flow.c                 |  9 +++++++++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  2 ++
 lib/ethdev/rte_flow.c                       |  1 +
 lib/ethdev/rte_flow.h                       | 12 ++++++++++++
 4 files changed, 24 insertions(+)
  

Comments

Andrew Rybchenko Oct. 3, 2022, 7:53 a.m. UTC | #1
On 9/29/22 17:54, Michael Savisko wrote:
> [...]
> +	/**
> +	 * Send packets to the kernel, without going to userspace at all.
> +	 * The packets will be received by the kernel driver sharing
> +	 * the same device as the DPDK port on which this action is
> +	 * configured. This action is mostly suits bifurcated driver
> +	 * model.
> +	 * This is an ingress non-transfer action only.

Maybe we should not limit the definition to ingress only?
It could be useful on egress as a way to reroute a packet
back to the kernel.


  
Ori Kam Oct. 3, 2022, 8:23 a.m. UTC | #2
Hi Andrew

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, 3 October 2022 10:54
> On 9/29/22 17:54, Michael Savisko wrote:
> > [...]
> > +	/**
> > +	 * Send packets to the kernel, without going to userspace at all.
> > +	 * The packets will be received by the kernel driver sharing
> > +	 * the same device as the DPDK port on which this action is
> > +	 * configured. This action is mostly suits bifurcated driver
> > +	 * model.
> > +	 * This is an ingress non-transfer action only.
> 
> May be we should not limit the definition to ingress only?
> It could be useful on egress as a way to reroute packet
> back to kernel.
> 

Interesting, but there are no kernel queues on egress that can receive packets (by definition of egress).
Do you mean that this will also do loopback from the egress back to the ingress of the same port and then
send to the kernel?
If so, I think we need a new action "loop_back".
 
  
Andrew Rybchenko Oct. 3, 2022, 9:44 a.m. UTC | #3
On 10/3/22 11:23, Ori Kam wrote:
> Hi Andrew
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Monday, 3 October 2022 10:54
>>> [...]
>>> +	/**
>>> +	 * Send packets to the kernel, without going to userspace at all.
>>> +	 * The packets will be received by the kernel driver sharing
>>> +	 * the same device as the DPDK port on which this action is
>>> +	 * configured. This action is mostly suits bifurcated driver
>>> +	 * model.
>>> +	 * This is an ingress non-transfer action only.
>>
>> May be we should not limit the definition to ingress only?
>> It could be useful on egress as a way to reroute packet
>> back to kernel.
>>
> 
> Interesting, but there are no Kernel queues on egress that can receive packets (by definition of egress)
> do you mean that this will also do loopback from the egress back to the ingress of the same port and then
> send to kernel?
> if so, I think we need a new action "loop_back"

Yes, I meant intercept the packet on egress and send it to the kernel.
But we still need loopback+send_to_kernel. Loopback itself
cannot send to the kernel. Moreover, it would have to be two rules:
loopback on egress plus send-to-kernel on ingress. Is it
really worth it? I'm not sure. Yes, it sounds a bit
better from an arch point of view, but I'm still unsure.
I'd allow send-to-kernel on egress. Up to you.

  
Ori Kam Oct. 3, 2022, 9:57 a.m. UTC | #4
Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, 3 October 2022 12:44
> 
> >>> [...]
> >>> +	/**
> >>> +	 * Send packets to the kernel, without going to userspace at all.
> >>> +	 * The packets will be received by the kernel driver sharing
> >>> +	 * the same device as the DPDK port on which this action is
> >>> +	 * configured. This action is mostly suits bifurcated driver
> >>> +	 * model.
> >>> +	 * This is an ingress non-transfer action only.
> >>
> >> May be we should not limit the definition to ingress only?
> >> It could be useful on egress as a way to reroute packet
> >> back to kernel.
> >>
> >
> > Interesting, but there are no Kernel queues on egress that can receive
> packets (by definition of egress)
> > do you mean that this will also do loopback from the egress back to the
> ingress of the same port and then
> > send to kernel?
> > if so, I think we need a new action "loop_back"
> 
> Yes, I meant intercept packet on egress and send to kernel.
> But we still need loopback+send_to_kernel. Loopback itself
> cannot send to kernel. Moreover it should be two rules:
> loopback on egress plus send-to-kernel on ingress. Does
> it really worse it? I'm not sure. Yes, it sounds a bit
> better from arch point of view, but I'm still unsure.
> I'd allow send-to-kernel on egress. Up to you.
> 

It looks more correct with loop_back on the egress and send-to-kernel on the ingress.
I suggest we keep the current design,
and if we see that we can merge those two commands, we will change it.

  
Andrew Rybchenko Oct. 3, 2022, 10:47 a.m. UTC | #5
On 10/3/22 12:57, Ori Kam wrote:
> Hi Andrew,
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Monday, 3 October 2022 12:44
>>
>>>>> [...]
>>>>> +	/**
>>>>> +	 * Send packets to the kernel, without going to userspace at all.
>>>>> +	 * The packets will be received by the kernel driver sharing
>>>>> +	 * the same device as the DPDK port on which this action is
>>>>> +	 * configured. This action is mostly suits bifurcated driver
>>>>> +	 * model.
>>>>> +	 * This is an ingress non-transfer action only.
>>>>
>>>> May be we should not limit the definition to ingress only?
>>>> It could be useful on egress as a way to reroute packet
>>>> back to kernel.
>>>>
>>>
>>> Interesting, but there are no Kernel queues on egress that can receive
>> packets (by definition of egress)
>>> do you mean that this will also do loopback from the egress back to the
>> ingress of the same port and then
>>> send to kernel?
>>> if so, I think we need a new action "loop_back"
>>
>> Yes, I meant intercept packet on egress and send to kernel.
>> But we still need loopback+send_to_kernel. Loopback itself
>> cannot send to kernel. Moreover it should be two rules:
>> loopback on egress plus send-to-kernel on ingress. Does
>> it really worse it? I'm not sure. Yes, it sounds a bit
>> better from arch point of view, but I'm still unsure.
>> I'd allow send-to-kernel on egress. Up to you.
>>
> 
> It looks more correct with loop_back on the egress and send-to-kernel on ingress.
> I suggest keeping the current design,
> and if we see that we can merge those two commands, we will change it.

OK. And the last question: do we need to announce it in release
notes?

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

> 
>>>
>>>>
>>>>> +	 *
>>>>> +	 * No associated configuration structure.
>>>>> +	 */
>>>>> +	RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
>>>>>     };
>>>>>
>>>>>     /**
>>>
>
  
Ori Kam Oct. 3, 2022, 11:06 a.m. UTC | #6
Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, 3 October 2022 13:47
> 
> [...]
> 
> OK. And the last question: do we need to announce it in release
> notes?
> 

+1 

> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
  
Andrew Rybchenko Oct. 3, 2022, 11:08 a.m. UTC | #7
On 10/3/22 14:06, Ori Kam wrote:
> Hi Andrew,
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Monday, 3 October 2022 13:47
>>
>> [...]
>>
>> OK. And the last question: do we need to announce it in release
>> notes?
>>
> 
> +1

Michael,

please, send v5 with release notes update. Don't forget to
rebase in on current next-net/main, please.

Thanks,
Andrew.

  

Patch

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 7f50028eb7..042f6b34a6 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -612,6 +612,7 @@  enum index {
 	ACTION_PORT_REPRESENTOR_PORT_ID,
 	ACTION_REPRESENTED_PORT,
 	ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
+	ACTION_SEND_TO_KERNEL,
 };
 
 /** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -1872,6 +1873,7 @@  static const enum index next_action[] = {
 	ACTION_CONNTRACK_UPDATE,
 	ACTION_PORT_REPRESENTOR,
 	ACTION_REPRESENTED_PORT,
+	ACTION_SEND_TO_KERNEL,
 	ZERO,
 };
 
@@ -6341,6 +6343,13 @@  static const struct token token_list[] = {
 		.help = "submit a list of associated actions for red",
 		.next = NEXT(next_action),
 	},
+	[ACTION_SEND_TO_KERNEL] = {
+		.name = "send_to_kernel",
+		.help = "send packets to kernel",
+		.priv = PRIV_ACTION(SEND_TO_KERNEL, 0),
+		.next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+		.call = parse_vc,
+	},
 
 	/* Top-level command. */
 	[ADD] = {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 330e34427d..c259c8239a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4189,6 +4189,8 @@  This section lists supported actions and their attributes, if any.
 
   - ``ethdev_port_id {unsigned}``: ethdev port ID
 
+- ``send_to_kernel``: send packets to kernel.
+
 Destroying flow rules
 ~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 501be9d602..627c671ce4 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -259,6 +259,7 @@  static const struct rte_flow_desc_data rte_flow_desc_action[] = {
 	MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
 	MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)),
 	MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)),
+	MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
 };
 
 int
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a79f1e7ef0..2c15279a3b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2879,6 +2879,18 @@  enum rte_flow_action_type {
 	 * @see struct rte_flow_action_ethdev
 	 */
 	RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
+
+	/**
+	 * Send packets to the kernel, without going to userspace at all.
+	 * The packets will be received by the kernel driver sharing
+	 * the same device as the DPDK port on which this action is
+	 * configured. This action mostly suits the bifurcated driver
+	 * model.
+	 * This is an ingress non-transfer action only.
+	 *
+	 * No associated configuration structure.
+	 */
+	RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
 };
 
 /**
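
For reference, the testpmd rule in the commit message can also be expressed through the rte_flow C API. The sketch below is not part of the patch; it is a hedged illustration of how an application built against a DPDK release containing this action might install the same rule (match EtherType 0x0800 on ingress, group 1, send matching packets to the kernel). The `port_id` value and the helper name `ipv4_to_kernel` are illustrative.

```c
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_flow.h>

/* Sketch: route IPv4 packets of a bifurcated port to its kernel driver.
 * Equivalent testpmd rule:
 *   flow create 0 ingress priority 0 group 1
 *        pattern eth type spec 0x0800 type mask 0xffff / end
 *        actions send_to_kernel / end
 */
static struct rte_flow *
ipv4_to_kernel(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = {
		.group = 1,
		.priority = 0,
		.ingress = 1,	/* ingress non-transfer only */
	};
	struct rte_flow_item_eth eth_spec = {
		.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	struct rte_flow_item_eth eth_mask = {
		.hdr.ether_type = RTE_BE16(0xffff),
	};
	struct rte_flow_item pattern[] = {
		{
			.type = RTE_FLOW_ITEM_TYPE_ETH,
			.spec = &eth_spec,
			.mask = &eth_mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		/* No associated configuration structure. */
		{ .type = RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}
```

On failure the function returns NULL and `error` is populated; whether the action is supported at all remains PMD-specific, so applications should be prepared for `rte_flow_validate()`/`rte_flow_create()` to reject it.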