[1/9] net/mlx5: update flex parser arc types support
Commit Message
Add support for input IPv4 and for ESP output flex parser arcs.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_flex.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
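As a usage illustration (not part of the patch): with this change, a flex item can be linked after an outer IPv4 header by passing an IPv4 item with a full next-protocol mask as the input link. The fragment below sketches this against the public `rte_flow_item_flex_conf`/`rte_flow_item_flex_link` API; it assumes the DPDK `rte_flow` headers and is not compilable standalone, and the protocol value 0x99 is an arbitrary placeholder.

```c
/* Illustrative fragment only (assumes DPDK rte_flow headers). */
static struct rte_flow_item_ipv4 ipv4_spec = { .hdr.next_proto_id = 0x99 };
/* The PMD requires a full mask here: only next_proto_id set, to 0xff. */
static struct rte_flow_item_ipv4 ipv4_mask = { .hdr.next_proto_id = 0xff };

static struct rte_flow_item_flex_link in_link = {
	.item = {
		.type = RTE_FLOW_ITEM_TYPE_IPV4,
		.spec = &ipv4_spec,
		.mask = &ipv4_mask,
	},
};

/* Attached through struct rte_flow_item_flex_conf before calling
 * rte_flow_flex_item_create():
 *   conf.input_link = &in_link;
 *   conf.nb_inputs = 1;
 */
```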
Comments
> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, September 18, 2024 15:46
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>
> Subject: [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item
>
> This is a series of independent patches related to the flex item.
> There is no direct dependency between the patches besides the merging
> dependency inferred by git, which is the reason the patches are sent as a
> series. For more details, please see the individual patch commit messages.
>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
>
> Viacheslav Ovsiienko (9):
> net/mlx5: update flex parser arc types support
> net/mlx5: add flex item query tunnel mode routine
> net/mlx5/hws: fix flex item support as tunnel header
> net/mlx5: fix flex item tunnel mode handling
> net/mlx5: fix number of supported flex parsers
> app/testpmd: remove flex item init command leftover
> net/mlx5: fix next protocol validation after flex item
> net/mlx5: fix non full word sample fields in flex item
> net/mlx5: fix flex item header length field translation
>
> app/test-pmd/cmdline_flow.c | 12 --
> drivers/net/mlx5/hws/mlx5dr_definer.c | 17 +-
> drivers/net/mlx5/mlx5.h | 9 +-
> drivers/net/mlx5/mlx5_flow_dv.c | 7 +-
> drivers/net/mlx5/mlx5_flow_flex.c | 215 ++++++++++++++++----------
> drivers/net/mlx5/mlx5_flow_hw.c | 8 +
> 6 files changed, 167 insertions(+), 101 deletions(-)
>
> --
> 2.34.1
Series-acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Best regards,
Dariusz Sosnowski
Hi,
From: Slava Ovsiienko <viacheslavo@nvidia.com>
Sent: Wednesday, September 18, 2024 4:46 PM
To: dev@dpdk.org
Cc: Matan Azrad; Raslan Darawsheh; Ori Kam; Dariusz Sosnowski
Subject: [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item
Series applied to next-net-mlx,
Kindest regards
Raslan Darawsheh
@@ -1111,6 +1111,8 @@ mlx5_flex_arc_type(enum rte_flow_item_type type, int in)
return MLX5_GRAPH_ARC_NODE_GENEVE;
case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
return MLX5_GRAPH_ARC_NODE_VXLAN_GPE;
+ case RTE_FLOW_ITEM_TYPE_ESP:
+ return MLX5_GRAPH_ARC_NODE_IPSEC_ESP;
default:
return -EINVAL;
}
@@ -1148,6 +1150,22 @@ mlx5_flex_arc_in_udp(const struct rte_flow_item *item,
return rte_be_to_cpu_16(spec->hdr.dst_port);
}
+static int
+mlx5_flex_arc_in_ipv4(const struct rte_flow_item *item,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ipv4 *spec = item->spec;
+ const struct rte_flow_item_ipv4 *mask = item->mask;
+ struct rte_flow_item_ipv4 ip = { .hdr.next_proto_id = 0xff };
+
+ if (memcmp(mask, &ip, sizeof(struct rte_flow_item_ipv4))) {
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "invalid ipv4 item mask, full mask is desired");
+ }
+ return spec->hdr.next_proto_id;
+}
+
static int
mlx5_flex_arc_in_ipv6(const struct rte_flow_item *item,
struct rte_flow_error *error)
@@ -1210,6 +1228,9 @@ mlx5_flex_translate_arc_in(struct mlx5_hca_flex_attr *attr,
case RTE_FLOW_ITEM_TYPE_UDP:
ret = mlx5_flex_arc_in_udp(rte_item, error);
break;
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ ret = mlx5_flex_arc_in_ipv4(rte_item, error);
+ break;
case RTE_FLOW_ITEM_TYPE_IPV6:
ret = mlx5_flex_arc_in_ipv6(rte_item, error);
break;