From patchwork Fri Sep 30 12:53:13 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 117225
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: dev@dpdk.org, Gregory Etelson
Subject: [PATCH v3 15/17] net/mlx5: support flow integrity in HWS group 0
Date: Fri, 30 Sep 2022 15:53:13 +0300
Message-ID: <20220930125315.5079-16-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20220930125315.5079-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com>
 <20220930125315.5079-1-suanmingm@nvidia.com>
MIME-Version: 1.0
From: Gregory Etelson

- Reformat flow integrity item translation for HWS code.
- Support flow integrity bits in HWS group 0.
- Update integrity item translation to match positive semantics only.

Positive flow semantics is described in patch [ae37c0f60c].

Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/mlx5_flow.h    |   1 +
 drivers/net/mlx5/mlx5_flow_dv.c | 163 ++++++++++++++++----------------
 drivers/net/mlx5/mlx5_flow_hw.c |   8 ++
 3 files changed, 90 insertions(+), 82 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e45869a890..3f4aa080bb 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1462,6 +1462,7 @@ struct mlx5_dv_matcher_workspace {
 	struct mlx5_flow_rss_desc *rss_desc; /* RSS descriptor. */
 	const struct rte_flow_item *tunnel_item; /* Flow tunnel item. */
 	const struct rte_flow_item *gre_item; /* Flow GRE item. */
+	const struct rte_flow_item *integrity_items[2];
 };
 
 struct mlx5_flow_split_info {
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d31838e26e..e86a06eae6 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -12648,132 +12648,121 @@ flow_dv_aso_age_params_init(struct rte_eth_dev *dev,
 
 static void
 flow_dv_translate_integrity_l4(const struct rte_flow_item_integrity *mask,
-			       const struct rte_flow_item_integrity *value,
-			       void *headers_m, void *headers_v)
+			       void *headers)
 {
+	/*
+	 * In HWS mode MLX5_ITEM_UPDATE() macro assigns the same pointer to
+	 * both mask and value, therefore either can be used.
+	 * In SWS SW_V mode mask points to item mask and value points to item
+	 * spec. Integrity item value is used only if matching mask is set.
+	 * Use mask reference here to keep SWS functionality.
+	 */
 	if (mask->l4_ok) {
 		/* RTE l4_ok filter aggregates hardware l4_ok and
 		 * l4_checksum_ok filters.
 		 * Positive RTE l4_ok match requires hardware match on both L4
 		 * hardware integrity bits.
-		 * For negative match, check hardware l4_checksum_ok bit only,
-		 * because hardware sets that bit to 0 for all packets
-		 * with bad L4.
+		 * PMD supports positive integrity item semantics only.
 		 */
-		if (value->l4_ok) {
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_ok, 1);
-		}
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_checksum_ok,
-			 !!value->l4_ok);
-	}
-	if (mask->l4_csum_ok) {
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_checksum_ok,
-			 value->l4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_ok, 1);
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_checksum_ok, 1);
+	} else if (mask->l4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l4_checksum_ok, 1);
 	}
 }
 
 static void
 flow_dv_translate_integrity_l3(const struct rte_flow_item_integrity *mask,
-			       const struct rte_flow_item_integrity *value,
-			       void *headers_m, void *headers_v, bool is_ipv4)
+			       void *headers, bool is_ipv4)
 {
+	/*
+	 * In HWS mode MLX5_ITEM_UPDATE() macro assigns the same pointer to
+	 * both mask and value, therefore either can be used.
+	 * In SWS SW_V mode mask points to item mask and value points to item
+	 * spec. Integrity item value is used only if matching mask is set.
+	 * Use mask reference here to keep SWS functionality.
+	 */
 	if (mask->l3_ok) {
 		/* RTE l3_ok filter aggregates for IPv4 hardware l3_ok and
 		 * ipv4_csum_ok filters.
 		 * Positive RTE l3_ok match requires hardware match on both L3
 		 * hardware integrity bits.
-		 * For negative match, check hardware l3_csum_ok bit only,
-		 * because hardware sets that bit to 0 for all packets
-		 * with bad L3.
+		 * PMD supports positive integrity item semantics only.
 		 */
+		MLX5_SET(fte_match_set_lyr_2_4, headers, l3_ok, 1);
 		if (is_ipv4) {
-			if (value->l3_ok) {
-				MLX5_SET(fte_match_set_lyr_2_4, headers_m,
-					 l3_ok, 1);
-				MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-					 l3_ok, 1);
-			}
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m,
+			MLX5_SET(fte_match_set_lyr_2_4, headers,
 				 ipv4_checksum_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
-				 ipv4_checksum_ok, !!value->l3_ok);
-		} else {
-			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l3_ok, 1);
-			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l3_ok,
-				 value->l3_ok);
 		}
-	}
-	if (mask->ipv4_csum_ok) {
-		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok, 1);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_checksum_ok,
-			 value->ipv4_csum_ok);
+	} else if (is_ipv4 && mask->ipv4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers, ipv4_checksum_ok, 1);
 	}
 }
 
 static void
-set_integrity_bits(void *headers_m, void *headers_v,
-		   const struct rte_flow_item *integrity_item, bool is_l3_ip4)
+set_integrity_bits(void *headers, const struct rte_flow_item *integrity_item,
+		   bool is_l3_ip4, uint32_t key_type)
 {
-	const struct rte_flow_item_integrity *spec = integrity_item->spec;
-	const struct rte_flow_item_integrity *mask = integrity_item->mask;
+	const struct rte_flow_item_integrity *spec;
+	const struct rte_flow_item_integrity *mask;
 
 	/* Integrity bits validation cleared spec pointer */
-	MLX5_ASSERT(spec != NULL);
-	if (!mask)
-		mask = &rte_flow_item_integrity_mask;
-	flow_dv_translate_integrity_l3(mask, spec, headers_m, headers_v,
-				       is_l3_ip4);
-	flow_dv_translate_integrity_l4(mask, spec, headers_m, headers_v);
+	if (MLX5_ITEM_VALID(integrity_item, key_type))
+		return;
+	MLX5_ITEM_UPDATE(integrity_item, key_type, spec, mask,
+			 &rte_flow_item_integrity_mask);
+	flow_dv_translate_integrity_l3(mask, headers, is_l3_ip4);
+	flow_dv_translate_integrity_l4(mask, headers);
 }
 
 static void
-flow_dv_translate_item_integrity_post(void *matcher, void *key,
+flow_dv_translate_item_integrity_post(void *key,
 				      const struct rte_flow_item *integrity_items[2],
-				      uint64_t pattern_flags)
+				      uint64_t pattern_flags, uint32_t key_type)
 {
-	void *headers_m, *headers_v;
+	void *headers;
 	bool is_l3_ip4;
 
 	if (pattern_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY) {
-		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
-					 inner_headers);
-		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+		headers = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
 		is_l3_ip4 = (pattern_flags & MLX5_FLOW_LAYER_INNER_L3_IPV4) !=
 			    0;
-		set_integrity_bits(headers_m, headers_v,
-				   integrity_items[1], is_l3_ip4);
+		set_integrity_bits(headers, integrity_items[1], is_l3_ip4,
+				   key_type);
 	}
 	if (pattern_flags & MLX5_FLOW_ITEM_OUTER_INTEGRITY) {
-		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
-					 outer_headers);
-		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+		headers = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 		is_l3_ip4 = (pattern_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4) !=
 			    0;
-		set_integrity_bits(headers_m, headers_v,
-				   integrity_items[0], is_l3_ip4);
+		set_integrity_bits(headers, integrity_items[0], is_l3_ip4,
+				   key_type);
 	}
 }
 
-static void
+static uint64_t
 flow_dv_translate_item_integrity(const struct rte_flow_item *item,
-				 const struct rte_flow_item *integrity_items[2],
-				 uint64_t *last_item)
+				 struct mlx5_dv_matcher_workspace *wks,
+				 uint64_t key_type)
 {
-	const struct rte_flow_item_integrity *spec = (typeof(spec))item->spec;
+	if ((key_type & MLX5_SET_MATCHER_SW) != 0) {
+		const struct rte_flow_item_integrity
+			*spec = (typeof(spec))item->spec;
 
-	/* integrity bits validation cleared spec pointer */
-	MLX5_ASSERT(spec != NULL);
-	if (spec->level > 1) {
-		integrity_items[1] = item;
-		*last_item |= MLX5_FLOW_ITEM_INNER_INTEGRITY;
+		/* SWS integrity bits validation cleared spec pointer */
+		if (spec->level > 1) {
+			wks->integrity_items[1] = item;
+			wks->last_item |= MLX5_FLOW_ITEM_INNER_INTEGRITY;
+		} else {
+			wks->integrity_items[0] = item;
+			wks->last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
+		}
 	} else {
-		integrity_items[0] = item;
-		*last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
+		/* HWS supports outer integrity only */
+		wks->integrity_items[0] = item;
+		wks->last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
 	}
+	return wks->last_item;
 }
 
 /**
@@ -13401,6 +13390,10 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 		flow_dv_translate_item_meter_color(dev, key, items, key_type);
 		last_item = MLX5_FLOW_ITEM_METER_COLOR;
 		break;
+	case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+		last_item = flow_dv_translate_item_integrity(items,
+							     wks, key_type);
+		break;
 	default:
 		break;
 	}
@@ -13464,6 +13457,12 @@ flow_dv_translate_items_hws(const struct rte_flow_item *items,
 		if (ret)
 			return ret;
 	}
+	if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) {
+		flow_dv_translate_item_integrity_post(key,
+						      wks.integrity_items,
+						      wks.item_flags,
+						      key_type);
+	}
 	if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) {
 		flow_dv_translate_item_vxlan_gpe(key,
 						 wks.tunnel_item,
@@ -13544,7 +13543,6 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 			mlx5_flow_get_thread_workspace())->rss_desc,
 	};
 	struct mlx5_dv_matcher_workspace wks_m = wks;
-	const struct rte_flow_item *integrity_items[2] = {NULL, NULL};
 	int ret = 0;
 	int tunnel;
 
@@ -13555,10 +13553,6 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 						  NULL, "item not supported");
 		tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL);
 		switch (items->type) {
-		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
-			flow_dv_translate_item_integrity(items, integrity_items,
-							 &wks.last_item);
-			break;
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 			flow_dv_translate_item_aso_ct(dev, match_mask,
 						      match_value, items);
@@ -13601,9 +13595,14 @@ flow_dv_translate_items_sws(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 	if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) {
-		flow_dv_translate_item_integrity_post(match_mask, match_value,
-						      integrity_items,
-						      wks.item_flags);
+		flow_dv_translate_item_integrity_post(match_mask,
+						      wks_m.integrity_items,
+						      wks_m.item_flags,
+						      MLX5_SET_MATCHER_SW_M);
+		flow_dv_translate_item_integrity_post(match_value,
+						      wks.integrity_items,
+						      wks.item_flags,
+						      MLX5_SET_MATCHER_SW_V);
 	}
 	if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) {
 		flow_dv_translate_item_vxlan_gpe(match_mask,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 9f70637fcf..2b5eab6659 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4656,6 +4656,14 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ITEM_TYPE_ICMP6:
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			/*
+			 * Integrity flow item validation requires access to
+			 * both item mask and spec.
+			 * Current HWS model allows item mask in pattern
+			 * template and item spec in flow rule.
+			 */
+			break;
 		case RTE_FLOW_ITEM_TYPE_END:
 			items_end = true;
 			break;
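
For context, "positive semantics" above means the PMD only matches packets
whose integrity checks PASSED: wherever the item mask sets a bit, the spec
must set it to 1 as well. A minimal application-side sketch of such a match
follows (hypothetical code, not part of this patch; the item layout is
struct rte_flow_item_integrity from rte_flow.h):

	#include <rte_flow.h>

	/* Match only packets whose outer L3 and L4 checks passed. */
	static const struct rte_flow_item_integrity integ_spec = {
		.level = 0,	/* outermost encapsulation level */
		.l3_ok = 1,
		.l4_ok = 1,
	};
	static const struct rte_flow_item_integrity integ_mask = {
		.l3_ok = 1,	/* match the l3_ok and l4_ok bits only */
		.l4_ok = 1,
	};
	static const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{
			.type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
			.spec = &integ_spec,
			.mask = &integ_mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

With this patch such a pattern can also be offloaded through HWS in group 0.
A spec bit of 0 under a set mask bit (a "negative" match on broken packets)
remains unsupported.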
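The flow_hw_pattern_validate() comment above reflects the template API
split: the integrity item mask is supplied with the pattern template, while
the spec arrives with each flow rule. A sketch of that usage under the
rte_flow template API (hypothetical application code; template-table and
actions-template setup is omitted):

	/* Pattern template: carries the integrity item mask only. */
	static const struct rte_flow_item_integrity tmpl_integ_mask = {
		.l3_ok = 1,
		.l4_ok = 1,
	};
	static const struct rte_flow_item template_items[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{
			.type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
			.mask = &tmpl_integ_mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	static const struct rte_flow_pattern_template_attr pt_attr = {
		.ingress = 1,
	};

	static struct rte_flow_pattern_template *
	integrity_template_create(uint16_t port_id, struct rte_flow_error *err)
	{
		return rte_flow_pattern_template_create(port_id, &pt_attr,
							template_items, err);
	}

	/* Rule-time pattern: carries the integrity item spec only. It would
	 * be passed to rte_flow_async_create() against a template table
	 * created in group 0.
	 */
	static const struct rte_flow_item_integrity rule_integ_spec = {
		.l3_ok = 1,
		.l4_ok = 1,
	};
	static const struct rte_flow_item rule_items[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{
			.type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
			.spec = &rule_integ_spec,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};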