From patchwork Tue Oct 26 09:25:42 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 102886
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
To: ,
CC: , , Viacheslav Ovsiienko
Date: Tue, 26 Oct 2021 12:25:42 +0300
Message-ID: <20211026092543.13224-1-getelson@nvidia.com>
X-Mailer: git-send-email 2.33.1
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH 1/2] net/mlx5: fix integrity matching for inner and outer headers
List-Id: DPDK patches and discussions

The MLX5 PMD can match on integrity bits for the inner and outer headers
in a single flow. That means a single flow rule can reference both the
inner and the outer integrity bits. This is implemented by adding two
integrity items to a rule: one item for the outer integrity bits and
another for the inner ones. The integrity item `level` parameter
specifies which part of the packet is targeted.

The current PMD treated the integrity items for the outer and inner
headers as the same. This patch separates the PMD verifications for the
inner and outer integrity items.
Fixes: 79f8952783d0 ("net/mlx5: support integrity flow item")

Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.h    |  7 ++++---
 drivers/net/mlx5/mlx5_flow_dv.c | 29 ++++++++++++++++++++++-------
 2 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 4a16f30fb7..41e24deec5 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -170,11 +170,12 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_LAYER_GENEVE_OPT (UINT64_C(1) << 32)
 #define MLX5_FLOW_LAYER_GTP_PSC (UINT64_C(1) << 33)
 
-/* INTEGRITY item bit */
-#define MLX5_FLOW_ITEM_INTEGRITY (UINT64_C(1) << 34)
+/* INTEGRITY item bits */
+#define MLX5_FLOW_ITEM_OUTER_INTEGRITY (UINT64_C(1) << 34)
+#define MLX5_FLOW_ITEM_INNER_INTEGRITY (UINT64_C(1) << 35)
 
 /* Conntrack item. */
-#define MLX5_FLOW_LAYER_ASO_CT (UINT64_C(1) << 35)
+#define MLX5_FLOW_LAYER_ASO_CT (UINT64_C(1) << 36)
 
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 9cba22ca2d..c27c2df5c4 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6679,6 +6679,7 @@ static int
 flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
 				const struct rte_flow_item *rule_items,
 				const struct rte_flow_item *integrity_item,
+				uint64_t item_flags, uint64_t *last_item,
 				struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -6694,6 +6695,11 @@ flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ITEM,
 					  integrity_item,
 					  "packet integrity integrity_item not supported");
+	if (!spec)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  integrity_item,
+					  "no spec for integrity item");
 	if (!mask)
 		mask = &rte_flow_item_integrity_mask;
 	if (!mlx5_validate_integrity_item(mask))
 		return rte_flow_error_set(error, ENOTSUP,
@@ -6703,6 +6709,11 @@ flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
 					  "unsupported integrity filter");
 	tunnel_item = mlx5_flow_find_tunnel_item(rule_items);
 	if (spec->level > 1) {
+		if (item_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY)
+			return rte_flow_error_set
+				(error, ENOTSUP,
+				 RTE_FLOW_ERROR_TYPE_ITEM,
+				 NULL, "multiple inner integrity items not supported");
 		if (!tunnel_item)
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
@@ -6711,6 +6722,11 @@ flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
 		item = tunnel_item;
 		end_item = mlx5_find_end_item(tunnel_item);
 	} else {
+		if (item_flags & MLX5_FLOW_ITEM_OUTER_INTEGRITY)
+			return rte_flow_error_set
+				(error, ENOTSUP,
+				 RTE_FLOW_ERROR_TYPE_ITEM,
+				 NULL, "multiple outer integrity items not supported");
 		end_item = tunnel_item ? tunnel_item :
 			   mlx5_find_end_item(integrity_item);
 	}
@@ -6730,6 +6746,8 @@ flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
 					  integrity_item,
 					  "missing L4 protocol");
 	}
+	*last_item |= spec->level > 1 ? MLX5_FLOW_ITEM_INNER_INTEGRITY :
+		      MLX5_FLOW_ITEM_OUTER_INTEGRITY;
 	return 0;
 }
 
@@ -7152,16 +7170,13 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
 		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
-			if (item_flags & MLX5_FLOW_ITEM_INTEGRITY)
-				return rte_flow_error_set
-					(error, ENOTSUP,
-					 RTE_FLOW_ERROR_TYPE_ITEM,
-					 NULL, "multiple integrity items not supported");
 			ret = flow_dv_validate_item_integrity(dev, rule_items,
-							      items, error);
+							      items,
+							      item_flags,
+							      &last_item,
+							      error);
 			if (ret < 0)
 				return ret;
-			last_item = MLX5_FLOW_ITEM_INTEGRITY;
 			break;
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 			ret = flow_dv_validate_item_aso_ct(dev, items,

From patchwork Tue Oct 26 09:25:43 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 102887
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
To: ,
CC: , , Viacheslav Ovsiienko
Date: Tue, 26 Oct 2021 12:25:43 +0300
Message-ID: <20211026092543.13224-2-getelson@nvidia.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211026092543.13224-1-getelson@nvidia.com>
References: <20211026092543.13224-1-getelson@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH 2/2] net/mlx5: fix integrity flow item validation and translation
List-Id: DPDK patches and discussions

Integrity item validation and translation must verify that the
integrity item bits match the L3 and L4 items in the flow rule pattern.

When the integrity item is positioned before the L3 header item, that
verification must be split into two stages. The first stage detects the
integrity flow item and performs the initializations needed for the
second stage. The second stage runs after the PMD has processed all
flow items in the rule pattern. The PMD accumulates information about
the pattern items as it scans them; once the whole pattern has been
processed, it applies that data to complete the integrity item
validation and translation.
Fixes: 79f8952783d0 ("net/mlx5: support integrity flow item")

Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.h    |   2 +
 drivers/net/mlx5/mlx5_flow_dv.c | 293 +++++++++++++-------------------
 2 files changed, 119 insertions(+), 176 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 41e24deec5..5a07afa8df 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -173,6 +173,8 @@ enum mlx5_feature_name {
 /* INTEGRITY item bits */
 #define MLX5_FLOW_ITEM_OUTER_INTEGRITY (UINT64_C(1) << 34)
 #define MLX5_FLOW_ITEM_INNER_INTEGRITY (UINT64_C(1) << 35)
+#define MLX5_FLOW_ITEM_INTEGRITY \
+	(MLX5_FLOW_ITEM_OUTER_INTEGRITY | MLX5_FLOW_ITEM_INNER_INTEGRITY)
 
 /* Conntrack item. */
 #define MLX5_FLOW_LAYER_ASO_CT (UINT64_C(1) << 36)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index c27c2df5c4..a2bcaf0f1c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -287,31 +287,6 @@ struct field_modify_info modify_tcp[] = {
 	{0, 0, 0},
 };
 
-static const struct rte_flow_item *
-mlx5_flow_find_tunnel_item(const struct rte_flow_item *item)
-{
-	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
-		switch (item->type) {
-		default:
-			break;
-		case RTE_FLOW_ITEM_TYPE_VXLAN:
-		case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
-		case RTE_FLOW_ITEM_TYPE_GRE:
-		case RTE_FLOW_ITEM_TYPE_MPLS:
-		case RTE_FLOW_ITEM_TYPE_NVGRE:
-		case RTE_FLOW_ITEM_TYPE_GENEVE:
-			return item;
-		case RTE_FLOW_ITEM_TYPE_IPV4:
-		case RTE_FLOW_ITEM_TYPE_IPV6:
-			if (item[1].type == RTE_FLOW_ITEM_TYPE_IPV4 ||
-			    item[1].type == RTE_FLOW_ITEM_TYPE_IPV6)
-				return item;
-			break;
-		}
-	}
-	return NULL;
-}
-
 static void
 mlx5_flow_tunnel_ip_check(const struct rte_flow_item *item __rte_unused,
 			  uint8_t next_protocol, uint64_t *item_flags,
@@ -6581,114 +6556,74 @@ flow_dv_validate_attributes(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static uint16_t
-mlx5_flow_locate_proto_l3(const struct rte_flow_item **head,
-			  const struct rte_flow_item *end)
+static int
+validate_integrity_bits(const struct rte_flow_item_integrity *mask,
+			int64_t pattern_flags, uint64_t l3_flags,
+			uint64_t l4_flags, uint64_t ip4_flag,
+			struct rte_flow_error *error)
 {
-	const struct rte_flow_item *item = *head;
-	uint16_t l3_protocol;
+	if (mask->l3_ok && !(pattern_flags & l3_flags))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  NULL, "missing L3 protocol");
+
+	if (mask->ipv4_csum_ok && !(pattern_flags & ip4_flag))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  NULL, "missing IPv4 protocol");
+
+	if ((mask->l4_ok || mask->l4_csum_ok) && !(pattern_flags & l4_flags))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  NULL, "missing L4 protocol");
 
-	for (; item != end; item++) {
-		switch (item->type) {
-		default:
-			break;
-		case RTE_FLOW_ITEM_TYPE_IPV4:
-			l3_protocol = RTE_ETHER_TYPE_IPV4;
-			goto l3_ok;
-		case RTE_FLOW_ITEM_TYPE_IPV6:
-			l3_protocol = RTE_ETHER_TYPE_IPV6;
-			goto l3_ok;
-		case RTE_FLOW_ITEM_TYPE_ETH:
-			if (item->mask && item->spec) {
-				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_eth,
-							    type, item,
-							    l3_protocol);
-				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
-				    l3_protocol == RTE_ETHER_TYPE_IPV6)
-					goto l3_ok;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_VLAN:
-			if (item->mask && item->spec) {
-				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_vlan,
-							    inner_type, item,
-							    l3_protocol);
-				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
-				    l3_protocol == RTE_ETHER_TYPE_IPV6)
-					goto l3_ok;
-			}
-			break;
-		}
-	}
 	return 0;
-l3_ok:
-	*head = item;
-	return l3_protocol;
 }
 
-static uint8_t
-mlx5_flow_locate_proto_l4(const struct rte_flow_item **head,
-			  const struct rte_flow_item *end)
+static int
+flow_dv_validate_item_integrity_post(const struct
+				     rte_flow_item *integrity_items[2],
+				     int64_t pattern_flags,
+				     struct rte_flow_error *error)
 {
-	const struct rte_flow_item *item = *head;
-	uint8_t l4_protocol;
+	const struct rte_flow_item_integrity *mask;
+	int ret;
 
-	for (; item != end; item++) {
-		switch (item->type) {
-		default:
-			break;
-		case RTE_FLOW_ITEM_TYPE_TCP:
-			l4_protocol = IPPROTO_TCP;
-			goto l4_ok;
-		case RTE_FLOW_ITEM_TYPE_UDP:
-			l4_protocol = IPPROTO_UDP;
-			goto l4_ok;
-		case RTE_FLOW_ITEM_TYPE_IPV4:
-			if (item->mask && item->spec) {
-				const struct rte_flow_item_ipv4 *mask, *spec;
-
-				mask = (typeof(mask))item->mask;
-				spec = (typeof(spec))item->spec;
-				l4_protocol = mask->hdr.next_proto_id &
-					      spec->hdr.next_proto_id;
-				if (l4_protocol == IPPROTO_TCP ||
-				    l4_protocol == IPPROTO_UDP)
-					goto l4_ok;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_IPV6:
-			if (item->mask && item->spec) {
-				const struct rte_flow_item_ipv6 *mask, *spec;
-				mask = (typeof(mask))item->mask;
-				spec = (typeof(spec))item->spec;
-				l4_protocol = mask->hdr.proto & spec->hdr.proto;
-				if (l4_protocol == IPPROTO_TCP ||
-				    l4_protocol == IPPROTO_UDP)
-					goto l4_ok;
-			}
-			break;
-		}
+	if (pattern_flags & MLX5_FLOW_ITEM_OUTER_INTEGRITY) {
+		mask = (typeof(mask))integrity_items[0]->mask;
+		ret = validate_integrity_bits(mask, pattern_flags,
+					      MLX5_FLOW_LAYER_OUTER_L3,
+					      MLX5_FLOW_LAYER_OUTER_L4,
+					      MLX5_FLOW_LAYER_OUTER_L3_IPV4,
+					      error);
+		if (ret)
+			return ret;
+	}
+	if (pattern_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY) {
+		mask = (typeof(mask))integrity_items[1]->mask;
+		ret = validate_integrity_bits(mask, pattern_flags,
+					      MLX5_FLOW_LAYER_INNER_L3,
+					      MLX5_FLOW_LAYER_INNER_L4,
+					      MLX5_FLOW_LAYER_INNER_L3_IPV4,
+					      error);
+		if (ret)
+			return ret;
 	}
 	return 0;
-l4_ok:
-	*head = item;
-	return l4_protocol;
 }
 
 static int
 flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
-				const struct rte_flow_item *rule_items,
 				const struct rte_flow_item *integrity_item,
-				uint64_t item_flags, uint64_t *last_item,
+				uint64_t pattern_flags, uint64_t *last_item,
+				const struct rte_flow_item *integrity_items[2],
 				struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	const struct rte_flow_item *tunnel_item, *end_item, *item = rule_items;
 	const struct rte_flow_item_integrity *mask = (typeof(mask))
 						     integrity_item->mask;
 	const struct rte_flow_item_integrity *spec = (typeof(spec))
 						     integrity_item->spec;
-	uint32_t protocol;
 
 	if (!priv->config.hca_attr.pkt_integrity_match)
 		return rte_flow_error_set(error, ENOTSUP,
@@ -6707,47 +6642,23 @@ flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ITEM,
 					  integrity_item,
 					  "unsupported integrity filter");
-	tunnel_item = mlx5_flow_find_tunnel_item(rule_items);
 	if (spec->level > 1) {
-		if (item_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY)
+		if (pattern_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY)
 			return rte_flow_error_set
 				(error, ENOTSUP,
 				 RTE_FLOW_ERROR_TYPE_ITEM,
 				 NULL, "multiple inner integrity items not supported");
-		if (!tunnel_item)
-			return rte_flow_error_set(error, ENOTSUP,
-						  RTE_FLOW_ERROR_TYPE_ITEM,
-						  integrity_item,
-						  "missing tunnel item");
-		item = tunnel_item;
-		end_item = mlx5_find_end_item(tunnel_item);
+		integrity_items[1] = integrity_item;
+		*last_item |= MLX5_FLOW_ITEM_INNER_INTEGRITY;
 	} else {
-		if (item_flags & MLX5_FLOW_ITEM_OUTER_INTEGRITY)
+		if (pattern_flags & MLX5_FLOW_ITEM_OUTER_INTEGRITY)
 			return rte_flow_error_set
 				(error, ENOTSUP,
 				 RTE_FLOW_ERROR_TYPE_ITEM,
 				 NULL, "multiple outer integrity items not supported");
-		end_item = tunnel_item ? tunnel_item :
-			   mlx5_find_end_item(integrity_item);
+		integrity_items[0] = integrity_item;
+		*last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
 	}
-	if (mask->l3_ok || mask->ipv4_csum_ok) {
-		protocol = mlx5_flow_locate_proto_l3(&item, end_item);
-		if (!protocol)
-			return rte_flow_error_set(error, EINVAL,
-						  RTE_FLOW_ERROR_TYPE_ITEM,
-						  integrity_item,
-						  "missing L3 protocol");
-	}
-	if (mask->l4_ok || mask->l4_csum_ok) {
-		protocol = mlx5_flow_locate_proto_l4(&item, end_item);
-		if (!protocol)
-			return rte_flow_error_set(error, EINVAL,
-						  RTE_FLOW_ERROR_TYPE_ITEM,
-						  integrity_item,
-						  "missing L4 protocol");
-	}
-	*last_item |= spec->level > 1 ? MLX5_FLOW_ITEM_INNER_INTEGRITY :
-		      MLX5_FLOW_ITEM_OUTER_INTEGRITY;
 	return 0;
 }
 
@@ -6843,7 +6754,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		.std_tbl_fix = true,
 	};
 	const struct rte_eth_hairpin_conf *conf;
-	const struct rte_flow_item *rule_items = items;
+	const struct rte_flow_item *integrity_items[2] = {NULL, NULL};
 	const struct rte_flow_item *port_id_item = NULL;
 	bool def_policy = false;
 	uint16_t udp_dport = 0;
@@ -7170,10 +7081,10 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
 		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
-			ret = flow_dv_validate_item_integrity(dev, rule_items,
-							      items,
+			ret = flow_dv_validate_item_integrity(dev, items,
 							      item_flags,
 							      &last_item,
+							      integrity_items,
 							      error);
 			if (ret < 0)
 				return ret;
@@ -7196,6 +7107,12 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		}
 		item_flags |= last_item;
 	}
+	if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) {
+		ret = flow_dv_validate_item_integrity_post(integrity_items,
+							   item_flags, error);
+		if (ret)
+			return ret;
+	}
 	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
 		int type = actions->type;
 		bool shared_count = false;
@@ -12083,8 +12000,7 @@ flow_dv_translate_integrity_l4(const struct rte_flow_item_integrity *mask,
 static void
 flow_dv_translate_integrity_l3(const struct rte_flow_item_integrity *mask,
 			       const struct rte_flow_item_integrity *value,
-			       void *headers_m, void *headers_v,
-			       bool is_ipv4)
+			       void *headers_m, void *headers_v, bool is_ipv4)
 {
 	if (mask->l3_ok) {
 		/* application l3_ok filter aggregates all hardware l3 filters
@@ -12115,45 +12031,66 @@ flow_dv_translate_integrity_l3(const struct rte_flow_item_integrity *mask,
 }
 
 static void
-flow_dv_translate_item_integrity(void *matcher, void *key,
-				 const struct rte_flow_item *head_item,
-				 const struct rte_flow_item *integrity_item)
+set_integrity_bits(void *headers_m, void *headers_v,
+		   const struct rte_flow_item *integrity_item, bool is_l3_ip4)
 {
+	const struct rte_flow_item_integrity *spec = integrity_item->spec;
 	const struct rte_flow_item_integrity *mask = integrity_item->mask;
-	const struct rte_flow_item_integrity *value = integrity_item->spec;
-	const struct rte_flow_item *tunnel_item, *end_item, *item;
-	void *headers_m;
-	void *headers_v;
-	uint32_t l3_protocol;
 
-	if (!value)
-		return;
+	/* Integrity bits validation cleared spec pointer */
+	MLX5_ASSERT(spec != NULL);
 	if (!mask)
 		mask = &rte_flow_item_integrity_mask;
-	if (value->level > 1) {
+	flow_dv_translate_integrity_l3(mask, spec, headers_m, headers_v,
+				       is_l3_ip4);
+	flow_dv_translate_integrity_l4(mask, spec, headers_m, headers_v);
+}
+
+static void
+flow_dv_translate_item_integrity_post(void *matcher, void *key,
+				      const
+				      struct rte_flow_item *integrity_items[2],
+				      uint64_t pattern_flags)
+{
+	void *headers_m, *headers_v;
+	bool is_l3_ip4;
+
+	if (pattern_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY) {
 		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
 					 inner_headers);
 		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
-	} else {
+		is_l3_ip4 = (pattern_flags & MLX5_FLOW_LAYER_INNER_L3_IPV4) !=
+			    0;
+		set_integrity_bits(headers_m, headers_v,
+				   integrity_items[1], is_l3_ip4);
+	}
+	if (pattern_flags & MLX5_FLOW_ITEM_OUTER_INTEGRITY) {
 		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
 					 outer_headers);
 		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+		is_l3_ip4 = (pattern_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4) !=
+			    0;
+		set_integrity_bits(headers_m, headers_v,
+				   integrity_items[0], is_l3_ip4);
 	}
-	tunnel_item = mlx5_flow_find_tunnel_item(head_item);
-	if (value->level > 1) {
-		/* tunnel item was verified during the item validation */
-		item = tunnel_item;
-		end_item = mlx5_find_end_item(tunnel_item);
+}
+
+static void
+flow_dv_translate_item_integrity(const struct rte_flow_item *item,
				 const struct rte_flow_item *integrity_items[2],
+				 uint64_t *last_item)
+{
+	const struct rte_flow_item_integrity *spec = (typeof(spec))item->spec;
+
+	/* integrity bits validation cleared spec pointer */
+	MLX5_ASSERT(spec != NULL);
+	if (spec->level > 1) {
+		integrity_items[1] = item;
+		*last_item |= MLX5_FLOW_ITEM_INNER_INTEGRITY;
 	} else {
-		item = head_item;
-		end_item = tunnel_item ? tunnel_item :
-			   mlx5_find_end_item(integrity_item);
+		integrity_items[0] = item;
+		*last_item |= MLX5_FLOW_ITEM_OUTER_INTEGRITY;
 	}
-	l3_protocol = mask->l3_ok ?
-		      mlx5_flow_locate_proto_l3(&item, end_item) : 0;
-	flow_dv_translate_integrity_l3(mask, value, headers_m, headers_v,
-				       l3_protocol == RTE_ETHER_TYPE_IPV4);
-	flow_dv_translate_integrity_l4(mask, value, headers_m, headers_v);
 }
 
 /**
@@ -12569,7 +12506,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 		 (1 << MLX5_SCALE_FLOW_GROUP_BIT),
 		.std_tbl_fix = true,
 	};
-	const struct rte_flow_item *head_item = items;
+	const struct rte_flow_item *integrity_items[2] = {NULL, NULL};
 
 	if (!wks)
 		return rte_flow_error_set(error, ENOMEM,
@@ -13462,9 +13399,8 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
 		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
-			flow_dv_translate_item_integrity(match_mask,
-							 match_value,
-							 head_item, items);
+			flow_dv_translate_item_integrity(items, integrity_items,
+							 &last_item);
 			break;
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
 			flow_dv_translate_item_aso_ct(dev, match_mask,
@@ -13488,6 +13424,11 @@ flow_dv_translate(struct rte_eth_dev *dev,
 					   match_value, NULL, attr))
 			return -rte_errno;
 	}
+	if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) {
+		flow_dv_translate_item_integrity_post(match_mask, match_value,
+						      integrity_items,
+						      item_flags);
+	}
 #ifdef RTE_LIBRTE_MLX5_DEBUG
 	MLX5_ASSERT(!flow_dv_check_valid_spec(matcher.mask.buf,
 					      dev_flow->dv.value.buf));