From patchwork Mon Apr 17 09:25:37 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 126178
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rongwei Liu
To: , , , ,
Subject: [PATCH v1 5/8] net/mlx5: generate srv6 modify header resource
Date: Mon, 17 Apr 2023 12:25:37 +0300
Message-ID: <20230417092540.2617450-6-rongweil@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230417092540.2617450-1-rongweil@nvidia.com>
References: <20230417092540.2617450-1-rongweil@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

Both the checksum and the IPv6 next_hdr need to be updated when adding
an srv6 header to, or removing it from, IPv6 packets.

1. Add srv6
   ste1 (push buffer with next_hdr 0) -->
   ste2 (IPv6 next_hdr to 0x2b) -->
   ste3 (load next hop IPv6 address, and srv6 next_hdr restore)

2. Remove srv6
   ste1 (set srv6 next_hdr 0 and save original) -->
   ste2 (load final IPv6 destination, restore srv6 next_hdr) -->
   ste3 (remove srv6 and copy srv6 next_hdr to ipv6 next_hdr)

Add helpers to generate the two modify header resources for the
add/remove actions. The remove-srv6 resource should be shared globally,
while the add-srv6 resource can be shared or unique per flow rule.

Signed-off-by: Rongwei Liu
---
 drivers/net/mlx5/mlx5.h         |  29 +++
 drivers/net/mlx5/mlx5_flow_dv.c | 386 ++++++++++++++++++++++++++++++++
 2 files changed, 415 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 3fbec4db9e..2cb6364957 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2314,4 +2314,33 @@ void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
 int mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev);
 void mlx5_free_srh_flex_parser(struct rte_eth_dev *dev);
+
+int
+flow_dv_generate_ipv6_routing_pop_mhdr1(struct rte_eth_dev *dev,
+					const struct rte_flow_attr *attr,
+					struct mlx5_modification_cmd *cmd,
+					uint32_t cmd_num);
+
+int
+flow_dv_generate_ipv6_routing_pop_mhdr2(struct rte_eth_dev *dev,
+					const struct rte_flow_attr *attr,
+					struct mlx5_modification_cmd *cmd,
+					uint32_t cmd_num);
+
+int
+flow_dv_generate_ipv6_routing_push_mhdr1(struct rte_eth_dev *dev,
+					 const struct rte_flow_attr *attr,
+					 struct mlx5_modification_cmd *cmd,
+					 uint32_t cmd_num);
+
+int
+flow_dv_generate_ipv6_routing_push_mhdr2(struct rte_eth_dev *dev,
+					 const struct rte_flow_attr *attr,
+					 struct mlx5_modification_cmd *cmd,
+					 uint32_t cmd_num, uint8_t *buf);
+
+int
+flow_dv_ipv6_routing_pop_mhdr_cmd(struct rte_eth_dev *dev, uint8_t *mh_data,
+				  uint8_t *anchor_id);
+
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f136f43b0a..4a1f61eeb7 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2128,6 +2128,392 @@ flow_dv_convert_action_modify_field
 					field, dcopy, resource, type, error);
 }
 
+/**
+ * Generate the 1st modify header data for IPv6 routing pop.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the rte_flow table attribute.
+ * @param[in,out] cmd
+ *   Pointer to modify header command buffer.
+ * @param[in] cmd_num
+ *   Modify header command number.
+ *
+ * @return
+ *   Positive on success, a negative value otherwise.
+ */
+int
+flow_dv_generate_ipv6_routing_pop_mhdr1(struct rte_eth_dev *dev,
+					const struct rte_flow_attr *attr,
+					struct mlx5_modification_cmd *cmd,
+					uint32_t cmd_num)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_flow_action_modify_data data;
+	struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = {
+						{0, 0, MLX5_MODI_OUT_NONE} };
+	struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = {
+						{0, 0, MLX5_MODI_OUT_NONE} };
+	uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 };
+	struct rte_flow_item item = {
+		.spec = NULL,
+		.mask = NULL
+	};
+	union {
+		struct mlx5_flow_dv_modify_hdr_resource resource;
+		uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) +
+			     sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD];
+	} dummy;
+	struct mlx5_flow_dv_modify_hdr_resource *resource;
+	uint32_t value = 0;
+	struct rte_flow_error error;
+
+#define IPV6_ROUTING_POP_MHDR_NUM1 3
+	if (cmd_num < IPV6_ROUTING_POP_MHDR_NUM1) {
+		DRV_LOG(ERR, "Not enough modify header buffer");
+		return -1;
+	}
+	memset(&data, 0, sizeof(data));
+	memset(&dummy, 0, sizeof(dummy));
+	/* save next_hdr to seg_left. */
+	data.field = RTE_FLOW_FIELD_FLEX_ITEM;
+	data.flex_handle = (struct rte_flow_item_flex_handle *)
+			   (uintptr_t)&priv->sh->srh_flex_parser.flex;
+	data.offset = offsetof(struct rte_ipv6_routing_ext, segments_left) * CHAR_BIT;
+	/* For COPY fill the destination field (dcopy) without mask. */
+	mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 8, dev, attr, &error);
+	/* Then construct the source field (field) with mask. */
+	data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT;
+	mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error);
+	item.mask = &mask;
+	resource = &dummy.resource;
+	if (flow_dv_convert_modify_action(&item, field, dcopy, resource,
+					  MLX5_MODIFICATION_TYPE_COPY, &error)) {
+		DRV_LOG(ERR, "Generate save srv6 next header modify header failed");
+		return -1;
+	}
+	MLX5_ASSERT(resource->actions_num == 1);
+	/* add nop. */
+	resource->actions[1].data0 = 0;
+	resource->actions[1].action_type = MLX5_MODIFICATION_TYPE_NOP;
+	resource->actions[1].data0 = RTE_BE32(resource->actions[1].data0);
+	resource->actions[1].data1 = 0;
+	resource->actions_num += 1;
+	/* clear srv6 next_hdr. */
+	memset(&field, 0, sizeof(field));
+	memset(&dcopy, 0, sizeof(dcopy));
+	memset(&mask, 0, sizeof(mask));
+	mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error);
+	item.spec = (void *)(uintptr_t)&value;
+	if (flow_dv_convert_modify_action(&item, field, dcopy, resource,
+					  MLX5_MODIFICATION_TYPE_SET, &error)) {
+		DRV_LOG(ERR, "Generate clear srv6 next header modify header failed");
+		return -1;
+	}
+	MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_POP_MHDR_NUM1);
+#undef IPV6_ROUTING_POP_MHDR_NUM1
+	memcpy(cmd, resource->actions,
+	       resource->actions_num * sizeof(struct mlx5_modification_cmd));
+	return resource->actions_num;
+}
+
+/**
+ * Generate the 2nd modify header data for IPv6 routing pop.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the rte_flow table attribute.
+ * @param[in,out] cmd
+ *   Pointer to modify header command buffer.
+ * @param[in] cmd_num
+ *   Modify header command number.
+ *
+ * @return
+ *   Positive on success, a negative value otherwise.
+ */
+int
+flow_dv_generate_ipv6_routing_pop_mhdr2(struct rte_eth_dev *dev,
+					const struct rte_flow_attr *attr,
+					struct mlx5_modification_cmd *cmd,
+					uint32_t cmd_num)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_flow_action_modify_data data;
+	struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = {
+						{0, 0, MLX5_MODI_OUT_NONE} };
+	struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = {
+						{0, 0, MLX5_MODI_OUT_NONE} };
+	uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 };
+	struct rte_flow_item item = {
+		.spec = NULL,
+		.mask = NULL
+	};
+	union {
+		struct mlx5_flow_dv_modify_hdr_resource resource;
+		uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) +
+			     sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD];
+	} dummy;
+	struct mlx5_flow_dv_modify_hdr_resource *resource;
+	struct rte_flow_error error;
+
+#define IPV6_ROUTING_POP_MHDR_NUM2 5
+	if (cmd_num < IPV6_ROUTING_POP_MHDR_NUM2) {
+		DRV_LOG(ERR, "Not enough modify header buffer");
+		return -1;
+	}
+	memset(&data, 0, sizeof(data));
+	memset(&dummy, 0, sizeof(dummy));
+	resource = &dummy.resource;
+	item.mask = &mask;
+	data.field = RTE_FLOW_FIELD_IPV6_DST;
+	data.level = 0;
+	data.offset = 0;
+	mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 128, dev, attr, &error);
+	data.field = RTE_FLOW_FIELD_FLEX_ITEM;
+	data.offset = 32;
+	data.flex_handle = (struct rte_flow_item_flex_handle *)
+			   (uintptr_t)&priv->sh->srh_flex_parser.flex;
+	mlx5_flow_field_id_to_modify_info(&data, field, mask, 128, dev, attr, &error);
+	if (flow_dv_convert_modify_action(&item, field, dcopy, resource,
+					  MLX5_MODIFICATION_TYPE_COPY, &error)) {
+		DRV_LOG(ERR, "Generate load final IPv6 address modify header failed");
+		return -1;
+	}
+	MLX5_ASSERT(resource->actions_num == 4);
+	memset(&field, 0, sizeof(field));
+	memset(&dcopy, 0, sizeof(dcopy));
+	memset(&mask, 0, sizeof(mask));
+	/* copy seg_left to srv6.next_hdr */
+	data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT;
+	mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 8, dev, attr, &error);
+	data.offset = offsetof(struct rte_ipv6_routing_ext, segments_left) * CHAR_BIT;
+	mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error);
+	if (flow_dv_convert_modify_action(&item, field, dcopy, resource,
+					  MLX5_MODIFICATION_TYPE_COPY, &error)) {
+		DRV_LOG(ERR, "Generate restore srv6 next header modify header failed");
+		return -1;
+	}
+	MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_POP_MHDR_NUM2);
+#undef IPV6_ROUTING_POP_MHDR_NUM2
+	memcpy(cmd, resource->actions,
+	       resource->actions_num * sizeof(struct mlx5_modification_cmd));
+	return resource->actions_num;
+}
+
+/**
+ * Generate the 1st modify header data for IPv6 routing push.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the rte_flow table attribute.
+ * @param[in,out] cmd
+ *   Pointer to modify header command buffer.
+ * @param[in] cmd_num
+ *   Modify header command number.
+ *
+ * @return
+ *   Positive on success, a negative value otherwise.
+ */
+int
+flow_dv_generate_ipv6_routing_push_mhdr1(struct rte_eth_dev *dev,
+					 const struct rte_flow_attr *attr,
+					 struct mlx5_modification_cmd *cmd,
+					 uint32_t cmd_num)
+{
+	struct rte_flow_action_modify_data data;
+	struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = {
+						{0, 0, MLX5_MODI_OUT_NONE} };
+	struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = {
+						{0, 0, MLX5_MODI_OUT_NONE} };
+	uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 };
+	struct rte_flow_item item = {
+		.spec = NULL,
+		.mask = NULL
+	};
+	union {
+		struct mlx5_flow_dv_modify_hdr_resource resource;
+		uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) +
+			     sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD];
+	} dummy;
+	struct mlx5_flow_dv_modify_hdr_resource *resource;
+	struct rte_flow_error error;
+	uint8_t value;
+
+#define IPV6_ROUTING_PUSH_MHDR_NUM1 1
+	if (cmd_num < IPV6_ROUTING_PUSH_MHDR_NUM1) {
+		DRV_LOG(ERR, "Not enough modify header buffer");
+		return -1;
+	}
+	memset(&data, 0, sizeof(data));
+	memset(&dummy, 0, sizeof(dummy));
+	resource = &dummy.resource;
+	/* Set IPv6 proto to 0x2b. */
+	data.field = RTE_FLOW_FIELD_IPV6_PROTO;
+	mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error);
+	resource = &dummy.resource;
+	item.mask = &mask;
+	value = IPPROTO_ROUTING;
+	item.spec = (void *)(uintptr_t)&value;
+	if (flow_dv_convert_modify_action(&item, field, dcopy, resource,
+					  MLX5_MODIFICATION_TYPE_SET, &error)) {
+		DRV_LOG(ERR, "Generate modify IPv6 protocol to 0x2b failed");
+		return -1;
+	}
+	MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_PUSH_MHDR_NUM1);
+#undef IPV6_ROUTING_PUSH_MHDR_NUM1
+	memcpy(cmd, resource->actions,
+	       resource->actions_num * sizeof(struct mlx5_modification_cmd));
+	return resource->actions_num;
+}
+
+/**
+ * Generate the 2nd modify header data for IPv6 routing push.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] attr
+ *   Pointer to the rte_flow table attribute.
+ * @param[in,out] cmd
+ *   Pointer to modify header command buffer.
+ * @param[in] cmd_num
+ *   Modify header command number.
+ *
+ * @return
+ *   Positive on success, a negative value otherwise.
+ */
+int
+flow_dv_generate_ipv6_routing_push_mhdr2(struct rte_eth_dev *dev,
+					 const struct rte_flow_attr *attr,
+					 struct mlx5_modification_cmd *cmd,
+					 uint32_t cmd_num, uint8_t *buf)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_flow_action_modify_data data;
+	struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = {
+						{0, 0, MLX5_MODI_OUT_NONE} };
+	struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = {
+						{0, 0, MLX5_MODI_OUT_NONE} };
+	uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 };
+	struct rte_flow_item item = {
+		.spec = NULL,
+		.mask = NULL
+	};
+	union {
+		struct mlx5_flow_dv_modify_hdr_resource resource;
+		uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) +
+			     sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD];
+	} dummy;
+	struct mlx5_flow_dv_modify_hdr_resource *resource;
+	struct rte_flow_error error;
+	uint8_t next_hdr = *buf;
+
+#define IPV6_ROUTING_PUSH_MHDR_NUM2 5
+	if (cmd_num < IPV6_ROUTING_PUSH_MHDR_NUM2) {
+		DRV_LOG(ERR, "Not enough modify header buffer");
+		return -1;
+	}
+	memset(&data, 0, sizeof(data));
+	memset(&dummy, 0, sizeof(dummy));
+	resource = &dummy.resource;
+	item.mask = &mask;
+	item.spec = buf + sizeof(struct rte_ipv6_routing_ext) +
+		    (*(buf + 3) - 1) * 16; /* seg_left-1 IPv6 address */
+	data.field = RTE_FLOW_FIELD_IPV6_DST;
+	mlx5_flow_field_id_to_modify_info(&data, field, mask, 128, dev, attr, &error);
+	if (flow_dv_convert_modify_action(&item, field, dcopy, resource,
+					  MLX5_MODIFICATION_TYPE_SET, &error)) {
+		DRV_LOG(ERR, "Generate load srv6 next hop modify header failed");
+		return -1;
+	}
+	MLX5_ASSERT(resource->actions_num == 4);
+	memset(&field, 0, sizeof(field));
+	memset(&mask, 0, sizeof(mask));
+	data.field = RTE_FLOW_FIELD_FLEX_ITEM;
+	data.flex_handle = (struct rte_flow_item_flex_handle *)
+			   (uintptr_t)&priv->sh->srh_flex_parser.flex;
+	data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT;
+	item.spec = (void *)(uintptr_t)&next_hdr;
+	mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, attr, &error);
+	if (flow_dv_convert_modify_action(&item, field, dcopy, resource,
+					  MLX5_MODIFICATION_TYPE_SET, &error)) {
+		DRV_LOG(ERR, "Generate srv6 next header restore modify header failed");
+		return -1;
+	}
+	MLX5_ASSERT(resource->actions_num == IPV6_ROUTING_PUSH_MHDR_NUM2);
+#undef IPV6_ROUTING_PUSH_MHDR_NUM2
+	memcpy(cmd, resource->actions,
+	       resource->actions_num * sizeof(struct mlx5_modification_cmd));
+	return resource->actions_num;
+}
+
+/**
+ * Generate IPv6 routing pop modification_cmd.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in,out] mh_data
+ *   Pointer to modify header data buffer.
+ * @param[in,out] anchor_id
+ *   Anchor ID for REMOVE command.
+ *
+ * @return
+ *   Positive on success, a negative value otherwise.
+ */
+int
+flow_dv_ipv6_routing_pop_mhdr_cmd(struct rte_eth_dev *dev, uint8_t *mh_data,
+				  uint8_t *anchor_id)
+{
+	struct rte_flow_action_modify_data data;
+	struct field_modify_info field[MLX5_ACT_MAX_MOD_FIELDS] = {
+						{0, 0, MLX5_MODI_OUT_NONE} };
+	struct field_modify_info dcopy[MLX5_ACT_MAX_MOD_FIELDS] = {
+						{0, 0, MLX5_MODI_OUT_NONE} };
+	uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS] = { 0 };
+	struct rte_flow_item item = {
+		.spec = NULL,
+		.mask = NULL
+	};
+	union {
+		struct mlx5_flow_dv_modify_hdr_resource resource;
+		uint8_t data[sizeof(struct mlx5_flow_dv_modify_hdr_resource) +
+			     sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD];
+	} dummy;
+	struct mlx5_flow_dv_modify_hdr_resource *resource;
+	struct rte_flow_error error;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (!priv || !priv->sh->cdev->config.hca_attr.flex.parse_graph_anchor) {
+		DRV_LOG(ERR, "Doesn't support srv6 as reformat anchor");
+		return -1;
+	}
+	/* Restore IPv6 protocol from flex parser. */
+	memset(&data, 0, sizeof(data));
+	memset(&dummy, 0, sizeof(dummy));
+	data.field = RTE_FLOW_FIELD_IPV6_PROTO;
+	mlx5_flow_field_id_to_modify_info(&data, dcopy, NULL, 8, dev, NULL, &error);
+	/* Then construct the source field (field) with mask. */
+	data.field = RTE_FLOW_FIELD_FLEX_ITEM;
+	data.flex_handle = (struct rte_flow_item_flex_handle *)
+			   (uintptr_t)&priv->sh->srh_flex_parser.flex;
+	data.offset = offsetof(struct rte_ipv6_routing_ext, next_hdr) * CHAR_BIT;
+	mlx5_flow_field_id_to_modify_info(&data, field, mask, 8, dev, NULL, &error);
+	item.mask = &mask;
+	resource = &dummy.resource;
+	if (flow_dv_convert_modify_action(&item, field, dcopy, resource,
+					  MLX5_MODIFICATION_TYPE_COPY,
+					  &error)) {
+		DRV_LOG(ERR, "Generate copy IPv6 protocol from srv6 next header failed");
+		return -1;
+	}
+	MLX5_ASSERT(resource->actions_num == 1);
+	memcpy(mh_data, resource->actions, sizeof(struct mlx5_modification_cmd));
+	*anchor_id = priv->sh->srh_flex_parser.flex.devx_fp->anchor_id;
+	return 1;
+}
+
 /**
  * Validate MARK item.
  *