From patchwork Sun Jul 2 04:57:56 2023
X-Patchwork-Submitter: Itamar Gozlan
X-Patchwork-Id: 129197
X-Patchwork-Delegate: rasland@nvidia.com
From: Itamar Gozlan
CC: Rongwei Liu
Subject: [v2 3/5] net/mlx5: add indirect encap decap support
Date: Sun, 2 Jul 2023 07:57:56 +0300
Message-ID: <20230702045758.23244-3-igozlan@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20230702045758.23244-1-igozlan@nvidia.com>
References: <20230629072125.20369-5-igozlan@nvidia.com> <20230702045758.23244-1-igozlan@nvidia.com>
List-Id: DPDK patches and discussions
From: Rongwei Liu

Support raw_encap/decap combinations in the indirect action list. They
translate to four types of underlying tunnel operations:

1. Layer 2 encapsulation, e.g. VXLAN.
2. Layer 2 decapsulation, e.g. VXLAN.
3. Layer 3 encapsulation, e.g. GRE.
4. Layer 3 decapsulation, e.g. GRE.

Each indirect action list has a unique handle ID and represents one of
these tunnel operations. The operation is shared globally with a fixed
pattern: no configuration is associated with a handle ID, so the conf
pointer must always be NULL, both in the action template and in flow
rules.

If the handle ID mask in the action template is NULL, each flow rule
may supply its own indirect handle; otherwise the ID from the action
template is used for all rules. A handle ID used in a flow rule must
be of the same type as the one in the action template.

Testpmd CLI example:

flow indirect_action 0 create action_id 10 transfer list actions raw_decap index 1 / raw_encap index 2 / end
flow pattern_template 0 create transfer pattern_template_id 1 template eth / ipv4 / udp / end
flow actions_template 0 create transfer actions_template_id 1 template indirect_list handle 10 / jump / end mask indirect_list / jump / end
flow template_table 0 create table_id 1 group 1 priority 0 transfer rules_number 64 pattern_template 1 actions_template 1
flow queue 0 create 0 template_table 1 pattern_template 0 actions_template 0 postpone no pattern eth / ipv4 / udp / end actions indirect_list handle 11 / jump group 10 / end

Signed-off-by: Rongwei Liu
---
 drivers/net/mlx5/mlx5_flow.c    |   5 +
 drivers/net/mlx5/mlx5_flow.h    |  16 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 323 ++++++++++++++++++++++++++++++++
 3 files changed, 344 insertions(+)
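Note: in application code, the testpmd commands above correspond roughly to
the generic indirect action list API from this series. A minimal sketch,
assuming rte_flow_action_list_handle_create() with RAW_DECAP/RAW_ENCAP in the
transfer domain; the buffer arguments are illustrative placeholders and error
handling is trimmed:

#include <rte_flow.h>

/* Sketch: create one shared raw_decap + raw_encap indirect list handle.
 * The buffer lengths decide which tunnel operation the driver picks
 * (see the type-selection logic in the patch below). */
static struct rte_flow_action_list_handle *
reformat_handle_create(uint16_t port_id,
		       uint8_t *decap_buf, size_t decap_len,
		       uint8_t *encap_buf, size_t encap_len,
		       struct rte_flow_error *err)
{
	const struct rte_flow_indir_action_conf conf = { .transfer = 1 };
	const struct rte_flow_action_raw_decap decap = {
		.data = decap_buf, .size = decap_len,
	};
	const struct rte_flow_action_raw_encap encap = {
		.data = encap_buf, .size = encap_len,
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_DECAP, .conf = &decap },
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_action_list_handle_create(port_id, &conf, actions, err);
}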
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index fb7b82fa26..45f2210ae7 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -54,6 +54,7 @@ void
 mlx5_indirect_list_handles_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_flow_error error;
 
 	while (!LIST_EMPTY(&priv->indirect_list_head)) {
 		struct mlx5_indirect_list *e =
@@ -68,6 +69,10 @@ mlx5_indirect_list_handles_release(struct rte_eth_dev *dev)
 		case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
 			mlx5_destroy_legacy_indirect(dev, e);
 			break;
+		case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+			mlx5_reformat_action_destroy(dev,
+				(struct rte_flow_action_list_handle *)e, &error);
+			break;
 #endif
 		default:
 			DRV_LOG(ERR, "invalid indirect list type");
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 46bfd4d8a7..e273bd958d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -116,6 +116,7 @@ enum mlx5_indirect_list_type {
 	MLX5_INDIRECT_ACTION_LIST_TYPE_ERR = 0,
 	MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY = 1,
 	MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR = 2,
+	MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT = 3,
 };
 
 /**
@@ -1433,6 +1434,8 @@ struct mlx5_hw_jump_action {
 
 /* Encap decap action struct. */
 struct mlx5_hw_encap_decap_action {
+	struct mlx5_indirect_list indirect;
+	enum mlx5dr_action_type action_type;
 	struct mlx5dr_action *action; /* Action object. */
 	/* Is header_reformat action shared across flows in table. */
 	bool shared;
@@ -2596,6 +2599,16 @@ flow_hw_validate_action_ipsec(struct rte_eth_dev *dev,
 			      uint64_t action_flags,
 			      struct rte_flow_error *error);
 
+struct mlx5_hw_encap_decap_action*
+mlx5_reformat_action_create(struct rte_eth_dev *dev,
+			    const struct rte_flow_indir_action_conf *conf,
+			    const struct rte_flow_action *encap_action,
+			    const struct rte_flow_action *decap_action,
+			    struct rte_flow_error *error);
+int mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
+				 struct rte_flow_action_list_handle *handle,
+				 struct rte_flow_error *error);
+
 int
 mlx5_flow_validate_action_count(struct rte_eth_dev *dev,
 				const struct rte_flow_attr *attr,
 				struct rte_flow_error *error);
@@ -3041,5 +3054,8 @@ mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror);
 void
 mlx5_destroy_legacy_indirect(struct rte_eth_dev *dev,
 			     struct mlx5_indirect_list *ptr);
+void
+mlx5_hw_decap_encap_destroy(struct rte_eth_dev *dev,
+			    struct mlx5_indirect_list *reformat);
 #endif
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
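Note: struct mlx5_indirect_list is embedded as the first member of struct
mlx5_hw_encap_decap_action so the generic release loop in mlx5_flow.c can
cast a list entry straight back to the action-list handle. A standalone
sketch of this first-member embedding idiom, with simplified stand-in types
rather than the driver's definitions:

#include <assert.h>

/* Simplified stand-ins for the driver types. */
struct indirect_list { int type; };      /* generic list entry            */
struct encap_decap_action {
	struct indirect_list indirect;   /* must stay the first member    */
	int action_type;
	void *action;
};

int main(void)
{
	struct encap_decap_action a = { .indirect = { .type = 3 } };
	struct indirect_list *entry = &a.indirect;

	/* First-member embedding: the generic entry and the containing
	 * object share an address, so the release path may cast the list
	 * entry back to the specific handle type. */
	assert((void *)entry == (void *)&a);
	assert(((struct encap_decap_action *)entry)->action_type ==
	       a.action_type);
	return 0;
}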
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7b4661ad4f..5e5ebbe620 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1472,6 +1472,49 @@ hws_table_tmpl_translate_indirect_mirror(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int
+flow_hw_reformat_action(__rte_unused struct rte_eth_dev *dev,
+			__rte_unused const struct mlx5_action_construct_data *data,
+			const struct rte_flow_action *action,
+			struct mlx5dr_rule_action *dr_rule)
+{
+	const struct rte_flow_action_indirect_list *indlst_conf = action->conf;
+
+	dr_rule->action = ((struct mlx5_hw_encap_decap_action *)
+			   (indlst_conf->handle))->action;
+	if (!dr_rule->action)
+		return -EINVAL;
+	return 0;
+}
+
+/**
+ * Template conf must not be masked. If handle is masked, use the one in
+ * template, otherwise update per flow rule.
+ */
+static int
+hws_table_tmpl_translate_indirect_reformat(struct rte_eth_dev *dev,
+					   const struct rte_flow_action *action,
+					   const struct rte_flow_action *mask,
+					   struct mlx5_hw_actions *acts,
+					   uint16_t action_src, uint16_t action_dst)
+{
+	int ret = -1;
+	const struct rte_flow_action_indirect_list *mask_conf = mask->conf;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (mask_conf && mask_conf->handle && !mask_conf->conf)
+		/* If handle was masked, assign fixed DR action. */
+		ret = flow_hw_reformat_action(dev, NULL, action,
+					      &acts->rule_acts[action_dst]);
+	else if (mask_conf && !mask_conf->handle && !mask_conf->conf)
+		ret = flow_hw_act_data_indirect_list_append
+			(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
+			 action_src, action_dst, flow_hw_reformat_action);
+	return ret;
+}
+
 static int
 flow_dr_set_meter(struct mlx5_priv *priv,
 		  struct mlx5dr_rule_action *dr_rule,
@@ -1628,6 +1671,13 @@ table_template_translate_indirect_list(struct rte_eth_dev *dev,
 						       acts, action_src,
 						       action_dst);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		if (list_conf->conf)
+			return -EINVAL;
+		ret = hws_table_tmpl_translate_indirect_reformat(dev, action, mask,
+								 acts, action_src,
+								 action_dst);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -4966,6 +5016,7 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 		struct mlx5_indlst_legacy *legacy;
 		struct rte_flow_action_list_handle *handle;
 	} indlst_obj = { .handle = indlst_conf->handle };
+	enum mlx5dr_action_type type;
 
 	switch (list_type) {
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
@@ -4979,6 +5030,11 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 		action_template_set_type(at, action_types, action_src, curr_off,
 					 MLX5DR_ACTION_TYP_DEST_ARRAY);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		type = ((struct mlx5_hw_encap_decap_action *)
+			(indlst_conf->handle))->action_type;
+		action_template_set_type(at, action_types, action_src, curr_off, type);
+		break;
 	default:
 		DRV_LOG(ERR, "Unsupported indirect list type");
 		return -EINVAL;
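Note: the masked-handle rule from the commit message is decided at
actions-template level by the hunks above. A hedged sketch of the two
template variants, assuming a handle previously created with
rte_flow_action_list_handle_create() (function and variable names are
illustrative):

#include <stdbool.h>
#include <rte_flow.h>

/* Sketch: create an actions template referencing an indirect reformat
 * list. If 'fixed' is true the handle is masked and all rules share it;
 * otherwise the mask carries a NULL handle and each flow rule supplies
 * its own handle of the same type. */
static struct rte_flow_actions_template *
reformat_actions_template(uint16_t port_id,
			  struct rte_flow_action_list_handle *handle,
			  bool fixed, struct rte_flow_error *err)
{
	const struct rte_flow_actions_template_attr attr = { .transfer = 1 };
	const struct rte_flow_action_indirect_list conf = { .handle = handle };
	const struct rte_flow_action_indirect_list unmasked = { .handle = NULL };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST, .conf = &conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_action masks[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
		  .conf = fixed ? &conf : &unmasked },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_actions_template_create(port_id, &attr, actions,
						masks, err);
}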
@@ -10055,12 +10111,79 @@ flow_hw_inlist_type_get(const struct rte_flow_action *actions)
 		return actions[1].type == RTE_FLOW_ACTION_TYPE_END ?
 		       MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY :
 		       MLX5_INDIRECT_ACTION_LIST_TYPE_ERR;
+	case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+	case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+		return MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT;
 	default:
 		break;
 	}
 	return MLX5_INDIRECT_ACTION_LIST_TYPE_ERR;
 }
 
+static struct rte_flow_action_list_handle *
+mlx5_hw_decap_encap_handle_create(struct rte_eth_dev *dev,
+				  const struct mlx5_flow_template_table_cfg *table_cfg,
+				  const struct rte_flow_action *actions,
+				  struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_attr *flow_attr = &table_cfg->attr.flow_attr;
+	const struct rte_flow_action *encap = NULL;
+	const struct rte_flow_action *decap = NULL;
+	struct rte_flow_indir_action_conf indirect_conf = {
+		.ingress = flow_attr->ingress,
+		.egress = flow_attr->egress,
+		.transfer = flow_attr->transfer,
+	};
+	struct mlx5_hw_encap_decap_action *handle;
+	uint64_t action_flags = 0;
+
+	/*
+	 * Allow
+	 * 1. raw_decap / raw_encap / end
+	 * 2. raw_encap / end
+	 * 3. raw_decap / end
+	 */
+	while (actions->type != RTE_FLOW_ACTION_TYPE_END) {
+		if (actions->type == RTE_FLOW_ACTION_TYPE_RAW_DECAP) {
+			if (action_flags) {
+				rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						   actions, "Invalid indirect action list sequence");
+				return NULL;
+			}
+			action_flags |= MLX5_FLOW_ACTION_DECAP;
+			decap = actions;
+		} else if (actions->type == RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
+			if (action_flags & MLX5_FLOW_ACTION_ENCAP) {
+				rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						   actions, "Invalid indirect action list sequence");
+				return NULL;
+			}
+			action_flags |= MLX5_FLOW_ACTION_ENCAP;
+			encap = actions;
+		} else {
+			rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+					   actions, "Invalid indirect action type in list");
+			return NULL;
+		}
+		actions++;
+	}
+	if (!decap && !encap) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+				   actions, "Invalid indirect action combinations");
+		return NULL;
+	}
+	handle = mlx5_reformat_action_create(dev, &indirect_conf, encap, decap, error);
+	if (!handle) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+				   actions, "Failed to create HWS decap_encap action");
+		return NULL;
+	}
+	handle->indirect.type = MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT;
+	LIST_INSERT_HEAD(&priv->indirect_list_head, &handle->indirect, entry);
+	return (struct rte_flow_action_list_handle *)handle;
+}
+
 static struct rte_flow_action_list_handle *
 flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 					const struct rte_flow_op_attr *attr,
@@ -10112,6 +10235,10 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 		handle = mlx5_hw_mirror_handle_create(dev, &table_cfg,
 						      actions, error);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		handle = mlx5_hw_decap_encap_handle_create(dev, &table_cfg,
+							   actions, error);
+		break;
 	default:
 		handle = NULL;
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
@@ -10171,6 +10298,11 @@ flow_hw_async_action_list_handle_destroy
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
 		mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)handle);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		LIST_REMOVE(&((struct mlx5_hw_encap_decap_action *)handle)->indirect,
+			    entry);
+		mlx5_reformat_action_destroy(dev, handle, error);
+		break;
 	default:
 		ret = rte_flow_error_set(error, EINVAL,
 					 RTE_FLOW_ERROR_TYPE_ACTION,
@@ -11468,4 +11600,195 @@ mlx5_flow_hw_put_dr_action(struct rte_eth_dev *dev,
 	}
 }
 
+static __rte_always_inline uint32_t
+mlx5_reformat_domain_to_tbl_type(const struct rte_flow_indir_action_conf *domain)
+{
+	uint32_t tbl_type;
+
+	if (domain->transfer)
+		tbl_type = MLX5DR_ACTION_FLAG_HWS_FDB;
+	else if (domain->egress)
+		tbl_type = MLX5DR_ACTION_FLAG_HWS_TX;
+	else if (domain->ingress)
+		tbl_type = MLX5DR_ACTION_FLAG_HWS_RX;
+	else
+		tbl_type = UINT32_MAX;
+	return tbl_type;
+}
+
+static struct mlx5_hw_encap_decap_action *
+__mlx5_reformat_create(struct rte_eth_dev *dev,
+		       const struct rte_flow_action_raw_encap *encap_conf,
+		       const struct rte_flow_indir_action_conf *domain,
+		       enum mlx5dr_action_type type)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_encap_decap_action *handle;
+	struct mlx5dr_action_reformat_header hdr;
+	uint32_t flags;
+
+	flags = mlx5_reformat_domain_to_tbl_type(domain);
+	flags |= (uint32_t)MLX5DR_ACTION_FLAG_SHARED;
+	if (flags == UINT32_MAX) {
+		DRV_LOG(ERR, "Reformat: invalid indirect action configuration");
+		return NULL;
+	}
+	/* Allocate new list entry. */
+	handle = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*handle), 0, SOCKET_ID_ANY);
+	if (!handle) {
+		DRV_LOG(ERR, "Reformat: failed to allocate reformat entry");
+		return NULL;
+	}
+	handle->action_type = type;
+	hdr.sz = encap_conf ? encap_conf->size : 0;
+	hdr.data = encap_conf ? encap_conf->data : NULL;
+	handle->action = mlx5dr_action_create_reformat(priv->dr_ctx,
+						       type, 1, &hdr, 0, flags);
+	if (!handle->action) {
+		DRV_LOG(ERR, "Reformat: failed to create reformat action");
+		mlx5_free(handle);
+		return NULL;
+	}
+	return handle;
+}
+
+/**
+ * Create mlx5 reformat action.
+ *
+ * @param[in] dev
+ *   Pointer to rte_eth_dev structure.
+ * @param[in] conf
+ *   Pointer to the indirect action parameters.
+ * @param[in] encap_action
+ *   Pointer to the raw_encap action configuration.
+ * @param[in] decap_action
+ *   Pointer to the raw_decap action configuration.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   A valid shared action handle in case of success, NULL otherwise and
+ *   rte_errno is set.
+ */
+struct mlx5_hw_encap_decap_action*
+mlx5_reformat_action_create(struct rte_eth_dev *dev,
+			    const struct rte_flow_indir_action_conf *conf,
+			    const struct rte_flow_action *encap_action,
+			    const struct rte_flow_action *decap_action,
+			    struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_encap_decap_action *handle;
+	const struct rte_flow_action_raw_encap *encap = NULL;
+	const struct rte_flow_action_raw_decap *decap = NULL;
+	enum mlx5dr_action_type type = MLX5DR_ACTION_TYP_LAST;
+
+	MLX5_ASSERT(!encap_action || encap_action->type == RTE_FLOW_ACTION_TYPE_RAW_ENCAP);
+	MLX5_ASSERT(!decap_action || decap_action->type == RTE_FLOW_ACTION_TYPE_RAW_DECAP);
+	if (priv->sh->config.dv_flow_en != 2) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: hardware does not support");
+		return NULL;
+	}
+	if (!conf || (conf->transfer + conf->egress + conf->ingress != 1)) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: domain should be specified");
+		return NULL;
+	}
+	if ((encap_action && !encap_action->conf) || (decap_action && !decap_action->conf)) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: missed action configuration");
+		return NULL;
+	}
+	if (encap_action && !decap_action) {
+		encap = (const struct rte_flow_action_raw_encap *)encap_action->conf;
+		if (!encap->size || encap->size > MLX5_ENCAP_MAX_LEN ||
+		    encap->size < MLX5_ENCAPSULATION_DECISION_SIZE) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+					   "Reformat: Invalid encap length");
+			return NULL;
+		}
+		type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
+	} else if (decap_action && !encap_action) {
+		decap = (const struct rte_flow_action_raw_decap *)decap_action->conf;
+		if (!decap->size || decap->size < MLX5_ENCAPSULATION_DECISION_SIZE) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+					   "Reformat: Invalid decap length");
+			return NULL;
+		}
+		type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
+	} else if (encap_action && decap_action) {
+		decap = (const struct rte_flow_action_raw_decap *)decap_action->conf;
+		encap = (const struct rte_flow_action_raw_encap *)encap_action->conf;
+		if (decap->size < MLX5_ENCAPSULATION_DECISION_SIZE &&
+		    encap->size >= MLX5_ENCAPSULATION_DECISION_SIZE &&
+		    encap->size <= MLX5_ENCAP_MAX_LEN) {
+			type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3;
+		} else if (decap->size >= MLX5_ENCAPSULATION_DECISION_SIZE &&
+			   encap->size < MLX5_ENCAPSULATION_DECISION_SIZE) {
+			type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2;
+		} else {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+					   "Reformat: Invalid decap & encap length");
+			return NULL;
+		}
+	} else if (!encap_action && !decap_action) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: Invalid decap & encap configurations");
+		return NULL;
+	}
+	if (!priv->dr_ctx) {
+		rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+				   encap_action, "Reformat: HWS not supported");
+		return NULL;
+	}
+	handle = __mlx5_reformat_create(dev, encap, conf, type);
+	if (!handle) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: failed to create indirect action");
+		return NULL;
+	}
+	return handle;
+}
+
+/**
+ * Destroy the indirect reformat action.
+ * Release action related resources on the NIC and the memory.
+ * Lock-free: the required mutex must be held by the caller.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] handle
+ *   The indirect action list handle to be removed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Initialized in case of
+ *   error only.
+ *
+ * @return
+ *   0 on success, otherwise negative errno value.
+ */
+int
+mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
+			     struct rte_flow_action_list_handle *handle,
+			     struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_encap_decap_action *action;
+
+	action = (struct mlx5_hw_encap_decap_action *)handle;
+	if (!priv->dr_ctx || !action)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, handle,
+					  "Reformat: invalid action handle");
+	mlx5dr_action_destroy(action->action);
+	mlx5_free(handle);
+	return 0;
+}
 #endif
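Note: the length checks in mlx5_reformat_action_create() above reduce to a
small decision table around MLX5_ENCAPSULATION_DECISION_SIZE, which separates
a bare L2 header from a full tunnel header stack. A standalone sketch of the
same selection logic, using local stand-in names and illustrative constants
rather than the driver's mlx5dr enums and macros:

#include <stdbool.h>
#include <stddef.h>

/* Local stand-ins; the driver uses enum mlx5dr_action_type and the
 * MLX5_ENCAPSULATION_DECISION_SIZE / MLX5_ENCAP_MAX_LEN macros. */
enum reformat_type {
	REFORMAT_ERR,
	L2_TO_TNL_L2,	/* encap only: add an L2 tunnel, e.g. VXLAN         */
	TNL_L2_TO_L2,	/* decap only: strip an L2 tunnel, e.g. VXLAN       */
	L2_TO_TNL_L3,	/* decap + encap: strip inner L2, add an L3 tunnel  */
	TNL_L3_TO_L2,	/* decap + encap: strip L3 tunnel, push a new L2    */
};

#define DECISION_SIZE 14	/* illustrative threshold only */
#define ENCAP_MAX_LEN 132	/* illustrative limit only     */

static enum reformat_type
select_reformat(bool has_decap, size_t decap_sz,
		bool has_encap, size_t encap_sz)
{
	if (has_encap && !has_decap)
		return (encap_sz >= DECISION_SIZE &&
			encap_sz <= ENCAP_MAX_LEN) ?
		       L2_TO_TNL_L2 : REFORMAT_ERR;
	if (has_decap && !has_encap)
		return decap_sz >= DECISION_SIZE ?
		       TNL_L2_TO_L2 : REFORMAT_ERR;
	if (has_decap && has_encap) {
		/* Small decap buffer (bare L2) plus a tunnel-sized encap
		 * buffer: L2-to-L3 tunnel encapsulation. */
		if (decap_sz < DECISION_SIZE && encap_sz >= DECISION_SIZE &&
		    encap_sz <= ENCAP_MAX_LEN)
			return L2_TO_TNL_L3;
		/* Tunnel-sized decap buffer plus a bare-L2 encap buffer:
		 * L3 tunnel decapsulation with a fresh L2 header pushed. */
		if (decap_sz >= DECISION_SIZE && encap_sz < DECISION_SIZE)
			return TNL_L3_TO_L2;
	}
	return REFORMAT_ERR;
}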