From patchwork Sun Oct 29 12:53:48 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 133571
X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
Subject: [PATCH v3] net/mlx5: add indirect encap decap support
Date: Sun, 29 Oct 2023 14:53:48 +0200
Message-ID: <20231029125348.274163-1-rongweil@nvidia.com>
In-Reply-To: <20231029100956.218978-1-rongweil@nvidia.com>
References: <20231029100956.218978-1-rongweil@nvidia.com>
Support raw_encap/raw_decap combinations in the indirect action list.
They translate to four types of underlying tunnel operations:

1. Layer 2 encapsulation, e.g. VXLAN.
2. Layer 2 decapsulation, e.g. VXLAN.
3. Layer 3 encapsulation, e.g. GRE.
4. Layer 3 decapsulation, e.g. GRE.

Each indirect action list has a unique handle ID and represents one of
these tunnel operations. The operation is shared globally with a fixed
pattern, so no configuration is associated with a handle ID: the conf
pointer must always be NULL, both in the action template and in the
flow rules.

If the handle ID mask in the action template is NULL, each flow rule
may carry its own indirect handle; otherwise the ID in the action
template is used for all rules. The handle ID used in a flow rule must
be of the same type as the one in the action template.

Testpmd CLI example:

flow indirect_action 0 create action_id 10 transfer list actions
 raw_decap index 1 / raw_encap index 2 / end
flow pattern_template 0 create transfer pattern_template_id 1
 template eth / ipv4 / udp / end
flow actions_template 0 create transfer actions_template_id 1
 template indirect_list handle 10 / jump / end
 mask indirect_list / jump / end
flow template_table 0 create table_id 1 group 1 priority 0 transfer
 rules_number 64 pattern_template 1 actions_template 1
flow queue 0 create 0 template_table 1 pattern_template 0
 actions_template 0 postpone no pattern eth / ipv4 / udp / end
 actions indirect_list handle 11 / jump group 10 / end
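For reference, the first testpmd command above corresponds to roughly the
following calls through the generic rte_flow API (a minimal sketch, not
part of this patch; the port ID, the raw header buffers, and the transfer
domain are illustrative placeholders chosen by the application):

#include <rte_flow.h>

/* Sketch: create the shared decap/encap handle through the generic API.
 * The driver derives the tunnel operation type from the decap/encap
 * buffer sizes, as implemented in mlx5_reformat_action_create() below.
 */
static struct rte_flow_action_list_handle *
create_reformat_handle(uint16_t port_id,
		       uint8_t *decap_data, size_t decap_size,
		       uint8_t *encap_data, size_t encap_size,
		       struct rte_flow_error *error)
{
	const struct rte_flow_indir_action_conf conf = {
		.transfer = 1, /* exactly one domain must be set */
	};
	const struct rte_flow_action_raw_decap decap = {
		.data = decap_data,
		.size = decap_size,
	};
	const struct rte_flow_action_raw_encap encap = {
		.data = encap_data,
		.size = encap_size,
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_DECAP, .conf = &decap },
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* The returned handle stands for one fixed, globally shared
	 * tunnel operation.
	 */
	return rte_flow_action_list_handle_create(port_id, &conf,
						  actions, error);
}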
Signed-off-by: Rongwei Liu
Acked-by: Ori Kam
Acked-by: Suanming Mou

v3: Protect with macro to fix warning.
v2: Code rebase.
---
 drivers/net/mlx5/mlx5_flow.c    |   5 +
 drivers/net/mlx5/mlx5_flow.h    |  16 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 323 ++++++++++++++++++++++++++++++++
 3 files changed, 344 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a500afd4f7..4a28d13422 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -66,6 +66,7 @@ void
 mlx5_indirect_list_handles_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_flow_error error;
 
 	while (!LIST_EMPTY(&priv->indirect_list_head)) {
 		struct mlx5_indirect_list *e =
@@ -80,6 +81,10 @@ mlx5_indirect_list_handles_release(struct rte_eth_dev *dev)
 		case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
 			mlx5_destroy_legacy_indirect(dev, e);
 			break;
+		case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+			mlx5_reformat_action_destroy(dev,
+				(struct rte_flow_action_list_handle *)e, &error);
+			break;
 #endif
 		default:
 			DRV_LOG(ERR, "invalid indirect list type");
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 3ea2548d2b..2b94a4355c 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -101,6 +101,7 @@ enum mlx5_indirect_list_type {
 	MLX5_INDIRECT_ACTION_LIST_TYPE_ERR = 0,
 	MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY = 1,
 	MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR = 2,
+	MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT = 3,
 };
 
 /**
@@ -1366,6 +1367,8 @@ struct mlx5_hw_jump_action {
 
 /* Encap decap action struct. */
 struct mlx5_hw_encap_decap_action {
+	struct mlx5_indirect_list indirect;
+	enum mlx5dr_action_type action_type;
 	struct mlx5dr_action *action; /* Action object. */
 	/* Is header_reformat action shared across flows in table. */
 	bool shared;
@@ -2426,6 +2429,16 @@ const struct rte_flow_action *mlx5_flow_find_action
 int mlx5_validate_action_rss(struct rte_eth_dev *dev,
 			     const struct rte_flow_action *action,
 			     struct rte_flow_error *error);
+
+struct mlx5_hw_encap_decap_action*
+mlx5_reformat_action_create(struct rte_eth_dev *dev,
+			    const struct rte_flow_indir_action_conf *conf,
+			    const struct rte_flow_action *encap_action,
+			    const struct rte_flow_action *decap_action,
+			    struct rte_flow_error *error);
+int mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
+				 struct rte_flow_action_list_handle *handle,
+				 struct rte_flow_error *error);
 int mlx5_flow_validate_action_count(struct rte_eth_dev *dev,
 				    const struct rte_flow_attr *attr,
 				    struct rte_flow_error *error);
@@ -2859,5 +2872,8 @@ mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror);
 void
 mlx5_destroy_legacy_indirect(struct rte_eth_dev *dev,
 			     struct mlx5_indirect_list *ptr);
+void
+mlx5_hw_decap_encap_destroy(struct rte_eth_dev *dev,
+			    struct mlx5_indirect_list *reformat);
 #endif
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 88fe8d9a68..9f356f85c9 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1468,6 +1468,49 @@ hws_table_tmpl_translate_indirect_mirror(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int
+flow_hw_reformat_action(__rte_unused struct rte_eth_dev *dev,
+			__rte_unused const struct mlx5_action_construct_data *data,
+			const struct rte_flow_action *action,
+			struct mlx5dr_rule_action *dr_rule)
+{
+	const struct rte_flow_action_indirect_list *indlst_conf = action->conf;
+
+	dr_rule->action = ((struct mlx5_hw_encap_decap_action *)
+			   (indlst_conf->handle))->action;
+	if (!dr_rule->action)
+		return -EINVAL;
+	return 0;
+}
+
+/**
+ * Template conf must not be masked. If handle is masked, use the one in
+ * template, otherwise update per flow rule.
+ */
+static int
+hws_table_tmpl_translate_indirect_reformat(struct rte_eth_dev *dev,
+					   const struct rte_flow_action *action,
+					   const struct rte_flow_action *mask,
+					   struct mlx5_hw_actions *acts,
+					   uint16_t action_src, uint16_t action_dst)
+{
+	int ret = -1;
+	const struct rte_flow_action_indirect_list *mask_conf = mask->conf;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (mask_conf && mask_conf->handle && !mask_conf->conf)
+		/*
+		 * If handle was masked, assign fixed DR action.
+		 */
+		ret = flow_hw_reformat_action(dev, NULL, action,
+					      &acts->rule_acts[action_dst]);
+	else if (mask_conf && !mask_conf->handle && !mask_conf->conf)
+		ret = flow_hw_act_data_indirect_list_append
+			(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
+			 action_src, action_dst, flow_hw_reformat_action);
+	return ret;
+}
+
 static int
 flow_dr_set_meter(struct mlx5_priv *priv,
 		  struct mlx5dr_rule_action *dr_rule,
@@ -1624,6 +1667,13 @@ table_template_translate_indirect_list(struct rte_eth_dev *dev,
 							  acts, action_src,
 							  action_dst);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		if (list_conf->conf)
+			return -EINVAL;
+		ret = hws_table_tmpl_translate_indirect_reformat(dev, action, mask,
+								 acts, action_src,
+								 action_dst);
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -4890,6 +4940,7 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 		struct mlx5_indlst_legacy *legacy;
 		struct rte_flow_action_list_handle *handle;
 	} indlst_obj = { .handle = indlst_conf->handle };
+	enum mlx5dr_action_type type;
 
 	switch (list_type) {
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
@@ -4903,6 +4954,11 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 		action_template_set_type(at, action_types, action_src, curr_off,
 					 MLX5DR_ACTION_TYP_DEST_ARRAY);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		type = ((struct mlx5_hw_encap_decap_action *)
+			(indlst_conf->handle))->action_type;
+		action_template_set_type(at, action_types, action_src, curr_off, type);
+		break;
 	default:
 		DRV_LOG(ERR, "Unsupported indirect list type");
 		return -EINVAL;
@@ -10089,12 +10145,79 @@ flow_hw_inlist_type_get(const struct rte_flow_action *actions)
 		return actions[1].type == RTE_FLOW_ACTION_TYPE_END ?
 		       MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY :
 		       MLX5_INDIRECT_ACTION_LIST_TYPE_ERR;
+	case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+	case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+		return MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT;
 	default:
 		break;
 	}
 	return MLX5_INDIRECT_ACTION_LIST_TYPE_ERR;
 }
 
+static struct rte_flow_action_list_handle*
+mlx5_hw_decap_encap_handle_create(struct rte_eth_dev *dev,
+				  const struct mlx5_flow_template_table_cfg *table_cfg,
+				  const struct rte_flow_action *actions,
+				  struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_attr *flow_attr = &table_cfg->attr.flow_attr;
+	const struct rte_flow_action *encap = NULL;
+	const struct rte_flow_action *decap = NULL;
+	struct rte_flow_indir_action_conf indirect_conf = {
+		.ingress = flow_attr->ingress,
+		.egress = flow_attr->egress,
+		.transfer = flow_attr->transfer,
+	};
+	struct mlx5_hw_encap_decap_action *handle;
+	uint64_t action_flags = 0;
+
+	/*
+	 * Allow
+	 * 1. raw_decap / raw_encap / end
+	 * 2. raw_encap / end
+	 * 3. raw_decap / end
+	 */
+	while (actions->type != RTE_FLOW_ACTION_TYPE_END) {
+		if (actions->type == RTE_FLOW_ACTION_TYPE_RAW_DECAP) {
+			if (action_flags) {
+				rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						   actions, "Invalid indirect action list sequence");
+				return NULL;
+			}
+			action_flags |= MLX5_FLOW_ACTION_DECAP;
+			decap = actions;
+		} else if (actions->type == RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
+			if (action_flags & MLX5_FLOW_ACTION_ENCAP) {
+				rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						   actions, "Invalid indirect action list sequence");
+				return NULL;
+			}
+			action_flags |= MLX5_FLOW_ACTION_ENCAP;
+			encap = actions;
+		} else {
+			rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+					   actions, "Invalid indirect action type in list");
+			return NULL;
+		}
+		actions++;
+	}
+	if (!decap && !encap) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+				   actions, "Invalid indirect action combinations");
+		return NULL;
+	}
+	handle = mlx5_reformat_action_create(dev, &indirect_conf, encap, decap, error);
+	if (!handle) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+				   actions, "Failed to create HWS decap_encap action");
+		return NULL;
+	}
+	handle->indirect.type = MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT;
+	LIST_INSERT_HEAD(&priv->indirect_list_head, &handle->indirect, entry);
+	return (struct rte_flow_action_list_handle *)handle;
+}
+
 static struct rte_flow_action_list_handle *
 flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 					const struct rte_flow_op_attr *attr,
@@ -10146,6 +10269,10 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 		handle = mlx5_hw_mirror_handle_create(dev, &table_cfg,
 						      actions, error);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		handle = mlx5_hw_decap_encap_handle_create(dev, &table_cfg,
+							   actions, error);
+		break;
 	default:
 		handle = NULL;
 		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
@@ -10205,6 +10332,11 @@ flow_hw_async_action_list_handle_destroy
 	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
 		mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)handle);
 		break;
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+		LIST_REMOVE(&((struct mlx5_hw_encap_decap_action *)handle)->indirect,
+			    entry);
+		mlx5_reformat_action_destroy(dev, handle, error);
+		break;
 	default:
 		ret = rte_flow_error_set(error, EINVAL,
 					 RTE_FLOW_ERROR_TYPE_ACTION,
@@ -11429,4 +11561,195 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static __rte_always_inline uint32_t
+mlx5_reformat_domain_to_tbl_type(const struct rte_flow_indir_action_conf *domain)
+{
+	uint32_t tbl_type;
+
+	if (domain->transfer)
+		tbl_type = MLX5DR_ACTION_FLAG_HWS_FDB;
+	else if (domain->egress)
+		tbl_type = MLX5DR_ACTION_FLAG_HWS_TX;
+	else if (domain->ingress)
+		tbl_type = MLX5DR_ACTION_FLAG_HWS_RX;
+	else
+		tbl_type = UINT32_MAX;
+	return tbl_type;
+}
+
+static struct mlx5_hw_encap_decap_action *
+__mlx5_reformat_create(struct rte_eth_dev *dev,
+		       const struct rte_flow_action_raw_encap *encap_conf,
+		       const struct rte_flow_indir_action_conf *domain,
+		       enum mlx5dr_action_type type)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_encap_decap_action *handle;
+	struct mlx5dr_action_reformat_header hdr;
+	uint32_t flags;
+
+	flags = mlx5_reformat_domain_to_tbl_type(domain);
+	flags |= (uint32_t)MLX5DR_ACTION_FLAG_SHARED;
+	if (flags == UINT32_MAX) {
+		DRV_LOG(ERR, "Reformat: invalid indirect action configuration");
+		return NULL;
+	}
+	/* Allocate new list entry. */
+	handle = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*handle), 0, SOCKET_ID_ANY);
+	if (!handle) {
+		DRV_LOG(ERR, "Reformat: failed to allocate reformat entry");
+		return NULL;
+	}
+	handle->action_type = type;
+	hdr.sz = encap_conf ? encap_conf->size : 0;
+	hdr.data = encap_conf ? encap_conf->data : NULL;
+	handle->action = mlx5dr_action_create_reformat(priv->dr_ctx,
+						       type, 1, &hdr, 0, flags);
+	if (!handle->action) {
+		DRV_LOG(ERR, "Reformat: failed to create reformat action");
+		mlx5_free(handle);
+		return NULL;
+	}
+	return handle;
+}
+
+/**
+ * Create mlx5 reformat action.
+ *
+ * @param[in] dev
+ *   Pointer to rte_eth_dev structure.
+ * @param[in] conf
+ *   Pointer to the indirect action parameters.
+ * @param[in] encap_action
+ *   Pointer to the raw_encap action configuration.
+ * @param[in] decap_action
+ *   Pointer to the raw_decap action configuration.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   A valid shared action handle in case of success, NULL otherwise and
+ *   rte_errno is set.
+ */
+struct mlx5_hw_encap_decap_action*
+mlx5_reformat_action_create(struct rte_eth_dev *dev,
+			    const struct rte_flow_indir_action_conf *conf,
+			    const struct rte_flow_action *encap_action,
+			    const struct rte_flow_action *decap_action,
+			    struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_encap_decap_action *handle;
+	const struct rte_flow_action_raw_encap *encap = NULL;
+	const struct rte_flow_action_raw_decap *decap = NULL;
+	enum mlx5dr_action_type type = MLX5DR_ACTION_TYP_LAST;
+
+	MLX5_ASSERT(!encap_action || encap_action->type == RTE_FLOW_ACTION_TYPE_RAW_ENCAP);
+	MLX5_ASSERT(!decap_action || decap_action->type == RTE_FLOW_ACTION_TYPE_RAW_DECAP);
+	if (priv->sh->config.dv_flow_en != 2) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: hardware does not support");
+		return NULL;
+	}
+	if (!conf || (conf->transfer + conf->egress + conf->ingress != 1)) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: domain should be specified");
+		return NULL;
+	}
+	if ((encap_action && !encap_action->conf) || (decap_action && !decap_action->conf)) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: missed action configuration");
+		return NULL;
+	}
+	if (encap_action && !decap_action) {
+		encap = (const struct rte_flow_action_raw_encap *)encap_action->conf;
+		if (!encap->size || encap->size > MLX5_ENCAP_MAX_LEN ||
+		    encap->size < MLX5_ENCAPSULATION_DECISION_SIZE) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+					   "Reformat: Invalid encap length");
+			return NULL;
+		}
+		type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
+	} else if (decap_action && !encap_action) {
+		decap = (const struct rte_flow_action_raw_decap *)decap_action->conf;
+		if (!decap->size || decap->size < MLX5_ENCAPSULATION_DECISION_SIZE) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+					   "Reformat: Invalid decap length");
+			return NULL;
+		}
+		type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
+	} else if (encap_action && decap_action) {
+		decap = (const struct rte_flow_action_raw_decap *)decap_action->conf;
+		encap = (const struct rte_flow_action_raw_encap *)encap_action->conf;
+		if (decap->size < MLX5_ENCAPSULATION_DECISION_SIZE &&
+		    encap->size >= MLX5_ENCAPSULATION_DECISION_SIZE &&
+		    encap->size <= MLX5_ENCAP_MAX_LEN) {
+			type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3;
+		} else if (decap->size >= MLX5_ENCAPSULATION_DECISION_SIZE &&
+			   encap->size < MLX5_ENCAPSULATION_DECISION_SIZE) {
+			type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2;
+		} else {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+					   "Reformat: Invalid decap & encap length");
+			return NULL;
+		}
+	} else if (!encap_action && !decap_action) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: Invalid decap & encap configurations");
+		return NULL;
+	}
+	if (!priv->dr_ctx) {
+		rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+				   encap_action, "Reformat: HWS not supported");
+		return NULL;
+	}
+	handle = __mlx5_reformat_create(dev, encap, conf, type);
+	if (!handle) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, encap_action,
+				   "Reformat: failed to create indirect action");
+		return NULL;
+	}
+	return handle;
+}
+
+/**
+ * Destroy the indirect reformat action.
+ * Release action related resources on the NIC and the memory.
+ * Lock free, (mutex should be acquired by caller).
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] handle
+ *   The indirect action list handle to be removed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Initialized in case of
+ *   error only.
+ *
+ * @return
+ *   0 on success, otherwise negative errno value.
+ */
+int
+mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
+			     struct rte_flow_action_list_handle *handle,
+			     struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_encap_decap_action *action;
+
+	action = (struct mlx5_hw_encap_decap_action *)handle;
+	if (!priv->dr_ctx || !action)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, handle,
+					  "Reformat: invalid action handle");
+	mlx5dr_action_destroy(action->action);
+	mlx5_free(handle);
+	return 0;
+}
 #endif
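
For completeness, a sketch (not part of this patch) of how an application
would then reference such a handle in a rule's action list and eventually
release it; `handle` and `port_id` are assumed to come from the creation
sketch above, and the jump group number mirrors the testpmd example:

#include <rte_flow.h>

static int
use_and_release_handle(uint16_t port_id,
		       struct rte_flow_action_list_handle *handle,
		       struct rte_flow_error *error)
{
	/* The conf array stays NULL: the reformat operation carries no
	 * per-rule configuration, as the commit message requires.
	 */
	const struct rte_flow_action_indirect_list indlst = {
		.handle = handle,
		.conf = NULL,
	};
	const struct rte_flow_action_jump jump = { .group = 10 };
	const struct rte_flow_action rule_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_INDIRECT_LIST, .conf = &indlst },
		{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* rule_actions would be passed when enqueueing flow rules, e.g.
	 * via rte_flow_async_create(); the template setup is omitted here.
	 */
	(void)rule_actions;
	/* Once no rule references it anymore, release the shared handle. */
	return rte_flow_action_list_handle_destroy(port_id, handle, error);
}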