From patchwork Tue Oct 31 09:42:43 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 133644
X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
Subject: [PATCH v2 5/6] net/mlx5: implement IPv6 routing push remove
Date: Tue, 31 Oct 2023 11:42:43 +0200
Message-ID: <20231031094244.381557-6-rongweil@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20231031094244.381557-1-rongweil@nvidia.com>
References: <20230417092540.2617450-5-rongweil@nvidia.com>
 <20231031094244.381557-1-rongweil@nvidia.com>
MIME-Version: 1.0
Reserve a push data buffer for each job; the maximum length is set to
128 bytes for now.

Only the IPPROTO_ROUTING type is supported when translating the
rte_flow action. Remove actions must be shared globally, and only TCP
or UDP is supported as the next layer.

Signed-off-by: Rongwei Liu
Acked-by: Ori Kam
Acked-by: Suanming Mou
---
 doc/guides/nics/features/mlx5.ini      |   2 +
 doc/guides/nics/mlx5.rst               |  11 +-
 doc/guides/rel_notes/release_23_11.rst |   2 +
 drivers/net/mlx5/mlx5.h                |   1 +
 drivers/net/mlx5/mlx5_flow.h           |  21 +-
 drivers/net/mlx5/mlx5_flow_hw.c        | 282 ++++++++++++++++++++++++-
 6 files changed, 309 insertions(+), 10 deletions(-)

diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 0ed9a6aefc..0739fe9d63 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -108,6 +108,8 @@ flag = Y
 inc_tcp_ack = Y
 inc_tcp_seq = Y
 indirect_list = Y
+ipv6_ext_push = Y
+ipv6_ext_remove = Y
 jump = Y
 mark = Y
 meter = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index be5054e68a..955dedf3db 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -148,7 +148,9 @@ Features
 - Matching on GTP extension header with raw encap/decap action.
 - Matching on Geneve TLV option header with raw encap/decap action.
 - Matching on ESP header SPI field.
+- Matching on flex item with specific pattern.
 - Matching on InfiniBand BTH.
+- Modify flex item field.
 - Modify IPv4/IPv6 ECN field.
 - RSS support in sample action.
 - E-Switch mirroring and jump.
@@ -166,7 +168,7 @@ Features
 - Sub-Function.
 - Matching on represented port.
 - Matching on aggregated affinity.
-
+- Push or remove IPv6 routing extension.
 
 Limitations
 -----------
@@ -759,6 +761,13 @@ Limitations
   to the representor of the source virtual port (SF/VF), while if it is disabled,
   the traffic will be routed based on the steering rules in the ingress domain.
 
+- IPv6 routing extension push or remove:
+
+  - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
+  - Supported in non-zero group (No limits on transfer domain if `fdb_def_rule_en` = 1 which is default).
+  - Only supports TCP or UDP as next layer.
+  - IPv6 routing header must be the only present extension.
+  - Not supported on guest port.
 
 Statistics
 ----------
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 322d8b1e0e..78e774cf02 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -150,6 +150,8 @@ New Features
   * Added support for ``RTE_FLOW_ACTION_TYPE_INDIRECT_LIST`` flow action.
   * Added support for ``RTE_FLOW_ITEM_TYPE_PTYPE`` flow item.
   * Added support for ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR`` flow action and mirror.
+  * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH`` flow action.
+  * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE`` flow action.
 
 * **Updated Solarflare net driver.**
 
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f13a56ee9e..277bbbf407 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -373,6 +373,7 @@ struct mlx5_hw_q_job {
 	};
 	void *user_data; /* Job user data. */
 	uint8_t *encap_data; /* Encap data. */
+	uint8_t *push_data; /* IPv6 routing push data. */
 	struct mlx5_modification_cmd *mhdr_cmd;
 	struct rte_flow_item *items;
 	union {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 43608e15d2..c7be1f3553 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -363,6 +363,8 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44)
 #define MLX5_FLOW_ACTION_QUOTA (1ull << 46)
 #define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 47)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49)
 
 #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
 	(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -1269,6 +1271,8 @@ typedef int
 			    const struct rte_flow_action *,
 			    struct mlx5dr_rule_action *);
 
+#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+
 /* rte flow action translate to DR action struct. */
 struct mlx5_action_construct_data {
 	LIST_ENTRY(mlx5_action_construct_data) next;
@@ -1315,6 +1319,10 @@ struct mlx5_action_construct_data {
 		struct {
 			cnt_id_t id;
 		} shared_counter;
+		struct {
+			/* IPv6 extension push data len. */
+			uint16_t len;
+		} ipv6_ext;
 		struct {
 			uint32_t id;
 			uint32_t conf_masked:1;
@@ -1359,6 +1367,7 @@ struct rte_flow_actions_template {
 	uint16_t *src_off; /* RTE action displacement from app. template */
 	uint16_t reformat_off; /* Offset of DR reformat action. */
 	uint16_t mhdr_off; /* Offset of DR modify header action. */
+	uint16_t recom_off; /* Offset of DR IPv6 routing push remove action. */
 	uint32_t refcnt; /* Reference counter. */
 	uint8_t flex_item; /* flex item index. */
 };
@@ -1384,7 +1393,14 @@ struct mlx5_hw_encap_decap_action {
 	uint8_t data[]; /* Action data. */
 };
 
-#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+/* Push remove action struct. */
+struct mlx5_hw_push_remove_action {
+	struct mlx5dr_action *action; /* Action object. */
+	/* Is push_remove action shared across flows in table. */
+	uint8_t shared;
+	size_t data_size; /* Action metadata size. */
+	uint8_t data[]; /* Action data. */
+};
 
 /* Modify field action struct. */
 struct mlx5_hw_modify_header_action {
@@ -1415,6 +1431,9 @@ struct mlx5_hw_actions {
 	/* Encap/Decap action. */
 	struct mlx5_hw_encap_decap_action *encap_decap;
 	uint16_t encap_decap_pos; /* Encap/Decap action position. */
+	/* Push/remove action. */
+	struct mlx5_hw_push_remove_action *push_remove;
+	uint16_t push_remove_pos; /* Push/remove action position. */
 	uint32_t mark:1; /* Indicate the mark action. */
 	cnt_id_t cnt_id; /* Counter id. */
 	uint32_t mtr_id; /* Meter id. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 977751394e..592d436099 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -624,6 +624,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev,
 		mlx5_free(acts->encap_decap);
 		acts->encap_decap = NULL;
 	}
+	if (acts->push_remove) {
+		if (acts->push_remove->action)
+			mlx5dr_action_destroy(acts->push_remove->action);
+		mlx5_free(acts->push_remove);
+		acts->push_remove = NULL;
+	}
 	if (acts->mhdr) {
 		flow_hw_template_destroy_mhdr_action(acts->mhdr);
 		mlx5_free(acts->mhdr);
@@ -761,6 +767,44 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv,
 	return 0;
 }
 
+/**
+ * Append dynamic push action to the dynamic action list.
+ *
+ * @param[in] dev
+ *   Pointer to the port.
+ * @param[in] acts
+ *   Pointer to the template HW steering DR actions.
+ * @param[in] type
+ *   Action type.
+ * @param[in] action_src
+ *   Offset of source rte flow action.
+ * @param[in] action_dst
+ *   Offset of destination DR action.
+ * @param[in] len
+ *   Length of the data to be updated.
+ *
+ * @return
+ *   Data pointer on success, NULL otherwise and rte_errno is set.
+ */
+static __rte_always_inline void *
+__flow_hw_act_data_push_append(struct rte_eth_dev *dev,
+			       struct mlx5_hw_actions *acts,
+			       enum rte_flow_action_type type,
+			       uint16_t action_src,
+			       uint16_t action_dst,
+			       uint16_t len)
+{
+	struct mlx5_action_construct_data *act_data;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+	if (!act_data)
+		return NULL;
+	act_data->ipv6_ext.len = len;
+	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+	return act_data;
+}
+
 static __rte_always_inline int
 __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
 				     struct mlx5_hw_actions *acts,
@@ -1924,6 +1968,82 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
 	return 0;
 }
 
+
+static int
+mlx5_create_ipv6_ext_reformat(struct rte_eth_dev *dev,
+			      const struct mlx5_flow_template_table_cfg *cfg,
+			      struct mlx5_hw_actions *acts,
+			      struct rte_flow_actions_template *at,
+			      uint8_t *push_data, uint8_t *push_data_m,
+			      size_t push_size, uint16_t recom_src,
+			      enum mlx5dr_action_type recom_type)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+	const struct rte_flow_attr *attr = &table_attr->flow_attr;
+	enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
+	struct mlx5_action_construct_data *act_data;
+	struct mlx5dr_action_reformat_header hdr = {0};
+	uint32_t flag, bulk = 0;
+
+	flag = mlx5_hw_act_flag[!!attr->group][type];
+	acts->push_remove = mlx5_malloc(MLX5_MEM_ZERO,
+					sizeof(*acts->push_remove) + push_size,
+					0, SOCKET_ID_ANY);
+	if (!acts->push_remove)
+		return -ENOMEM;
+
+	switch (recom_type) {
+	case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT:
+		if (!push_data || !push_size)
+			goto err1;
+		if (!push_data_m) {
+			bulk = rte_log2_u32(table_attr->nb_flows);
+		} else {
+			flag |= MLX5DR_ACTION_FLAG_SHARED;
+			acts->push_remove->shared = 1;
+		}
+		acts->push_remove->data_size = push_size;
+		memcpy(acts->push_remove->data, push_data, push_size);
+		hdr.data = push_data;
+		hdr.sz = push_size;
+		break;
+	case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT:
+		flag |= MLX5DR_ACTION_FLAG_SHARED;
+		acts->push_remove->shared = 1;
+		break;
+	default:
+		break;
+	}
+
+	acts->push_remove->action =
+		mlx5dr_action_create_reformat_ipv6_ext(priv->dr_ctx,
+						       recom_type, &hdr,
+						       bulk, flag);
+	if (!acts->push_remove->action)
+		goto err1;
+	acts->rule_acts[at->recom_off].action = acts->push_remove->action;
+	acts->rule_acts[at->recom_off].ipv6_ext.header = acts->push_remove->data;
+	acts->rule_acts[at->recom_off].ipv6_ext.offset = 0;
+	acts->push_remove_pos = at->recom_off;
+	if (!acts->push_remove->shared) {
+		act_data = __flow_hw_act_data_push_append(dev, acts,
+				RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH,
+				recom_src, at->recom_off, push_size);
+		if (!act_data)
+			goto err;
+	}
+	return 0;
+err:
+	if (acts->push_remove->action)
+		mlx5dr_action_destroy(acts->push_remove->action);
+err1:
+	if (acts->push_remove) {
+		mlx5_free(acts->push_remove);
+		acts->push_remove = NULL;
+	}
+	return -EINVAL;
+}
+
 /**
  * Translate rte_flow actions to DR action.
 *
@@ -1957,19 +2077,24 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+	struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex;
 	const struct rte_flow_attr *attr = &table_attr->flow_attr;
 	struct rte_flow_action *actions = at->actions;
 	struct rte_flow_action *masks = at->masks;
 	enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST;
+	enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST;
 	const struct rte_flow_action_raw_encap *raw_encap_data;
+	const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data;
 	const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL;
-	uint16_t reformat_src = 0;
+	uint16_t reformat_src = 0, recom_src = 0;
 	uint8_t *encap_data = NULL, *encap_data_m = NULL;
-	size_t data_size = 0;
+	uint8_t *push_data = NULL, *push_data_m = NULL;
+	size_t data_size = 0, push_size = 0;
 	struct mlx5_hw_modify_header_action mhdr = { 0 };
 	bool actions_end = false;
 	uint32_t type;
 	bool reformat_used = false;
+	bool recom_used = false;
 	unsigned int of_vlan_offset;
 	uint16_t jump_pos;
 	uint32_t ct_idx;
@@ -2175,6 +2300,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			reformat_used = true;
 			refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor ||
+			    !priv->sh->srh_flex_parser.flex.mapnum) {
+				DRV_LOG(ERR, "SRv6 anchor is not supported.");
+				goto err;
+			}
+			MLX5_ASSERT(!recom_used && !recom_type);
+			recom_used = true;
+			recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT;
+			ipv6_ext_data =
+				(const struct rte_flow_action_ipv6_ext_push *)masks->conf;
+			if (ipv6_ext_data)
+				push_data_m = ipv6_ext_data->data;
+			ipv6_ext_data =
+				(const struct rte_flow_action_ipv6_ext_push *)actions->conf;
+			if (ipv6_ext_data) {
+				push_data = ipv6_ext_data->data;
+				push_size = ipv6_ext_data->size;
+			}
+			recom_src = src_pos;
+			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+			if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor ||
+			    !priv->sh->srh_flex_parser.flex.mapnum) {
+				DRV_LOG(ERR, "SRv6 anchor is not supported.");
+				goto err;
+			}
+			recom_used = true;
+			recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT;
+			break;
 		case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
 			flow_hw_translate_group(dev, cfg, attr->group,
 						&target_grp, error);
@@ -2322,6 +2477,14 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 		if (ret)
 			goto err;
 	}
+	if (recom_used) {
+		MLX5_ASSERT(at->recom_off != UINT16_MAX);
+		ret = mlx5_create_ipv6_ext_reformat(dev, cfg, acts, at, push_data,
+						    push_data_m, push_size, recom_src,
+						    recom_type);
+		if (ret)
+			goto err;
+	}
 	return 0;
 err:
 	err = rte_errno;
@@ -2719,11 +2882,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	const struct mlx5_hw_actions *hw_acts = &hw_at->acts;
 	const struct rte_flow_action *action;
 	const struct rte_flow_action_raw_encap *raw_encap_data;
+	const struct rte_flow_action_ipv6_ext_push *ipv6_push;
 	const struct rte_flow_item *enc_item = NULL;
 	const struct rte_flow_action_ethdev *port_action = NULL;
 	const struct rte_flow_action_meter *meter = NULL;
 	const struct rte_flow_action_age *age = NULL;
 	uint8_t *buf = job->encap_data;
+	uint8_t *push_buf = job->push_data;
 	struct rte_flow_attr attr = {
 		.ingress = 1,
 	};
@@ -2854,6 +3019,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			MLX5_ASSERT(raw_encap_data->size ==
 				    act_data->encap.len);
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			ipv6_push =
+				(const struct rte_flow_action_ipv6_ext_push *)action->conf;
+			rte_memcpy((void *)push_buf, ipv6_push->data,
+				   act_data->ipv6_ext.len);
+			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
+			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
 				ret = flow_hw_set_vlan_vid_construct(dev, job,
@@ -3010,6 +3182,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				job->flow->res_idx - 1;
 		rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
 	}
+	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
+		rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset =
+			job->flow->res_idx - 1;
+		rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf;
+	}
 	if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id))
 		job->flow->cnt_id = hw_acts->cnt_id;
 	return 0;
@@ -5113,6 +5290,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Validate ipv6_ext_push action.
+ *
+ * @param[in] dev
+ *   Pointer to rte_eth_dev structure.
+ * @param[in] action
+ *   Pointer to the indirect action.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused,
+				      const struct rte_flow_action *action,
+				      struct rte_flow_error *error)
+{
+	const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf;
+
+	if (!raw_push_data || !raw_push_data->size || !raw_push_data->data)
+		return rte_flow_error_set(error, EINVAL,
					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "invalid ipv6_ext_push data");
+	if (raw_push_data->type != IPPROTO_ROUTING ||
+	    raw_push_data->size > MLX5_PUSH_MAX_LEN)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "Unsupported ipv6_ext_push type or length");
+	return 0;
+}
+
 /**
  * Validate raw_encap action.
  *
@@ -5340,6 +5549,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 #endif
 	uint16_t i;
 	int ret;
+	const struct rte_flow_action_ipv6_ext_remove *remove_data;
 
 	/* FDB actions are only valid to proxy port. */
 	if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master))
@@ -5436,6 +5646,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			/* TODO: Validation logic */
 			action_flags |= MLX5_FLOW_ACTION_DECAP;
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error);
+			if (ret < 0)
+				return ret;
+			action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH;
+			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+			remove_data = action->conf;
+			/* Remove action must be shared. */
+			if (remove_data->type != IPPROTO_ROUTING || !mask) {
+				DRV_LOG(ERR, "Only supports shared IPv6 routing remove");
+				return -EINVAL;
+			}
+			action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE;
+			break;
 		case RTE_FLOW_ACTION_TYPE_METER:
 			/* TODO: Validation logic */
 			action_flags |= MLX5_FLOW_ACTION_METER;
@@ -5551,6 +5776,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = {
 	[RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN,
 	[RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN,
 	[RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT,
+	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
+	[RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
 };
 
 static inline void
@@ -5648,6 +5875,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 /**
  * Create DR action template based on a provided sequence of flow actions.
  *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
 * @param[in] at
 *   Pointer to flow actions template to be updated.
 *
@@ -5656,7 +5885,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
 *   NULL otherwise.
 */
static struct mlx5dr_action_template *
-flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
+flow_hw_dr_actions_template_create(struct rte_eth_dev *dev,
+				   struct rte_flow_actions_template *at)
 {
 	struct mlx5dr_action_template *dr_template;
 	enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST };
@@ -5665,8 +5895,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 	enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
 	uint16_t reformat_off = UINT16_MAX;
 	uint16_t mhdr_off = UINT16_MAX;
+	uint16_t recom_off = UINT16_MAX;
 	uint16_t cnt_off = UINT16_MAX;
+	enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST;
 	int ret;
+
 	for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) {
 		const struct rte_flow_action_raw_encap *raw_encap_data;
 		size_t data_size;
@@ -5698,6 +5931,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 			reformat_off = curr_off++;
 			reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type];
 			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+			MLX5_ASSERT(recom_off == UINT16_MAX);
+			recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT;
+			recom_off = curr_off++;
+			break;
+		case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+			MLX5_ASSERT(recom_off == UINT16_MAX);
+			recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT;
+			recom_off = curr_off++;
+			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
 			raw_encap_data = at->actions[i].conf;
 			data_size = raw_encap_data->size;
@@ -5770,11 +6013,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 		at->reformat_off = reformat_off;
 		action_types[reformat_off] = reformat_act_type;
 	}
+	if (recom_off != UINT16_MAX) {
+		at->recom_off = recom_off;
+		action_types[recom_off] = recom_type;
+	}
 	dr_template = mlx5dr_action_template_create(action_types);
-	if (dr_template)
+	if (dr_template) {
 		at->dr_actions_num = curr_off;
-	else
+	} else {
 		DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno);
+		return NULL;
+	}
+	/* Create srh flex parser for remove anchor. */
+	if ((recom_type == MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT ||
+	     recom_type == MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) &&
+	    mlx5_alloc_srh_flex_parser(dev)) {
+		DRV_LOG(ERR, "Failed to create srv6 flex parser");
+		claim_zero(mlx5dr_action_template_destroy(dr_template));
+		return NULL;
+	}
 	return dr_template;
 err_actions_num:
 	DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template",
@@ -6183,7 +6440,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 			break;
 		}
 	}
-	at->tmpl = flow_hw_dr_actions_template_create(at);
+	at->tmpl = flow_hw_dr_actions_template_create(dev, at);
 	if (!at->tmpl)
 		goto error;
 	at->action_flags = action_flags;
@@ -6220,6 +6477,9 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev,
 			   struct rte_flow_actions_template *template,
 			   struct rte_flow_error *error __rte_unused)
 {
+	uint64_t flag = MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE |
+			MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH;
+
 	if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
 		DRV_LOG(WARNING, "Action template %p is still in use.",
 			(void *)template);
@@ -6228,6 +6488,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev,
 				   NULL,
 				   "action template in using");
 	}
+	if (template->action_flags & flag)
+		mlx5_free_srh_flex_parser(dev);
 	LIST_REMOVE(template, next);
 	flow_hw_flex_item_release(dev, &template->flex_item);
 	if (template->tmpl)
@@ -8796,6 +9058,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		mem_size += (sizeof(struct mlx5_hw_q_job *) +
 			    sizeof(struct mlx5_hw_q_job) +
 			    sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
+			    sizeof(uint8_t) * MLX5_PUSH_MAX_LEN +
 			    sizeof(struct mlx5_modification_cmd) *
 			    MLX5_MHDR_MAX_CMD +
 			    sizeof(struct rte_flow_item) *
@@ -8811,7 +9074,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	}
 	for (i = 0; i < nb_q_updated; i++) {
 		char mz_name[RTE_MEMZONE_NAMESIZE];
		uint8_t *encap = NULL, *push = NULL;
 		struct mlx5_modification_cmd *mhdr_cmd = NULL;
 		struct rte_flow_item *items = NULL;
 		struct rte_flow_hw *upd_flow = NULL;
@@ -8831,13 +9094,16 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			&job[_queue_attr[i]->size];
 		encap = (uint8_t *)
 			&mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD];
-		items = (struct rte_flow_item *)
+		push = (uint8_t *)
 			&encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN];
+		items = (struct rte_flow_item *)
+			&push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN];
 		upd_flow = (struct rte_flow_hw *)
 			&items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS];
 		for (j = 0; j < _queue_attr[i]->size; j++) {
 			job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD];
 			job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
+			job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN];
 			job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
 			job[j].upd_flow = &upd_flow[j];
 			priv->hw_q[i].job[j] = &job[j];