From patchwork Tue Feb 14 12:57:09 2023
X-Patchwork-Submitter: rongwei liu
X-Patchwork-Id: 123891
X-Patchwork-Delegate: rasland@nvidia.com
From: Rongwei Liu
Cc: Alex Vesker
Subject: [PATCH v3 3/5] net/mlx5/hws: add IPv6 routing extension matching support
Date: Tue, 14 Feb 2023 14:57:09 +0200
Message-ID: <20230214125711.3791966-4-rongweil@nvidia.com>
In-Reply-To: <20230214125711.3791966-1-rongweil@nvidia.com>
References: <20230213113747.3677487-2-rongweil@nvidia.com>
 <20230214125711.3791966-1-rongweil@nvidia.com>
List-Id: DPDK patches and discussions

Add mlx5 HWS logic to match the IPv6 routing extension header.

When IPv6 routing extension items are detected in the pattern template
create callback, the PMD allocates a flex parser to sample the first
dword of the SRv6 header.

Only next_hdr/segments_left/type are supported for now.

Signed-off-by: Rongwei Liu
Reviewed-by: Alex Vesker
Acked-by: Viacheslav Ovsiienko
---
 drivers/common/mlx5/mlx5_devx_cmds.c  |  7 +-
 drivers/net/mlx5/hws/mlx5dr_definer.c | 91 ++++++++++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_definer.h | 15 +++++
 drivers/net/mlx5/mlx5.c               | 92 ++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5.h               | 16 +++++
 drivers/net/mlx5/mlx5_flow.h          | 28 ++++++++
 drivers/net/mlx5/mlx5_flow_hw.c       | 29 +++++++--
 7 files changed, 268 insertions(+), 10 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 1f65ea7dcb..22a94c1e1a 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -607,7 +607,7 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
 
 int
 mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
-				  struct mlx5_ext_sample_id ids[],
+				  struct mlx5_ext_sample_id *ids,
 				  uint32_t num, uint8_t *anchor)
 {
 	uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
@@ -637,8 +637,9 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
 			(void *)flex_obj);
 		return -rte_errno;
 	}
-	*anchor = MLX5_GET(parse_graph_flex, flex, head_anchor_id);
-	for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+	if (anchor)
+		*anchor = MLX5_GET(parse_graph_flex, flex, head_anchor_id);
+	for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM && idx <= num; i++) {
 		void *s_off = (void *)((char *)sample + i *
			MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
 		uint32_t en;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index dea460137d..ce7cf0504d 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -125,6 +125,7 @@ struct mlx5dr_definer_conv_data {
 	X(SET_BE16,	ipv4_len,		v->total_length,	rte_ipv4_hdr) \
 	X(SET_BE16,	ipv6_payload_len,	v->hdr.payload_len,	rte_flow_item_ipv6) \
 	X(SET,		ipv6_proto,		v->hdr.proto,		rte_flow_item_ipv6) \
+	X(SET,		ipv6_routing_hdr,	IPPROTO_ROUTING,	rte_flow_item_ipv6) \
 	X(SET,		ipv6_hop_limits,	v->hdr.hop_limits,	rte_flow_item_ipv6) \
 	X(SET_BE32P,	ipv6_src_addr_127_96,	&v->hdr.src_addr[0],	rte_flow_item_ipv6) \
 	X(SET_BE32P,	ipv6_src_addr_95_64,	&v->hdr.src_addr[4],	rte_flow_item_ipv6) \
@@ -293,6 +294,21 @@ mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc,
 	DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask);
 }
 
+static void
+mlx5dr_definer_ipv6_routing_ext_set(struct mlx5dr_definer_fc *fc,
+				    const void *item,
+				    uint8_t *tag)
+{
+	const struct rte_flow_item_ipv6_routing_ext *v = item;
+	uint32_t val;
+
+	val = v->hdr.next_hdr << __mlx5_dw_bit_off(header_ipv6_routing_ext, next_hdr);
+	val |= v->hdr.type << __mlx5_dw_bit_off(header_ipv6_routing_ext, type);
+	val |= v->hdr.segments_left <<
+	       __mlx5_dw_bit_off(header_ipv6_routing_ext, segments_left);
+	DR_SET(tag, val, fc->byte_off, 0, fc->bit_mask);
+}
+
 static void
 mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc,
 			   const void *item_spec,
@@ -1605,6 +1621,76 @@ mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd,
 	return 0;
 }
 
+static struct mlx5dr_definer_fc *
+mlx5dr_definer_get_flex_parser_fc(struct mlx5dr_definer_conv_data *cd, uint32_t byte_off)
+{
+	uint32_t byte_off_fp7 = MLX5_BYTE_OFF(definer_hl, flex_parser.flex_parser_7);
+	uint32_t byte_off_fp0 = MLX5_BYTE_OFF(definer_hl, flex_parser.flex_parser_0);
+	enum mlx5dr_definer_fname fname =
		MLX5DR_DEFINER_FNAME_FLEX_PARSER_0;
+	struct mlx5dr_definer_fc *fc;
+	uint32_t idx;
+
+	if (byte_off < byte_off_fp7 || byte_off > byte_off_fp0) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	idx = (byte_off_fp0 - byte_off) / (sizeof(uint32_t));
+	fname += (enum mlx5dr_definer_fname)idx;
+	fc = &cd->fc[fname];
+	fc->byte_off = byte_off;
+	fc->bit_mask = UINT32_MAX;
+	return fc;
+}
+
+static int
+mlx5dr_definer_conv_item_ipv6_routing_ext(struct mlx5dr_definer_conv_data *cd,
+					  struct rte_flow_item *item,
+					  int item_idx)
+{
+	const struct rte_flow_item_ipv6_routing_ext *m = item->mask;
+	struct mlx5dr_definer_fc *fc;
+	bool inner = cd->tunnel;
+	uint32_t byte_off;
+
+	if (!cd->relaxed) {
+		fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)];
+		fc->item_idx = item_idx;
+		fc->tag_set = &mlx5dr_definer_ipv6_version_set;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		DR_CALC_SET(fc, eth_l2, l3_type, inner);
+
+		/* Overwrite - Unset ethertype if present */
+		memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc));
+
+		fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)];
+		if (!fc->tag_set) {
+			fc->item_idx = item_idx;
+			fc->tag_set = &mlx5dr_definer_ipv6_routing_hdr_set;
+			fc->tag_mask_set = &mlx5dr_definer_ones_set;
+			DR_CALC_SET(fc, eth_l3, protocol_next_header, inner);
+		}
+	}
+
+	if (!m)
+		return 0;
+
+	if (m->hdr.hdr_len || m->hdr.flags) {
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	if (m->hdr.next_hdr || m->hdr.type || m->hdr.segments_left) {
+		byte_off = flow_hw_get_srh_flex_parser_byte_off_from_ctx(cd->ctx);
+		fc = mlx5dr_definer_get_flex_parser_fc(cd, byte_off);
+		if (!fc)
+			return rte_errno;
+
+		fc->item_idx = item_idx;
+		fc->tag_set = &mlx5dr_definer_ipv6_routing_ext_set;
+	}
+	return 0;
+}
+
 static int
 mlx5dr_definer_mt_set_fc(struct mlx5dr_match_template *mt,
 			 struct mlx5dr_definer_fc *fc,
@@ -1786,6 +1872,11 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 			ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i);
 			item_flags |=
				MLX5_FLOW_ITEM_METER_COLOR;
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
+			ret = mlx5dr_definer_conv_item_ipv6_routing_ext(&cd, items, i);
+			item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
+						  MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT;
+			break;
 		default:
 			DR_LOG(ERR, "Unsupported item type %d", items->type);
 			rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index 464872acd6..7420971f4a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -519,6 +519,21 @@ struct mlx5_ifc_header_ipv6_vtc_bits {
 	u8 flow_label[0x14];
 };
 
+struct mlx5_ifc_header_ipv6_routing_ext_bits {
+	u8 next_hdr[0x8];
+	u8 hdr_len[0x8];
+	u8 type[0x8];
+	u8 segments_left[0x8];
+	union {
+		u8 flags[0x20];
+		struct {
+			u8 last_entry[0x8];
+			u8 flag[0x8];
+			u8 tag[0x10];
+		};
+	};
+};
+
 struct mlx5_ifc_header_vxlan_bits {
 	u8 flags[0x8];
 	u8 reserved1[0x18];
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 0b97c4e78d..94fd5a91e3 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -970,7 +970,6 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
 		.modify_field_select = 0,
 	};
 	struct mlx5_ext_sample_id ids[8];
-	uint8_t anchor_id;
 	int ret;
 
 	if (!priv->sh->cdev->config.hca_attr.parse_graph_flex_node) {
@@ -1006,7 +1005,7 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
 		return (rte_errno == 0) ? -ENODEV : -rte_errno;
 	}
 	prf->num = 2;
-	ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num, &anchor_id);
+	ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num, NULL);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to query sample IDs.");
 		return (rte_errno == 0) ? -ENODEV : -rte_errno;
@@ -1041,6 +1040,95 @@ mlx5_flex_parser_ecpri_release(struct rte_eth_dev *dev)
 	prf->obj = NULL;
 }
 
+/*
+ * Allocation of a flex parser for srh. Once refcnt is zero, the resources held
+ * by this parser will be freed.
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
+{
+	struct mlx5_devx_graph_node_attr node = {
+		.modify_field_select = 0,
+	};
+	struct mlx5_ext_sample_id ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_common_dev_config *config = &priv->sh->cdev->config;
+	void *ibv_ctx = priv->sh->cdev->ctx;
+	int ret;
+
+	memset(ids, 0xff, sizeof(ids));
+	if (!config->hca_attr.parse_graph_flex_node) {
+		DRV_LOG(ERR, "Dynamic flex parser is not supported");
+		return -ENOTSUP;
+	}
+	if (__atomic_add_fetch(&priv->sh->srh_flex_parser.refcnt, 1, __ATOMIC_RELAXED) > 1)
+		return 0;
+
+	node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
+	/* Srv6 first two DW are not counted in. */
+	node.header_length_base_value = 0x8;
+	/* The unit is uint64_t. */
+	node.header_length_field_shift = 0x3;
+	/* Header length is the 2nd byte. */
+	node.header_length_field_offset = 0x8;
+	node.header_length_field_mask = 0xF;
+	/* One byte next header protocol. */
+	node.next_header_field_size = 0x8;
+	node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP;
+	node.in[0].compare_condition_value = IPPROTO_ROUTING;
+	node.sample[0].flow_match_sample_en = 1;
+	/* First come first serve no matter inner or outer.
+	 */
+	node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
+	node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP;
+	node.out[0].compare_condition_value = IPPROTO_TCP;
+	node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP;
+	node.out[1].compare_condition_value = IPPROTO_UDP;
+	node.out[2].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IPV6;
+	node.out[2].compare_condition_value = IPPROTO_IPV6;
+	priv->sh->srh_flex_parser.fp = mlx5_devx_cmd_create_flex_parser(ibv_ctx, &node);
+	if (!priv->sh->srh_flex_parser.fp) {
+		DRV_LOG(ERR, "Failed to create flex parser node object.");
+		return (rte_errno == 0) ? -ENODEV : -rte_errno;
+	}
+	priv->sh->srh_flex_parser.num = 1;
+	ret = mlx5_devx_cmd_query_parse_samples(priv->sh->srh_flex_parser.fp, ids,
+						priv->sh->srh_flex_parser.num,
+						&priv->sh->srh_flex_parser.anchor_id);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to query sample IDs.");
+		return (rte_errno == 0) ? -ENODEV : -rte_errno;
+	}
+	priv->sh->srh_flex_parser.offset[0] = 0x0;
+	priv->sh->srh_flex_parser.ids[0].id = ids[0].id;
+	return 0;
+}
+
+/*
+ * Destroy the flex parser node, including the parser itself, input / output
+ * arcs and DW samples. Resources could be reused then.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure
+ */
+void
+mlx5_free_srh_flex_parser(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_internal_flex_parser_profile *fp = &priv->sh->srh_flex_parser;
+
+	if (__atomic_sub_fetch(&fp->refcnt, 1, __ATOMIC_RELAXED))
+		return;
+	if (fp->fp)
+		mlx5_devx_cmd_destroy(fp->fp);
+	fp->fp = NULL;
+	fp->num = 0;
+}
+
 uint32_t
 mlx5_get_supported_sw_parsing_offloads(const struct mlx5_hca_attr *attr)
 {
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 83fb316ad8..bea1f62ea8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -543,6 +543,17 @@ struct mlx5_counter_stats_raw {
 	volatile struct flow_counter_stats *data;
 };
 
+/* Mlx5 internal flex parser profile structure. */
+struct mlx5_internal_flex_parser_profile {
+	uint32_t num; /* Actual number of samples. */
+	/* Sample IDs for this profile. */
+	struct mlx5_ext_sample_id ids[MLX5_FLEX_ITEM_MAPPING_NUM];
+	uint32_t offset[MLX5_FLEX_ITEM_MAPPING_NUM]; /* Each ID sample offset. */
+	uint8_t anchor_id;
+	uint32_t refcnt;
+	void *fp; /* DevX flex parser object. */
+};
+
 TAILQ_HEAD(mlx5_counter_pools, mlx5_flow_counter_pool);
 
 /* Counter global management structure. */
@@ -1436,6 +1447,7 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_uar rx_uar; /* DevX UAR for Rx. */
 	struct mlx5_proc_priv *pppriv; /* Pointer to primary private process. */
 	struct mlx5_ecpri_parser_profile ecpri_parser;
+	struct mlx5_internal_flex_parser_profile srh_flex_parser; /* srh flex parser structure. */
 	/* Flex parser profiles information. */
 	LIST_HEAD(shared_rxqs, mlx5_rxq_ctrl) shared_rxqs; /* Shared RXQs.
	 */
 	struct mlx5_aso_age_mng *aso_age_mng;
@@ -2258,4 +2270,8 @@
 struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx, void *ctx);
 void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
 				    struct mlx5_list_entry *entry);
+
+int mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev);
+
+void mlx5_free_srh_flex_parser(struct rte_eth_dev *dev);
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 86311b0b08..4bef2296b8 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -219,6 +219,10 @@ enum mlx5_feature_name {
 /* Meter color item */
 #define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44)
 
+/* IPv6 routing extension item */
+#define MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT (UINT64_C(1) << 45)
+#define MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT (UINT64_C(1) << 46)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -2615,4 +2619,28 @@ int mlx5_flow_item_field_width(struct rte_eth_dev *dev,
 			   enum rte_flow_field_id field, int inherit,
 			   const struct rte_flow_attr *attr,
 			   struct rte_flow_error *error);
+
+static __rte_always_inline int
+flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	uint16_t port;
+
+	MLX5_ETH_FOREACH_DEV(port, NULL) {
+		struct mlx5_priv *priv;
+		struct mlx5_hca_flex_attr *attr;
+
+		priv = rte_eth_devices[port].data->dev_private;
+		attr = &priv->sh->cdev->config.hca_attr.flex;
+		if (priv->dr_ctx == dr_ctx && attr->ext_sample_id) {
+			if (priv->sh->srh_flex_parser.num)
+				return priv->sh->srh_flex_parser.ids[0].format_select_dw *
+				       sizeof(uint32_t);
+			else
+				return UINT32_MAX;
+		}
+	}
+#endif
+	return UINT32_MAX;
+}
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 798bcca710..6799b8a89f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -213,17 +213,17
@@ flow_hw_hashfields_set(struct mlx5_flow_rss_desc *rss_desc,
 }
 
 /**
- * Generate the pattern item flags.
- * Will be used for shared RSS action.
+ * Generate the matching pattern item flags.
  *
  * @param[in] items
  *   Pointer to the list of items.
  *
  * @return
- *   Item flags.
+ *   Matching item flags. RSS hash field function
+ *   silently ignores the flags which are unsupported.
  */
 static uint64_t
-flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
+flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
 {
 	uint64_t item_flags = 0;
 	uint64_t last_item = 0;
@@ -249,6 +249,10 @@ flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
 					     MLX5_FLOW_LAYER_OUTER_L4_UDP;
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
+			last_item = tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
+					     MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT;
+			break;
 		case RTE_FLOW_ITEM_TYPE_GRE:
 			last_item = MLX5_FLOW_LAYER_GRE;
 			break;
@@ -4738,6 +4742,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST:
 		case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REPLY:
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
+		case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
 			break;
 		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
 			/*
@@ -4866,7 +4871,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 					   "cannot create match template");
 		return NULL;
 	}
-	it->item_flags = flow_hw_rss_item_flags_get(tmpl_items);
+	it->item_flags = flow_hw_matching_item_flags_get(tmpl_items);
 	if (copied_items) {
 		if (attr->ingress)
 			it->implicit_port = true;
@@ -4874,6 +4879,17 @@
 			it->implicit_tag = true;
 		mlx5_free(copied_items);
 	}
+	/* Either inner or outer, can't both.
+	 */
+	if (it->item_flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
+			      MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT)) {
+		if (((it->item_flags & MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT) &&
+		     (it->item_flags & MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT)) ||
+		    (mlx5_alloc_srh_flex_parser(dev))) {
+			claim_zero(mlx5dr_match_template_destroy(it->mt));
+			mlx5_free(it);
+			return NULL;
+		}
+	}
 	__atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED);
 	LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next);
 	return it;
@@ -4905,6 +4921,9 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
 					  NULL,
 					  "item template in using");
 	}
+	if (template->item_flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
+				    MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT))
+		mlx5_free_srh_flex_parser(dev);
 	LIST_REMOVE(template, next);
 	claim_zero(mlx5dr_match_template_destroy(template->mt));
 	mlx5_free(template);