From patchwork Mon Jan 30 13:19:55 2023
X-Patchwork-Submitter: Rongwei Liu
X-Patchwork-Id: 122688
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rongwei Liu
To: , , ,
CC: , , Alex Vesker
Subject: [PATCH v3 06/11] net/mlx5/hws: add hws flex item matching support
Date: Mon, 30 Jan 2023 15:19:55 +0200
Message-ID: <20230130132000.1715473-7-rongweil@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230130132000.1715473-1-rongweil@nvidia.com>
References: <3ba49f25-52d0-fe07-02e6-22a71e0fbe13@oktetlabs.ru> <20230130132000.1715473-1-rongweil@nvidia.com>
List-Id: DPDK patches and discussions

Support flex item matching in HWS; the syntax follows SWS exactly.

The flex item must be created in advance and follows the current JSON
mapping logic.

Signed-off-by: Rongwei Liu
Reviewed-by: Alex Vesker
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/hws/mlx5dr_definer.c |  83 ++++++++++++++++++
 drivers/net/mlx5/mlx5.c               |   2 +-
 drivers/net/mlx5/mlx5.h               |   6 ++
 drivers/net/mlx5/mlx5_flow.h          |   1 +
 drivers/net/mlx5/mlx5_flow_dv.c       |   2 +-
 drivers/net/mlx5/mlx5_flow_flex.c     | 116 ++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_flow_hw.c       |  48 ++++++++++-
 7 files changed, 239 insertions(+), 19 deletions(-)
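For reviewers less familiar with the flex item flow, the expected application-side
sequence is: create the flex item first (its sample layout comes from the device
JSON mapping, exactly as in SWS mode), then reference the returned handle from a
FLEX item in a pattern template. The sketch below is only an illustration against
the public rte_flow API and is not taken from this patch; the parser configuration
contents and the pattern/mask bytes are placeholders.

/* Illustrative sketch only. Assumes a port in HWS (template) flow mode and a
 * flex parser configuration built from the JSON mapping (elided here).
 */
#include <rte_flow.h>

static struct rte_flow_pattern_template *
example_flex_pattern_template(uint16_t port_id)
{
	struct rte_flow_error error;
	struct rte_flow_item_flex_conf conf = { 0 };	/* fill samples/links per JSON mapping */
	static const uint8_t spec_bytes[4] = { 0xaa, 0xbb, 0x00, 0x00 };	/* placeholder */
	static const uint8_t mask_bytes[4] = { 0xff, 0xff, 0x00, 0x00 };	/* placeholder */
	const struct rte_flow_pattern_template_attr pt_attr = { .ingress = 1 };
	struct rte_flow_item_flex_handle *handle;
	struct rte_flow_item_flex flex_spec, flex_mask;
	struct rte_flow_item pattern[3];

	/* 1. Create the flex item in advance, same as for SWS. */
	handle = rte_flow_flex_item_create(port_id, &conf, &error);
	if (handle == NULL)
		return NULL;
	/* 2. Reference the handle from a FLEX pattern item; the mask also uses
	 *    struct rte_flow_item_flex, with its pattern bytes acting as the mask.
	 */
	flex_spec = (struct rte_flow_item_flex){
		.handle = handle, .length = sizeof(spec_bytes), .pattern = spec_bytes };
	flex_mask = (struct rte_flow_item_flex){
		.handle = handle, .length = sizeof(mask_bytes), .pattern = mask_bytes };
	pattern[0] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_ETH };
	pattern[1] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_FLEX,
					     .spec = &flex_spec, .mask = &flex_mask };
	pattern[2] = (struct rte_flow_item){ .type = RTE_FLOW_ITEM_TYPE_END };
	/* 3. Template creation below is where this patch acquires the flex item index. */
	return rte_flow_pattern_template_create(port_id, &pt_attr, pattern, &error);
}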
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6b98eb8c96..a6378afb10 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -293,6 +293,43 @@ mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc,
 	DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask);
 }
 
+static void
+mlx5dr_definer_flex_parser_set(struct mlx5dr_definer_fc *fc,
+			       const void *item,
+			       uint8_t *tag, bool is_inner)
+{
+	const struct rte_flow_item_flex *flex = item;
+	uint32_t byte_off, val, idx;
+	int ret;
+
+	val = 0;
+	byte_off = MLX5_BYTE_OFF(definer_hl, flex_parser.flex_parser_0);
+	idx = fc->fname - MLX5DR_DEFINER_FNAME_FLEX_PARSER_0;
+	byte_off -= idx * sizeof(uint32_t);
+	ret = mlx5_flex_get_parser_value_per_byte_off(flex, flex->handle, byte_off,
+						      false, is_inner, &val);
+	if (ret == -1 || !val)
+		return;
+
+	DR_SET(tag, val, fc->byte_off, 0, fc->bit_mask);
+}
+
+static void
+mlx5dr_definer_flex_parser_inner_set(struct mlx5dr_definer_fc *fc,
+				     const void *item,
+				     uint8_t *tag)
+{
+	mlx5dr_definer_flex_parser_set(fc, item, tag, true);
+}
+
+static void
+mlx5dr_definer_flex_parser_outer_set(struct mlx5dr_definer_fc *fc,
+				     const void *item,
+				     uint8_t *tag)
+{
+	mlx5dr_definer_flex_parser_set(fc, item, tag, false);
+}
+
 static void
 mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc,
 			   const void *item_spec,
@@ -1465,6 +1502,47 @@ mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd,
 	return 0;
 }
 
+static int
+mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd,
+				     struct rte_flow_item *item,
+				     int item_idx)
+{
+	uint32_t base_off = MLX5_BYTE_OFF(definer_hl, flex_parser.flex_parser_0);
+	const struct rte_flow_item_flex *v, *m;
+	enum mlx5dr_definer_fname fname;
+	struct mlx5dr_definer_fc *fc;
+	uint32_t i, mask, byte_off;
+	bool is_inner = cd->tunnel;
+	int ret;
+
+	m = item->mask;
+	v = item->spec;
+	mask = 0;
+	for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+		byte_off = base_off - i * sizeof(uint32_t);
+		ret = mlx5_flex_get_parser_value_per_byte_off(m, v->handle, byte_off,
+							      true, is_inner, &mask);
+		if (ret == -1) {
+			rte_errno = EINVAL;
+			return rte_errno;
+		}
+
+		if (!mask)
+			continue;
+
+		fname = MLX5DR_DEFINER_FNAME_FLEX_PARSER_0;
+		fname += (enum mlx5dr_definer_fname)i;
+		fc = &cd->fc[fname];
+		fc->byte_off = byte_off;
+		fc->item_idx = item_idx;
+		fc->tag_set = cd->tunnel ? &mlx5dr_definer_flex_parser_inner_set :
+					   &mlx5dr_definer_flex_parser_outer_set;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		fc->bit_mask = mask;
+	}
+	return 0;
+}
+
 static int
 mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 				struct mlx5dr_match_template *mt,
@@ -1581,6 +1659,11 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 			ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i);
 			item_flags |= MLX5_FLOW_ITEM_METER_COLOR;
 			break;
+		case RTE_FLOW_ITEM_TYPE_FLEX:
+			ret = mlx5dr_definer_conv_item_flex_parser(&cd, items, i);
+			item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
+						  MLX5_FLOW_ITEM_OUTER_FLEX;
+			break;
 		default:
 			DR_LOG(ERR, "Unsupported item type %d", items->type);
 			rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 0b97c4e78d..8297129bd1 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1034,7 +1034,7 @@ static void
 mlx5_flex_parser_ecpri_release(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_ecpri_parser_profile *prf =	&priv->sh->ecpri_parser;
+	struct mlx5_ecpri_parser_profile *prf = &priv->sh->ecpri_parser;
 
 	if (prf->obj)
 		mlx5_devx_cmd_destroy(prf->obj);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 83fb316ad8..2cc3959d01 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2243,6 +2243,12 @@ void mlx5_flex_item_port_cleanup(struct rte_eth_dev *dev);
 void mlx5_flex_flow_translate_item(struct rte_eth_dev *dev, void *matcher,
 				   void *key, const struct rte_flow_item *item,
 				   bool is_inner);
+int mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
+			    uint32_t idx, uint32_t *pos,
+			    bool is_inner, uint32_t *def);
+int mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
+					    void *flex, uint32_t byte_off,
+					    bool is_mask, bool tunnel, uint32_t *value);
 int mlx5_flex_acquire_index(struct rte_eth_dev *dev,
 			    struct rte_flow_item_flex_handle *handle,
 			    bool acquire);
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e376dcae93..c8761c4e5a 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1225,6 +1225,7 @@ struct rte_flow_pattern_template {
 	 * tag pattern item for representor matching.
 	 */
 	bool implicit_tag;
+	uint8_t flex_item; /* flex item index. */
 };
 
 /* Flow action template struct. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 7ca909999b..284f18da11 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10588,7 +10588,7 @@ flow_dv_translate_item_flex(struct rte_eth_dev *dev, void *matcher, void *key,
 		(const struct rte_flow_item_flex *)item->spec;
 	int index = mlx5_flex_acquire_index(dev, spec->handle, false);
 
-	MLX5_ASSERT(index >= 0 && index <= (int)(sizeof(uint32_t) * CHAR_BIT));
+	MLX5_ASSERT(index >= 0 && index < (int)(sizeof(uint32_t) * CHAR_BIT));
 	if (index < 0)
 		return;
 	if (!(dev_flow->handle->flex_item & RTE_BIT32(index))) {
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index 24b7226ee6..aa317fc958 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -198,6 +198,99 @@ mlx5_flex_set_match_sample(void *misc4_m, void *misc4_v,
 	}
 #undef SET_FP_MATCH_SAMPLE_ID
 }
+
+/**
+ * Get the flex parser sample id and corresponding mask
+ * per shift and width information.
+ *
+ * @param[in] tp
+ *   Mlx5 flex item sample mapping handle.
+ * @param[in] idx
+ *   Mapping index.
+ * @param[in, out] pos
+ *   Where to search the value and mask.
+ * @param[in] is_inner
+ *   For inner matching or not.
+ * @param[in, out] def
+ *   Mask generated by mapping shift and width.
+ *
+ * @return
+ *   The sample id on success, -1 to skip a DUMMY placeholder field.
+ */
+int
+mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
+			uint32_t idx, uint32_t *pos,
+			bool is_inner, uint32_t *def)
+{
+	const struct mlx5_flex_pattern_field *map = tp->map + idx;
+	uint32_t id = map->reg_id;
+
+	*def = (RTE_BIT64(map->width) - 1) << map->shift;
+	/* Skip placeholders for DUMMY fields. */
+	if (id == MLX5_INVALID_SAMPLE_REG_ID) {
+		*pos += map->width;
+		return -1;
+	}
+	MLX5_ASSERT(map->width);
+	MLX5_ASSERT(id < tp->devx_fp->num_samples);
+	if (tp->tunnel_mode == FLEX_TUNNEL_MODE_MULTI && is_inner) {
+		uint32_t num_samples = tp->devx_fp->num_samples / 2;
+
+		MLX5_ASSERT(tp->devx_fp->num_samples % 2 == 0);
+		MLX5_ASSERT(id < num_samples);
+		id += num_samples;
+	}
+	return id;
+}
+
+/**
+ * Get the flex parser mapping value per definer format_select_dw.
+ *
+ * @param[in] item
+ *   Rte flex item pointer.
+ * @param[in] flex
+ *   Mlx5 flex item sample mapping handle.
+ * @param[in] byte_off
+ *   Mlx5 flex item format_select_dw.
+ * @param[in] is_mask
+ *   Spec or mask.
+ * @param[in] tunnel
+ *   Tunnel mode or not.
+ * @param[in, out] value
+ *   Value calculated for this flex parser, either spec or mask.
+ *
+ * @return
+ *   0 on success, -1 for error.
+ */
+int
+mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
+					void *flex, uint32_t byte_off,
+					bool is_mask, bool tunnel, uint32_t *value)
+{
+	struct mlx5_flex_pattern_field *map;
+	struct mlx5_flex_item *tp = flex;
+	uint32_t def, i, pos, val;
+	int id;
+
+	*value = 0;
+	for (i = 0, pos = 0; i < tp->mapnum && pos < item->length * CHAR_BIT; i++) {
+		map = tp->map + i;
+		id = mlx5_flex_get_sample_id(tp, i, &pos, tunnel, &def);
+		if (id == -1)
+			continue;
+		if (id >= (int)tp->devx_fp->num_samples || id >= MLX5_GRAPH_NODE_SAMPLE_NUM)
+			return -1;
+		if (byte_off == tp->devx_fp->sample_ids[id].format_select_dw * sizeof(uint32_t)) {
+			val = mlx5_flex_get_bitfield(item, pos, map->width, map->shift);
+			if (is_mask)
+				val &= RTE_BE32(def);
+			*value |= val;
+		}
+		pos += map->width;
+	}
+	return 0;
+}
+
 /**
  * Translate item pattern into matcher fields according to translation
  * array.
@@ -240,26 +333,17 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
 	MLX5_ASSERT(mlx5_flex_index(priv, tp) >= 0);
 	for (i = 0; i < tp->mapnum; i++) {
 		struct mlx5_flex_pattern_field *map = tp->map + i;
-		uint32_t id = map->reg_id;
-		uint32_t def = (RTE_BIT64(map->width) - 1) << map->shift;
-		uint32_t val, msk;
+		uint32_t val, msk, def;
+		int id = mlx5_flex_get_sample_id(tp, i, &pos, is_inner, &def);
 
-		/* Skip placeholders for DUMMY fields. */
-		if (id == MLX5_INVALID_SAMPLE_REG_ID) {
-			pos += map->width;
+		if (id == -1)
 			continue;
-		}
+		MLX5_ASSERT(id < (int)tp->devx_fp->num_samples);
+		if (id >= (int)tp->devx_fp->num_samples ||
+		    id >= MLX5_GRAPH_NODE_SAMPLE_NUM)
+			return;
 		val = mlx5_flex_get_bitfield(spec, pos, map->width, map->shift);
 		msk = mlx5_flex_get_bitfield(mask, pos, map->width, map->shift);
-		MLX5_ASSERT(map->width);
-		MLX5_ASSERT(id < tp->devx_fp->num_samples);
-		if (tp->tunnel_mode == FLEX_TUNNEL_MODE_MULTI && is_inner) {
-			uint32_t num_samples = tp->devx_fp->num_samples / 2;
-
-			MLX5_ASSERT(tp->devx_fp->num_samples % 2 == 0);
-			MLX5_ASSERT(id < num_samples);
-			id += num_samples;
-		}
 		if (attr->ext_sample_id)
 			sample_id = tp->devx_fp->sample_ids[id].sample_id;
 		else
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 9406360a10..c1d4138116 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4335,6 +4335,36 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
 					&modify_action);
 }
 
+static int
+flow_hw_flex_item_acquire(struct rte_eth_dev *dev,
+			  struct rte_flow_item_flex_handle *handle,
+			  uint8_t *flex_item)
+{
+	int index = mlx5_flex_acquire_index(dev, handle, false);
+
+	MLX5_ASSERT(index >= 0 && index < (int)(sizeof(uint32_t) * CHAR_BIT));
+	if (index < 0)
+		return -1;
+	if (!(*flex_item & RTE_BIT32(index))) {
+		/* Don't count same flex item again. */
+		if (mlx5_flex_acquire_index(dev, handle, true) != index)
+			MLX5_ASSERT(false);
+		*flex_item |= (uint8_t)RTE_BIT32(index);
+	}
+	return 0;
+}
+
+static void
+flow_hw_flex_item_release(struct rte_eth_dev *dev, uint8_t *flex_item)
+{
+	while (*flex_item) {
+		int index = rte_bsf32(*flex_item);
+
+		mlx5_flex_release_index(dev, index);
+		*flex_item &= ~(uint8_t)RTE_BIT32(index);
+	}
+}
+
 /**
  * Create flow action template.
  *
@@ -4736,6 +4766,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ITEM_TYPE_ICMP:
 		case RTE_FLOW_ITEM_TYPE_ICMP6:
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
+		case RTE_FLOW_ITEM_TYPE_FLEX:
 			break;
 		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
 			/*
@@ -4813,6 +4844,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		.mask = &tag_m,
 		.last = NULL
 	};
+	unsigned int i = 0;
 
 	if (flow_hw_pattern_validate(dev, attr, items, error))
 		return NULL;
@@ -4872,6 +4904,19 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		it->implicit_tag = true;
 		mlx5_free(copied_items);
 	}
+	for (i = 0; items[i].type != RTE_FLOW_ITEM_TYPE_END; ++i) {
+		if (items[i].type == RTE_FLOW_ITEM_TYPE_FLEX) {
+			const struct rte_flow_item_flex *spec =
+				(const struct rte_flow_item_flex *)items[i].spec;
+			struct rte_flow_item_flex_handle *handle = spec->handle;
+
+			if (flow_hw_flex_item_acquire(dev, handle, &it->flex_item)) {
+				claim_zero(mlx5dr_match_template_destroy(it->mt));
+				mlx5_free(it);
+				return NULL;
+			}
+		}
+	}
 	__atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED);
 	LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next);
 	return it;
@@ -4891,7 +4936,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
+flow_hw_pattern_template_destroy(struct rte_eth_dev *dev,
 			      struct rte_flow_pattern_template *template,
 			      struct rte_flow_error *error __rte_unused)
 {
@@ -4904,6 +4949,7 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
 			   "item template in using");
 	}
 	LIST_REMOVE(template, next);
+	flow_hw_flex_item_release(dev, &template->flex_item);
 	claim_zero(mlx5dr_match_template_destroy(template->mt));
 	mlx5_free(template);
 	return 0;
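
A closing note on the definer mapping used in mlx5dr_definer_conv_item_flex_parser()
above: definer field MLX5DR_DEFINER_FNAME_FLEX_PARSER_0 + i covers the header-layout
byte offset MLX5_BYTE_OFF(definer_hl, flex_parser.flex_parser_0) - i * 4, and a
configured sample contributes to that field only when its format_select_dw * 4 lands
on the same offset. The stand-alone sketch below restates that arithmetic with
simplified placeholder types (ex_*); it is only an illustration, not mlx5 code.

/* Illustration of the sample-to-definer-field selection; types are stand-ins. */
#include <stdint.h>

#define EX_SAMPLE_NUM 8	/* stands in for MLX5_GRAPH_NODE_SAMPLE_NUM */

struct ex_sample {
	uint32_t format_select_dw;	/* dword index reported by the device */
};

/* Return the flex parser field index (0..EX_SAMPLE_NUM - 1) the sample maps to,
 * or -1 when it falls outside the flex_parser_0..N window.
 */
static int
ex_sample_to_definer_field(uint32_t base_byte_off, const struct ex_sample *s)
{
	uint32_t sample_byte_off = s->format_select_dw * sizeof(uint32_t);
	uint32_t i;

	for (i = 0; i < EX_SAMPLE_NUM; i++)
		if (base_byte_off - i * sizeof(uint32_t) == sample_byte_off)
			return (int)i;	/* i.e. FLEX_PARSER_0 + i in the real code */
	return -1;
}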