From patchwork Thu Sep 22 19:03:40 2022
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 116678
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
To: Matan Azrad
Subject: [v1 15/19] net/mlx5/hws: Add HWS matcher object
Date: Thu, 22 Sep 2022 22:03:40 +0300
Message-ID: <20220922190345.394-16-valex@nvidia.com>
In-Reply-To: <20220922190345.394-1-valex@nvidia.com>
References: <20220922190345.394-1-valex@nvidia.com>
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

The HWS matcher resides under the table object; each table can have
multiple chained matchers with different attributes. Each matcher
represents a combination of match and action templates, and can hold
multiple configurations based on those templates. Packets are steered
from the table to the matcher, and from there to other objects. The
matcher allows efficient HW packet field matching and action execution
based on its configuration.

Signed-off-by: Alex Vesker
---
 drivers/net/mlx5/hws/mlx5dr_matcher.c | 920 ++++++++++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_matcher.h |  76 +++
 2 files changed, 996 insertions(+)
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h

diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
new file mode 100644
index 0000000000..f9c8248ef3
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -0,0 +1,920 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 NVIDIA Corporation &
Affiliates + */ + +#include "mlx5dr_internal.h" + +static bool mlx5dr_matcher_requires_col_tbl(uint8_t log_num_of_rules) +{ + /* Collision table concatenation is done only for large rule tables */ + return log_num_of_rules > MLX5DR_MATCHER_ASSURED_RULES_TH; +} + +static uint8_t mlx5dr_matcher_rules_to_tbl_depth(uint8_t log_num_of_rules) +{ + if (mlx5dr_matcher_requires_col_tbl(log_num_of_rules)) + return MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH; + + /* For small rule tables we use a single deep table to assure insertion */ + return RTE_MIN(log_num_of_rules, MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH); +} + +static int mlx5dr_matcher_create_end_ft(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + matcher->end_ft = mlx5dr_table_create_default_ft(tbl); + if (!matcher->end_ft) { + DR_LOG(ERR, "Failed to create matcher end flow table"); + return rte_errno; + } + return 0; +} + +static void mlx5dr_matcher_destroy_end_ft(struct mlx5dr_matcher *matcher) +{ + mlx5dr_table_destroy_default_ft(matcher->tbl, matcher->end_ft); +} + +static int mlx5dr_matcher_connect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *prev = NULL; + struct mlx5dr_matcher *next = NULL; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *ft; + int ret; + + /* Connect lists */ + if (LIST_EMPTY(&tbl->head)) { + LIST_INSERT_HEAD(&tbl->head, matcher, next); + goto connect; + } + + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher->attr.priority > matcher->attr.priority) { + next = tmp_matcher; + break; + } + prev = tmp_matcher; + } + + if (next) + LIST_INSERT_BEFORE(next, matcher, next); + else + LIST_INSERT_AFTER(prev, matcher, next); + +connect: + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = tbl->fw_ft_type; + + /* Connect to next */ + if (next) { + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + 
if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(matcher->end_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to next RTC"); + goto remove_from_list; + } + } + + /* Connect to previous */ + ft = prev ? prev->end_ft : tbl->ft; + + if (matcher->match_ste.rtc_0) + ft_attr.rtc_id_0 = matcher->match_ste.rtc_0->id; + if (matcher->match_ste.rtc_1) + ft_attr.rtc_id_1 = matcher->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to previous FT"); + goto remove_from_list; + } + + return 0; + +remove_from_list: + LIST_REMOVE(matcher, next); + return ret; +} + +static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *prev_ft; + struct mlx5dr_matcher *next; + int ret; + + prev_ft = matcher->tbl->ft; + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher == matcher) + break; + + prev_ft = tmp_matcher->end_ft; + } + + next = matcher->next.le_next; + + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = matcher->tbl->fw_ft_type; + + /* Connect previous end FT to next RTC if exists */ + if (next) { + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + } else { /* last matcher is removed, point prev to the default miss */ + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + } + + ret = mlx5dr_cmd_flow_table_modify(prev_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to disconnect matcher"); + return ret; + } + + LIST_REMOVE(matcher, next); + + return 0; +} + +static void mlx5dr_matcher_set_rtc_attr_sz(struct mlx5dr_matcher *matcher, + struct mlx5dr_cmd_rtc_create_attr 
*rtc_attr, + bool is_match_rtc, + bool is_mirror) +{ + struct mlx5dr_pool_chunk *ste = &matcher->action_ste.ste; + + if ((matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT && !is_mirror) || + (matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE && is_mirror)) { + /* Optimize FDB RTC */ + rtc_attr->log_size = 0; + rtc_attr->log_depth = 0; + } else { + /* Keep original values */ + rtc_attr->log_size = is_match_rtc ? matcher->attr.table.sz_row_log : ste->order; + rtc_attr->log_depth = is_match_rtc ? matcher->attr.table.sz_col_log : 0; + } +} + +static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + const char *rtc_type_str = is_match_rtc ? "match" : "action"; + struct mlx5dr_cmd_rtc_create_attr rtc_attr = {0}; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj **rtc_0, **rtc_1; + struct mlx5dr_pool *ste_pool, *stc_pool; + struct mlx5dr_devx_obj *devx_obj; + struct mlx5dr_pool_chunk *ste; + int ret; + + if (is_match_rtc) { + rtc_0 = &matcher->match_ste.rtc_0; + rtc_1 = &matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + ste->order = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = matcher->attr.table.sz_row_log; + rtc_attr.log_depth = matcher->attr.table.sz_col_log; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; + /* The first match template is used since all share the same definer */ + rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt[0]->definer); + rtc_attr.is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + rtc_attr.miss_ft_id = matcher->end_ft->id; + /* Match pool requires implicit allocation */ + ret = mlx5dr_pool_chunk_alloc(ste_pool, ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for %s RTC", rtc_type_str); + return ret; + } + } else { + rtc_0 = 
&matcher->action_ste.rtc_0; + rtc_1 = &matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + ste->order = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = ste->order; + rtc_attr.log_depth = 0; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; + /* The action STEs use the default always hit definer */ + rtc_attr.definer_id = ctx->caps->trivial_match_definer; + rtc_attr.is_jumbo = false; + rtc_attr.miss_ft_id = 0; + } + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + + rtc_attr.pd = ctx->pd_num; + rtc_attr.ste_base = devx_obj->id; + rtc_attr.ste_offset = ste->offset; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, false); + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, false); + + /* STC is a single resource (devx_obj), use any STC for the ID */ + stc_pool = ctx->stc_pool[tbl->type]; + default_stc = ctx->common_res[tbl->type].default_stc; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + + *rtc_0 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_0) { + DR_LOG(ERR, "Failed to create matcher %s RTC", rtc_type_str); + goto free_ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + rtc_attr.ste_base = devx_obj->id; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, true); + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, true); + + *rtc_1 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_1) { + DR_LOG(ERR, "Failed to create peer matcher %s RTC0", rtc_type_str); + goto destroy_rtc_0; + } + } + + return 0; + +destroy_rtc_0: + mlx5dr_cmd_destroy_obj(*rtc_0); 
+free_ste:
+	if (is_match_rtc)
+		mlx5dr_pool_chunk_free(ste_pool, ste);
+	return rte_errno;
+}
+
+static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher,
+				       bool is_match_rtc)
+{
+	struct mlx5dr_table *tbl = matcher->tbl;
+	struct mlx5dr_devx_obj *rtc_0, *rtc_1;
+	struct mlx5dr_pool_chunk *ste;
+	struct mlx5dr_pool *ste_pool;
+
+	if (is_match_rtc) {
+		rtc_0 = matcher->match_ste.rtc_0;
+		rtc_1 = matcher->match_ste.rtc_1;
+		ste_pool = matcher->match_ste.pool;
+		ste = &matcher->match_ste.ste;
+	} else {
+		rtc_0 = matcher->action_ste.rtc_0;
+		rtc_1 = matcher->action_ste.rtc_1;
+		ste_pool = matcher->action_ste.pool;
+		ste = &matcher->action_ste.ste;
+	}
+
+	if (tbl->type == MLX5DR_TABLE_TYPE_FDB)
+		mlx5dr_cmd_destroy_obj(rtc_1);
+
+	mlx5dr_cmd_destroy_obj(rtc_0);
+	if (is_match_rtc)
+		mlx5dr_pool_chunk_free(ste_pool, ste);
+}
+
+static void mlx5dr_matcher_set_pool_attr(struct mlx5dr_pool_attr *attr,
+					 struct mlx5dr_matcher *matcher)
+{
+	switch (matcher->attr.optimize_flow_src) {
+	case MLX5DR_MATCHER_FLOW_SRC_VPORT:
+		attr->opt_type = MLX5DR_POOL_OPTIMIZE_ORIG;
+		break;
+	case MLX5DR_MATCHER_FLOW_SRC_WIRE:
+		attr->opt_type = MLX5DR_POOL_OPTIMIZE_MIRROR;
+		break;
+	default:
+		break;
+	}
+}
+
+static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher)
+{
+	bool is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer);
+	struct mlx5dr_cmd_stc_modify_attr stc_attr = {0};
+	struct mlx5dr_table *tbl = matcher->tbl;
+	struct mlx5dr_pool_attr pool_attr = {0};
+	struct mlx5dr_context *ctx = tbl->ctx;
+	uint32_t required_stes;
+	int i, ret;
+	bool valid;
+
+	for (i = 0; i < matcher->num_of_at; i++) {
+		struct mlx5dr_action_template *at = matcher->at[i];
+
+		/* Check if action combination is valid */
+		valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type);
+		if (!valid) {
+			DR_LOG(ERR, "Invalid combination in action template %d", i);
+			return rte_errno;
+		}
+
+		/* Process action template to setters */
+		ret = mlx5dr_action_template_process(at);
+		if (ret) {
+			DR_LOG(ERR, "Failed to process action template %d", i);
+			return rte_errno;
+		}
+
+		required_stes = at->num_of_action_stes - (!is_jumbo || at->only_term);
+		matcher->action_ste.max_stes = RTE_MAX(matcher->action_ste.max_stes, required_stes);
+
+		/* Future: Optimize reparse */
+	}
+
+	/* There are no additional STEs required for matcher */
+	if (!matcher->action_ste.max_stes)
+		return 0;
+
+	/* Allocate action STE mempool */
+	pool_attr.table_type = tbl->type;
+	pool_attr.pool_type = MLX5DR_POOL_TYPE_STE;
+	pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL;
+	pool_attr.alloc_log_sz = rte_log2_u32(matcher->action_ste.max_stes) +
+				 matcher->attr.table.sz_row_log;
+	mlx5dr_matcher_set_pool_attr(&pool_attr, matcher);
+	matcher->action_ste.pool = mlx5dr_pool_create(ctx, &pool_attr);
+	if (!matcher->action_ste.pool) {
+		DR_LOG(ERR, "Failed to create action ste pool");
+		return rte_errno;
+	}
+
+	/* Allocate action RTC */
+	ret = mlx5dr_matcher_create_rtc(matcher, false);
+	if (ret) {
+		DR_LOG(ERR, "Failed to create action RTC");
+		goto free_ste_pool;
+	}
+
+	/* Allocate STC for jumps to STE */
+	stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT;
+	stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE;
+	stc_attr.ste_table.ste = matcher->action_ste.ste;
+	stc_attr.ste_table.ste_pool = matcher->action_ste.pool;
+	stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer;
+
+	ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl->type,
+					     &matcher->action_ste.stc);
+	if (ret) {
+		DR_LOG(ERR, "Failed to create action jump to table STC");
+		goto free_rtc;
+	}
+
+	return 0;
+
+free_rtc:
+	mlx5dr_matcher_destroy_rtc(matcher, false);
+free_ste_pool:
+	mlx5dr_pool_destroy(matcher->action_ste.pool);
+	return rte_errno;
+}
+
+static void mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher)
+{
+	struct mlx5dr_table *tbl = matcher->tbl;
+
+	if (!matcher->action_ste.max_stes)
+		return;
+
mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc); + mlx5dr_matcher_destroy_rtc(matcher, false); + mlx5dr_pool_destroy(matcher->action_ste.pool); +} + +static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_pool_attr pool_attr = {0}; + int i, created = 0; + int ret = -1; + + for (i = 0; i < matcher->num_of_mt; i++) { + /* Get a definer for each match template */ + ret = mlx5dr_definer_get(ctx, matcher->mt[i]); + if (ret) + goto definer_put; + + created++; + + /* Verify all templates produce the same definer */ + if (i == 0) + continue; + + ret = mlx5dr_definer_compare(matcher->mt[i]->definer, + matcher->mt[i-1]->definer); + if (ret) { + DR_LOG(ERR, "Match templates cannot be used on the same matcher"); + rte_errno = ENOTSUP; + goto definer_put; + } + } + + /* Create an STE pool per matcher*/ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL; + pool_attr.alloc_log_sz = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + pool_attr.table_type = matcher->tbl->type; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + + matcher->match_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->match_ste.pool) { + DR_LOG(ERR, "Failed to allocate matcher STE pool"); + goto definer_put; + } + + return 0; + +definer_put: + while (created--) + mlx5dr_definer_put(matcher->mt[created]); + + return ret; +} + +static void mlx5dr_matcher_unbind_mt(struct mlx5dr_matcher *matcher) +{ + int i; + + for (i = 0; i < matcher->num_of_mt; i++) + mlx5dr_definer_put(matcher->mt[i]); + + mlx5dr_pool_destroy(matcher->match_ste.pool); +} + +static int +mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps, + struct mlx5dr_matcher *matcher, + bool is_root) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + + if (matcher->tbl->type != MLX5DR_TABLE_TYPE_FDB && attr->optimize_flow_src) { + 
DR_LOG(ERR, "NIC domain doesn't support flow_src"); + goto not_supported; + } + + if (is_root) { + if (attr->mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) { + DR_LOG(ERR, "Root matcher supports only rule resource mode"); + goto not_supported; + } + if (attr->optimize_flow_src) { + DR_LOG(ERR, "Root matcher can't specify FDB direction"); + goto not_supported; + } + return 0; + } + + /* Convert number of rules to the required depth */ + if (attr->mode == MLX5DR_MATCHER_RESOURCE_MODE_RULE) + attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log); + + if (attr->table.sz_col_log > caps->rtc_log_depth_max) { + DR_LOG(ERR, "Matcher depth exceeds limit %d", caps->rtc_log_depth_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log > caps->ste_alloc_log_max) { + DR_LOG(ERR, "Total matcher size exceeds limit %d", caps->ste_alloc_log_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log < caps->ste_alloc_log_gran) { + DR_LOG(ERR, "Total matcher size below limit %d", caps->ste_alloc_log_gran); + goto not_supported; + } + + return 0; + +not_supported: + rte_errno = EOPNOTSUPP; + return rte_errno; +} + +static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher) +{ + int ret; + + /* Select and create the definers for current matcher */ + ret = mlx5dr_matcher_bind_mt(matcher); + if (ret) + return ret; + + /* Calculate and verify action combination */ + ret = mlx5dr_matcher_bind_at(matcher); + if (ret) + goto unbind_mt; + + /* Create matcher end flow table anchor */ + ret = mlx5dr_matcher_create_end_ft(matcher); + if (ret) + goto unbind_at; + + /* Allocate the RTC for the new matcher */ + ret = mlx5dr_matcher_create_rtc(matcher, true); + if (ret) + goto destroy_end_ft; + + /* Connect the matcher to the matcher list */ + ret = mlx5dr_matcher_connect(matcher); + if (ret) + goto destroy_rtc; + + return 0; + +destroy_rtc: + mlx5dr_matcher_destroy_rtc(matcher, true); 
+destroy_end_ft: + mlx5dr_matcher_destroy_end_ft(matcher); +unbind_at: + mlx5dr_matcher_unbind_at(matcher); +unbind_mt: + mlx5dr_matcher_unbind_mt(matcher); + return ret; +} + +static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher) +{ + mlx5dr_matcher_disconnect(matcher); + mlx5dr_matcher_destroy_rtc(matcher, true); + mlx5dr_matcher_destroy_end_ft(matcher); + mlx5dr_matcher_unbind_at(matcher); + mlx5dr_matcher_unbind_mt(matcher); +} + +static int +mlx5dr_matcher_create_col_matcher(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_matcher *col_matcher; + int ret; + + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return 0; + + if (!mlx5dr_matcher_requires_col_tbl(matcher->attr.rule.num_log)) + return 0; + + col_matcher = simple_calloc(1, sizeof(*matcher)); + if (!col_matcher) { + rte_errno = ENOMEM; + return rte_errno; + } + + col_matcher->tbl = matcher->tbl; + col_matcher->num_of_mt = matcher->num_of_mt; + memcpy(col_matcher->mt, matcher->mt, matcher->num_of_mt * sizeof(*matcher->mt)); + col_matcher->num_of_at = matcher->num_of_at; + memcpy(col_matcher->at, matcher->at, matcher->num_of_at * sizeof(*matcher->at)); + + col_matcher->attr.priority = matcher->attr.priority; + col_matcher->attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_HTABLE; + col_matcher->attr.optimize_flow_src = matcher->attr.optimize_flow_src; + col_matcher->attr.table.sz_row_log = matcher->attr.rule.num_log; + col_matcher->attr.table.sz_col_log = MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH; + if (col_matcher->attr.table.sz_row_log > MLX5DR_MATCHER_ASSURED_ROW_RATIO) + col_matcher->attr.table.sz_row_log -= MLX5DR_MATCHER_ASSURED_ROW_RATIO; + + ret = mlx5dr_matcher_process_attr(ctx->caps, col_matcher, false); + if (ret) + goto free_col_matcher; + + ret = mlx5dr_matcher_create_and_connect(col_matcher); + if (ret) + goto free_col_matcher; + + matcher->col_matcher = col_matcher; + + return 0; + +free_col_matcher: + 
+	simple_free(col_matcher);
+	DR_LOG(ERR, "Failed to create assured collision matcher");
+	return ret;
+}
+
+static void
+mlx5dr_matcher_destroy_col_matcher(struct mlx5dr_matcher *matcher)
+{
+	if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE)
+		return;
+
+	if (matcher->col_matcher) {
+		mlx5dr_matcher_destroy_and_disconnect(matcher->col_matcher);
+		simple_free(matcher->col_matcher);
+	}
+}
+
+static int mlx5dr_matcher_init(struct mlx5dr_matcher *matcher)
+{
+	struct mlx5dr_context *ctx = matcher->tbl->ctx;
+	int ret;
+
+	pthread_spin_lock(&ctx->ctrl_lock);
+
+	/* Allocate matcher resource and connect to the packet pipe */
+	ret = mlx5dr_matcher_create_and_connect(matcher);
+	if (ret)
+		goto unlock_err;
+
+	/* Create additional matcher for collision handling */
+	ret = mlx5dr_matcher_create_col_matcher(matcher);
+	if (ret)
+		goto destroy_and_disconnect;
+
+	pthread_spin_unlock(&ctx->ctrl_lock);
+
+	return 0;
+
+destroy_and_disconnect:
+	mlx5dr_matcher_destroy_and_disconnect(matcher);
+unlock_err:
+	pthread_spin_unlock(&ctx->ctrl_lock);
+	return ret;
+}
+
+static int mlx5dr_matcher_uninit(struct mlx5dr_matcher *matcher)
+{
+	struct mlx5dr_context *ctx = matcher->tbl->ctx;
+
+	pthread_spin_lock(&ctx->ctrl_lock);
+	mlx5dr_matcher_destroy_col_matcher(matcher);
+	mlx5dr_matcher_destroy_and_disconnect(matcher);
+	pthread_spin_unlock(&ctx->ctrl_lock);
+
+	return 0;
+}
+
+static int mlx5dr_matcher_init_root(struct mlx5dr_matcher *matcher)
+{
+	enum mlx5dr_table_type type = matcher->tbl->type;
+	struct mlx5dr_context *ctx = matcher->tbl->ctx;
+	struct mlx5dv_flow_matcher_attr attr = {0};
+	struct mlx5dv_flow_match_parameters *mask;
+	struct mlx5_flow_attr flow_attr = {0};
+	enum mlx5dv_flow_table_type ft_type;
+	struct rte_flow_error rte_error;
+	uint8_t match_criteria;
+	int ret;
+
+	switch (type) {
+	case MLX5DR_TABLE_TYPE_NIC_RX:
+		ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_RX;
+		break;
+	case MLX5DR_TABLE_TYPE_NIC_TX:
+		ft_type =
MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_TX;
+		break;
+	case MLX5DR_TABLE_TYPE_FDB:
+		ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_FDB;
+		break;
+	default:
+		assert(0);
+		break;
+	}
+
+	if (matcher->attr.priority > UINT16_MAX) {
+		DR_LOG(ERR, "Root matcher priority exceeds allowed limit");
+		rte_errno = EINVAL;
+		return rte_errno;
+	}
+
+	mask = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) +
+			     offsetof(struct mlx5dv_flow_match_parameters, match_buf));
+	if (!mask) {
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+
+	flow_attr.tbl_type = type;
+
+	/* On a root table matcher, only a single match template is supported */
+	ret = flow_dv_translate_items_hws(matcher->mt[0]->items,
+					  &flow_attr, mask->match_buf,
+					  MLX5_SET_MATCHER_HS_M, NULL,
+					  &match_criteria,
+					  &rte_error);
+	if (ret) {
+		DR_LOG(ERR, "Failed to convert items to PRM [%s]", rte_error.message);
+		goto free_mask;
+	}
+
+	mask->match_sz = MLX5_ST_SZ_BYTES(fte_match_param);
+	attr.match_mask = mask;
+	attr.match_criteria_enable = match_criteria;
+	attr.ft_type = ft_type;
+	attr.type = IBV_FLOW_ATTR_NORMAL;
+	attr.priority = matcher->attr.priority;
+	attr.comp_mask = MLX5DV_FLOW_MATCHER_MASK_FT_TYPE;
+
+	matcher->dv_matcher =
+		mlx5_glue->dv_create_flow_matcher_root(ctx->ibv_ctx, &attr);
+	if (!matcher->dv_matcher) {
+		DR_LOG(ERR, "Failed to create DV flow matcher");
+		rte_errno = errno;
+		goto free_mask;
+	}
+
+	simple_free(mask);
+
+	pthread_spin_lock(&ctx->ctrl_lock);
+	LIST_INSERT_HEAD(&matcher->tbl->head, matcher, next);
+	pthread_spin_unlock(&ctx->ctrl_lock);
+
+	return 0;
+
+free_mask:
+	simple_free(mask);
+	return rte_errno;
+}
+
+static int mlx5dr_matcher_uninit_root(struct mlx5dr_matcher *matcher)
+{
+	struct mlx5dr_context *ctx = matcher->tbl->ctx;
+	int ret;
+
+	pthread_spin_lock(&ctx->ctrl_lock);
+	LIST_REMOVE(matcher, next);
+	pthread_spin_unlock(&ctx->ctrl_lock);
+
+	ret = mlx5_glue->dv_destroy_flow_matcher_root(matcher->dv_matcher);
+	if (ret) {
+		DR_LOG(ERR, "Failed to destroy DV flow matcher");
+		rte_errno = errno;
+	}
+
+	return ret;
+}
+
+static int
+mlx5dr_matcher_check_template(uint8_t num_of_mt, uint8_t num_of_at, bool is_root)
+{
+	uint8_t max_num_of_mt;
+
+	max_num_of_mt = is_root ?
+		MLX5DR_MATCHER_MAX_MT_ROOT :
+		MLX5DR_MATCHER_MAX_MT;
+
+	if (!num_of_mt || !num_of_at) {
+		DR_LOG(ERR, "Number of action/match templates cannot be zero");
+		goto out_not_sup;
+	}
+
+	if (num_of_at > MLX5DR_MATCHER_MAX_AT) {
+		DR_LOG(ERR, "Number of action templates exceeds limit");
+		goto out_not_sup;
+	}
+
+	if (num_of_mt > max_num_of_mt) {
+		DR_LOG(ERR, "Number of match templates exceeds limit");
+		goto out_not_sup;
+	}
+
+	return 0;
+
+out_not_sup:
+	rte_errno = ENOTSUP;
+	return rte_errno;
+}
+
+struct mlx5dr_matcher *
+mlx5dr_matcher_create(struct mlx5dr_table *tbl,
+		      struct mlx5dr_match_template *mt[],
+		      uint8_t num_of_mt,
+		      struct mlx5dr_action_template *at[],
+		      uint8_t num_of_at,
+		      struct mlx5dr_matcher_attr *attr)
+{
+	bool is_root = mlx5dr_table_is_root(tbl);
+	struct mlx5dr_matcher *matcher;
+	int ret;
+
+	ret = mlx5dr_matcher_check_template(num_of_mt, num_of_at, is_root);
+	if (ret)
+		return NULL;
+
+	matcher = simple_calloc(1, sizeof(*matcher));
+	if (!matcher) {
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	matcher->tbl = tbl;
+	matcher->attr = *attr;
+	matcher->num_of_mt = num_of_mt;
+	memcpy(matcher->mt, mt, num_of_mt * sizeof(*mt));
+	matcher->num_of_at = num_of_at;
+	memcpy(matcher->at, at, num_of_at * sizeof(*at));
+
+	ret = mlx5dr_matcher_process_attr(tbl->ctx->caps, matcher, is_root);
+	if (ret)
+		goto free_matcher;
+
+	if (is_root)
+		ret = mlx5dr_matcher_init_root(matcher);
+	else
+		ret = mlx5dr_matcher_init(matcher);
+
+	if (ret) {
+		DR_LOG(ERR, "Failed to initialise matcher: %d", ret);
+		goto free_matcher;
+	}
+
+	return matcher;
+
+free_matcher:
+	simple_free(matcher);
+	return NULL;
+}
+
+int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher)
+{
+	if (mlx5dr_table_is_root(matcher->tbl))
+		mlx5dr_matcher_uninit_root(matcher);
+	else
+		mlx5dr_matcher_uninit(matcher);
+
+	simple_free(matcher);
+	return 0;
+}
+
+struct mlx5dr_match_template *
+mlx5dr_match_template_create(const struct rte_flow_item items[],
+			     enum mlx5dr_match_template_flags flags)
+{
+	struct mlx5dr_match_template *mt;
+	struct rte_flow_error error;
+	int ret, len;
+
+	if (flags > MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH) {
+		DR_LOG(ERR, "Unsupported match template flag provided");
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	mt = simple_calloc(1, sizeof(*mt));
+	if (!mt) {
+		DR_LOG(ERR, "Failed to allocate match template");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	mt->flags = flags;
+
+	/* Duplicate the user-given items */
+	ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, NULL, 0, items, &error);
+	if (ret <= 0) {
+		DR_LOG(ERR, "Unable to process items (%s): %s",
+		       error.message ? error.message : "unspecified",
+		       strerror(rte_errno));
+		goto free_template;
+	}
+
+	len = RTE_ALIGN(ret, 16);
+	mt->items = simple_calloc(1, len);
+	if (!mt->items) {
+		DR_LOG(ERR, "Failed to allocate item copy");
+		rte_errno = ENOMEM;
+		goto free_template;
+	}
+
+	ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, mt->items, ret, items, &error);
+	if (ret <= 0)
+		goto free_dst;
+
+	return mt;
+
+free_dst:
+	simple_free(mt->items);
+free_template:
+	simple_free(mt);
+	return NULL;
+}
+
+int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt)
+{
+	assert(!mt->refcount);
+	simple_free(mt->items);
+	simple_free(mt);
+	return 0;
+}
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h
new file mode 100644
index 0000000000..c5f38b9388
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 NVIDIA Corporation & Affiliates
+ */
+
+#ifndef MLX5DR_MATCHER_H_
+#define MLX5DR_MATCHER_H_
+
+/* Max supported match templates */
+#define MLX5DR_MATCHER_MAX_MT 2
+#define MLX5DR_MATCHER_MAX_MT_ROOT 1
+
+/* Max supported action templates */
+#define MLX5DR_MATCHER_MAX_AT 4
+
+/* We calculated that concatenating a collision table to the main table with
+ * 3% of the main table rows will be enough resources for high insertion
+ * success probability.
+ *
+ * The calculation: log2(2^x * 3 / 100) = log2(2^x) + log2(3 / 100) = x - 5.05 ~= x - 5
+ */
+#define MLX5DR_MATCHER_ASSURED_ROW_RATIO 5
+/* Threshold to determine if the number of rules requires a collision table */
+#define MLX5DR_MATCHER_ASSURED_RULES_TH 10
+/* Required depth of an assured collision table */
+#define MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH 4
+/* Required depth of the main large table */
+#define MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH 2
+
+struct mlx5dr_match_template {
+	struct rte_flow_item *items;
+	struct mlx5dr_definer *definer;
+	struct mlx5dr_definer_fc *fc;
+	uint32_t fc_sz;
+	uint64_t item_flags;
+	uint8_t vport_item_id;
+	enum mlx5dr_match_template_flags flags;
+	uint32_t refcount;
+};
+
+struct mlx5dr_matcher_match_ste {
+	struct mlx5dr_pool_chunk ste;
+	struct mlx5dr_devx_obj *rtc_0;
+	struct mlx5dr_devx_obj *rtc_1;
+	struct mlx5dr_pool *pool;
+};
+
+struct mlx5dr_matcher_action_ste {
+	struct mlx5dr_pool_chunk ste;
+	struct mlx5dr_pool_chunk stc;
+	struct mlx5dr_devx_obj *rtc_0;
+	struct mlx5dr_devx_obj *rtc_1;
+	struct mlx5dr_pool *pool;
+	uint8_t max_stes;
+};
+
+struct mlx5dr_matcher {
+	struct mlx5dr_table *tbl;
+	struct mlx5dr_matcher_attr attr;
+	struct mlx5dv_flow_matcher *dv_matcher;
+	struct mlx5dr_match_template *mt[MLX5DR_MATCHER_MAX_MT];
+	uint8_t num_of_mt;
+	struct mlx5dr_action_template *at[MLX5DR_MATCHER_MAX_AT];
+	uint8_t num_of_at;
+	struct mlx5dr_devx_obj *end_ft;
+	struct mlx5dr_matcher *col_matcher;
+	struct mlx5dr_matcher_match_ste match_ste;
+	struct mlx5dr_matcher_action_ste action_ste;
+	LIST_ENTRY(mlx5dr_matcher) next;
+};
+
+int mlx5dr_matcher_conv_items_to_prm(uint64_t *match_buf,
+				     struct rte_flow_item *items,
+				     uint8_t *match_criteria,
+				     bool is_value);
+
+#endif /* MLX5DR_MATCHER_H_ */