From patchwork Tue Jan 31 09:33:40 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122733
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
To: "Matan Azrad"
Subject: [v1 11/16] net/mlx5/hws: support partial hash
Date: Tue, 31 Jan 2023 11:33:40 +0200
Message-ID: <20230131093346.1261066-12-valex@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

Hash definers allow hashing over a subset of the fields that are used
for matching. This makes it possible to combine match templates that
were considered invalid until now. During matcher creation, the mlx5dr
code processes the match templates and decides whether such a hash
definer is needed, based on the intersection of the definers'
bitmasks. Since the current HW GTA implementation doesn't allow
specifying separate match and hash definers, rule insertion is done
using the FW GTA WQE command.

Signed-off-by: Alex Vesker
---
 drivers/common/mlx5/mlx5_prm.h        |   4 +
 drivers/net/mlx5/hws/mlx5dr_definer.c | 105 ++++++++++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_matcher.c |  66 +++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_matcher.h |  10 ++-
 4 files changed, 181 insertions(+), 4 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index cf46296afb..cca2fb6af7 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -2112,6 +2112,10 @@ enum mlx5_ifc_cross_vhca_allowed_objects_types {
 	MLX5_CROSS_VHCA_ALLOWED_OBJS_RTC = 1 << 0xa,
 };
 
+enum {
+	MLX5_GENERATE_WQE_TYPE_FLOW_UPDATE = 1 << 1,
+};
+
 /*
  * HCA Capabilities 2
  */
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 9560f8a0af..260e6c5d1d 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -1928,6 +1928,27 @@ int mlx5dr_definer_get_id(struct mlx5dr_definer *definer)
 	return definer->obj->id;
 }
 
+static int
+mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
+		       struct mlx5dr_definer *definer_b)
+{
+	int i;
+
+	for (i = 0; i < BYTE_SELECTORS; i++)
+		if (definer_a->byte_selector[i] != definer_b->byte_selector[i])
+			return 1;
+
+	for (i = 0; i < DW_SELECTORS; i++)
+		if (definer_a->dw_selector[i] != definer_b->dw_selector[i])
+			return 1;
+
+	for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++)
+		if (definer_a->mask.jumbo[i] != definer_b->mask.jumbo[i])
+			return 1;
+
+	return 0;
+}
+
 static int
 mlx5dr_definer_calc_layout(struct mlx5dr_matcher *matcher,
 			   struct mlx5dr_definer *match_definer)
@@ -2070,6 +2091,80 @@ mlx5dr_definer_matcher_match_uninit(struct mlx5dr_matcher *matcher)
 		mlx5dr_definer_free(matcher->mt[i].definer);
 }
 
+static int
+mlx5dr_definer_matcher_hash_init(struct mlx5dr_context *ctx,
+				 struct mlx5dr_matcher *matcher)
+{
+	struct mlx5dr_cmd_definer_create_attr def_attr = {0};
+	struct mlx5dr_match_template *mt = matcher->mt;
+	struct ibv_context *ibv_ctx = ctx->ibv_ctx;
+	uint8_t *bit_mask;
+	int i, j;
+
+	for (i = 1; i < matcher->num_of_mt; i++)
+		if (mlx5dr_definer_compare(mt[i].definer, mt[i - 1].definer))
+			matcher->flags |= MLX5DR_MATCHER_FLAGS_HASH_DEFINER;
+
+	if (!(matcher->flags & MLX5DR_MATCHER_FLAGS_HASH_DEFINER))
+		return 0;
+
+	/* Insert by index requires all MT using the same definer */
+	if (matcher->attr.insert_mode == MLX5DR_MATCHER_INSERT_BY_INDEX) {
+		DR_LOG(ERR, "Insert by index not supported with MT combination");
+		rte_errno = EOPNOTSUPP;
+		return rte_errno;
+	}
+
+	matcher->hash_definer = simple_calloc(1, sizeof(*matcher->hash_definer));
+	if (!matcher->hash_definer) {
+		DR_LOG(ERR, "Failed to allocate memory for hash definer");
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+
+	/* Calculate intersection between all match templates' bitmasks.
+	 * We will use mt[0] as reference and intersect it with mt[1..n].
+	 * From this we will get:
+	 * hash_definer.selectors = mt[0].selectors
+	 * hash_definer.mask = mt[0].mask & mt[1].mask & ... & mt[n].mask
+	 */
+
+	/* Use first definer which should also contain intersection fields */
+	memcpy(matcher->hash_definer, mt->definer, sizeof(struct mlx5dr_definer));
+
+	/* Calculate intersection between the first and all other match templates' bitmasks */
+	for (i = 1; i < matcher->num_of_mt; i++) {
+		bit_mask = (uint8_t *)&mt[i].definer->mask;
+		for (j = 0; j < MLX5DR_JUMBO_TAG_SZ; j++)
+			((uint8_t *)&matcher->hash_definer->mask)[j] &= bit_mask[j];
+	}
+
+	def_attr.match_mask = matcher->hash_definer->mask.jumbo;
+	def_attr.dw_selector = matcher->hash_definer->dw_selector;
+	def_attr.byte_selector = matcher->hash_definer->byte_selector;
+	matcher->hash_definer->obj = mlx5dr_cmd_definer_create(ibv_ctx, &def_attr);
+	if (!matcher->hash_definer->obj) {
+		DR_LOG(ERR, "Failed to create hash definer");
+		goto free_hash_definer;
+	}
+
+	return 0;
+
+free_hash_definer:
+	simple_free(matcher->hash_definer);
+	return rte_errno;
+}
+
+static void
+mlx5dr_definer_matcher_hash_uninit(struct mlx5dr_matcher *matcher)
+{
+	if (!matcher->hash_definer)
+		return;
+
+	mlx5dr_cmd_destroy_obj(matcher->hash_definer->obj);
+	simple_free(matcher->hash_definer);
+}
+
 int mlx5dr_definer_matcher_init(struct mlx5dr_context *ctx,
 				struct mlx5dr_matcher *matcher)
 {
@@ -2093,8 +2188,17 @@ int mlx5dr_definer_matcher_init(struct mlx5dr_context *ctx,
 		goto free_fc;
 	}
 
+	/* Calculate partial hash definer */
+	ret = mlx5dr_definer_matcher_hash_init(ctx, matcher);
+	if (ret) {
+		DR_LOG(ERR, "Failed to init hash definer");
+		goto uninit_match_definer;
+	}
+
 	return 0;
 
+uninit_match_definer:
+	mlx5dr_definer_matcher_match_uninit(matcher);
 free_fc:
 	for (i = 0; i < matcher->num_of_mt; i++)
 		simple_free(matcher->mt[i].fc);
@@ -2109,6 +2213,7 @@ void mlx5dr_definer_matcher_uninit(struct mlx5dr_matcher *matcher)
 	if (matcher->flags & MLX5DR_MATCHER_FLAGS_COLISION)
 		return;
 
+	mlx5dr_definer_matcher_hash_uninit(matcher);
 	mlx5dr_definer_matcher_match_uninit(matcher);
 
 	for (i = 0; i < matcher->num_of_mt; i++)
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index 7e332052b2..e860c274cf 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -337,6 +337,42 @@ static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher)
 	return 0;
 }
 
+static bool mlx5dr_matcher_supp_fw_wqe(struct mlx5dr_matcher *matcher)
+{
+	struct mlx5dr_cmd_query_caps *caps = matcher->tbl->ctx->caps;
+
+	if (matcher->flags & MLX5DR_MATCHER_FLAGS_HASH_DEFINER) {
+		if (matcher->hash_definer->type == MLX5DR_DEFINER_TYPE_MATCH &&
+		    !IS_BIT_SET(caps->supp_ste_fromat_gen_wqe, MLX5_IFC_RTC_STE_FORMAT_8DW)) {
+			DR_LOG(ERR, "Gen WQE MATCH format not supported");
+			return false;
+		}
+
+		if (matcher->hash_definer->type == MLX5DR_DEFINER_TYPE_JUMBO) {
+			DR_LOG(ERR, "Gen WQE JUMBO format not supported");
+			return false;
+		}
+	}
+
+	if (matcher->attr.insert_mode != MLX5DR_MATCHER_INSERT_BY_HASH ||
+	    matcher->attr.distribute_mode != MLX5DR_MATCHER_DISTRIBUTE_BY_HASH) {
+		DR_LOG(ERR, "Gen WQE must be inserted and distributed by hash");
+		return false;
+	}
+
+	if (!(caps->supp_type_gen_wqe & MLX5_GENERATE_WQE_TYPE_FLOW_UPDATE)) {
+		DR_LOG(ERR, "Gen WQE command does not support GTA");
+		return false;
+	}
+
+	if (!caps->rtc_max_hash_def_gen_wqe) {
+		DR_LOG(ERR, "Hash definer not supported");
+		return false;
+	}
+
+	return true;
+}
+
 static void mlx5dr_matcher_set_rtc_attr_sz(struct mlx5dr_matcher *matcher,
 					   struct mlx5dr_cmd_rtc_create_attr *rtc_attr,
 					   enum mlx5dr_matcher_rtc_type rtc_type,
@@ -432,8 +468,16 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher,
 	if (attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH) {
 		/* The usual Hash Table */
 		rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH;
-		/* The first match template is used since all share the same definer */
-		rtc_attr.match_definer_0 = mlx5dr_definer_get_id(mt->definer);
+		if (matcher->hash_definer) {
+			/* Specify definer_id_0 is used for hashing */
+			rtc_attr.fw_gen_wqe = true;
+			rtc_attr.num_hash_definer = 1;
+			rtc_attr.match_definer_0 =
+				mlx5dr_definer_get_id(matcher->hash_definer);
+		} else {
+			/* The first mt is used since all share the same definer */
+			rtc_attr.match_definer_0 = mlx5dr_definer_get_id(mt->definer);
+		}
 	} else if (attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_INDEX) {
 		rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET;
 		rtc_attr.num_hash_definer = 1;
@@ -640,6 +684,12 @@ static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher)
 	if (!matcher->action_ste.max_stes)
 		return 0;
 
+	if (mlx5dr_matcher_req_fw_wqe(matcher)) {
+		DR_LOG(ERR, "FW extended matcher cannot be bound to complex at");
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
 	/* Allocate action STE mempool */
 	pool_attr.table_type = tbl->type;
 	pool_attr.pool_type = MLX5DR_POOL_TYPE_STE;
@@ -701,13 +751,21 @@ static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher)
 	struct mlx5dr_pool_attr pool_attr = {0};
 	int ret;
 
-	/* Calculate match definers */
+	/* Calculate match and hash definers */
 	ret = mlx5dr_definer_matcher_init(ctx, matcher);
 	if (ret) {
 		DR_LOG(ERR, "Failed to set matcher templates with match definers");
 		return ret;
 	}
 
+	if (mlx5dr_matcher_req_fw_wqe(matcher) &&
+	    !mlx5dr_matcher_supp_fw_wqe(matcher)) {
+		DR_LOG(ERR, "Matcher requires FW WQE which is not supported");
+		rte_errno = ENOTSUP;
+		ret = rte_errno;
+		goto uninit_match_definer;
+	}
+
 	/* Create an STE pool per matcher*/
 	pool_attr.table_type = matcher->tbl->type;
 	pool_attr.pool_type = MLX5DR_POOL_TYPE_STE;
@@ -719,6 +777,7 @@ static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher)
 	matcher->match_ste.pool = mlx5dr_pool_create(ctx, &pool_attr);
 	if (!matcher->match_ste.pool) {
 		DR_LOG(ERR, "Failed to allocate matcher STE pool");
+		ret = ENOTSUP;
 		goto uninit_match_definer;
 	}
 
@@ -932,6 +991,7 @@ mlx5dr_matcher_create_col_matcher(struct mlx5dr_matcher *matcher)
 	col_matcher->at = matcher->at;
 	col_matcher->num_of_at = matcher->num_of_at;
 	col_matcher->num_of_mt = matcher->num_of_mt;
+	col_matcher->hash_definer = matcher->hash_definer;
 	col_matcher->attr.priority = matcher->attr.priority;
 	col_matcher->flags = matcher->flags;
 	col_matcher->flags |= MLX5DR_MATCHER_FLAGS_COLISION;
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h
index 4bdb33b11f..c012c0c193 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.h
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h
@@ -23,7 +23,8 @@
 #define MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH 2
 
 enum mlx5dr_matcher_flags {
-	MLX5DR_MATCHER_FLAGS_COLISION = 1 << 0,
+	MLX5DR_MATCHER_FLAGS_HASH_DEFINER = 1 << 0,
+	MLX5DR_MATCHER_FLAGS_COLISION = 1 << 1,
 };
 
 struct mlx5dr_match_template {
@@ -69,6 +70,7 @@ struct mlx5dr_matcher {
 	struct mlx5dr_matcher *col_matcher;
 	struct mlx5dr_matcher_match_ste match_ste;
 	struct mlx5dr_matcher_action_ste action_ste;
+	struct mlx5dr_definer *hash_definer;
 	LIST_ENTRY(mlx5dr_matcher) next;
 };
 
@@ -78,6 +80,12 @@ mlx5dr_matcher_mt_is_jumbo(struct mlx5dr_match_template *mt)
 	return mlx5dr_definer_is_jumbo(mt->definer);
 }
 
+static inline bool mlx5dr_matcher_req_fw_wqe(struct mlx5dr_matcher *matcher)
+{
+	/* Currently HWS doesn't support hash different from match or range */
+	return unlikely(matcher->flags & MLX5DR_MATCHER_FLAGS_HASH_DEFINER);
+}
+
 int mlx5dr_matcher_conv_items_to_prm(uint64_t *match_buf,
 				     struct rte_flow_item *items,
 				     uint8_t *match_criteria,
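The core mechanism of this patch — detect that the match templates ended up with different definers, and in that case derive a hash definer whose selectors come from mt[0] and whose mask is the bitwise AND of all template masks — can be sketched standalone in plain C. The `toy_*` names and array sizes below are illustrative stand-ins (not the real `BYTE_SELECTORS`/`DW_SELECTORS`/`MLX5DR_JUMBO_TAG_SZ` values or the real `struct mlx5dr_definer` layout):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sizes standing in for BYTE_SELECTORS, DW_SELECTORS
 * and MLX5DR_JUMBO_TAG_SZ from the mlx5dr headers. */
#define N_BYTE_SEL 8
#define N_DW_SEL   9
#define TAG_SZ     44

struct toy_definer {
	uint8_t byte_selector[N_BYTE_SEL];
	uint8_t dw_selector[N_DW_SEL];
	uint8_t mask[TAG_SZ];
};

/* Mirrors mlx5dr_definer_compare(): two definers differ if any
 * byte selector, dword selector or mask byte differs. */
static int toy_definer_compare(const struct toy_definer *a,
			       const struct toy_definer *b)
{
	int i;

	for (i = 0; i < N_BYTE_SEL; i++)
		if (a->byte_selector[i] != b->byte_selector[i])
			return 1;

	for (i = 0; i < N_DW_SEL; i++)
		if (a->dw_selector[i] != b->dw_selector[i])
			return 1;

	for (i = 0; i < TAG_SZ; i++)
		if (a->mask[i] != b->mask[i])
			return 1;

	return 0;
}

/* Mirrors the mask-intersection step of
 * mlx5dr_definer_matcher_hash_init(): copy mt[0] (selectors and mask),
 * then AND in every other template's mask so only the fields common to
 * all match templates take part in hashing. */
static void toy_hash_definer(struct toy_definer *hash,
			     const struct toy_definer *mt, int num_of_mt)
{
	int i, j;

	memcpy(hash, &mt[0], sizeof(*hash));
	for (i = 1; i < num_of_mt; i++)
		for (j = 0; j < TAG_SZ; j++)
			hash->mask[j] &= mt[i].mask[j];
}
```

With two templates masking overlapping but unequal field sets, `toy_definer_compare()` reports a mismatch (which in the driver sets `MLX5DR_MATCHER_FLAGS_HASH_DEFINER`), and `toy_hash_definer()` keeps only the bits present in both masks — exactly why rules must then be inserted through the FW GTA WQE path, where hash and match definers can differ.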