From patchwork Wed Feb 1 07:28:12 2023
X-Patchwork-Submitter: Alex Vesker <valex@nvidia.com>
X-Patchwork-Id: 122791
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker <valex@nvidia.com>
To: "Matan Azrad"
CC: dev@dpdk.org
Subject: [v2 13/16] net/mlx5/hws: add FW WQE rule creation logic
Date: Wed, 1 Feb 2023 09:28:12 +0200
Message-ID: <20230201072815.1329101-14-valex@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230201072815.1329101-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>
 <20230201072815.1329101-1-valex@nvidia.com>
MIME-Version: 1.0
FW WQE and HW WQE rule creation are done in a similar way, but in order
not to jeopardize the performance of the HW WQE path, FW rule creation
is done through a new dedicated function. The deletion function is
shared between both flows.

Signed-off-by: Alex Vesker <valex@nvidia.com>
---
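Note (reading aid, not part of the patch): below is a minimal,
self-contained C sketch of the save/load round trip this patch uses for
FW WQE deletion. All names in the sketch (tag_sketch, rule_sketch,
save_delete_info, load_delete_info) are simplified stand-ins for
struct mlx5dr_rule_match_tag and the mlx5dr_rule_save/load_delete_info()
helpers; it only illustrates the bookkeeping idea: two tag entries are
allocated, and the definer ids are parked in spare "reserved" slots of
the second entry until delete time.

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MATCH_TAG_SZ 32 /* stand-in for MLX5DR_MATCH_TAG_SZ */

struct tag_sketch {
	uint8_t match[MATCH_TAG_SZ];
	uint32_t reserved[2]; /* simplified spare space */
};

struct rule_sketch {
	/* [0] holds the match tag, [1] holds the range tag + definer ids */
	struct tag_sketch *tag_ptr;
};

static void save_delete_info(struct rule_sketch *rule,
			     const uint8_t *match_tag,
			     uint32_t match_definer_id)
{
	rule->tag_ptr = calloc(2, sizeof(*rule->tag_ptr));
	assert(rule->tag_ptr);
	memcpy(rule->tag_ptr[0].match, match_tag, MATCH_TAG_SZ);
	rule->tag_ptr[1].reserved[0] = match_definer_id;
}

static void load_delete_info(const struct rule_sketch *rule,
			     const uint8_t **wqe_tag,
			     uint32_t *match_definer_id)
{
	*wqe_tag = rule->tag_ptr[0].match;
	*match_definer_id = rule->tag_ptr[1].reserved[0];
}

int main(void)
{
	uint8_t tag[MATCH_TAG_SZ] = { 0xab };
	struct rule_sketch rule;
	const uint8_t *loaded_tag;
	uint32_t loaded_id;

	save_delete_info(&rule, tag, 77);
	load_delete_info(&rule, &loaded_tag, &loaded_id);
	assert(loaded_id == 77);
	assert(!memcmp(loaded_tag, tag, MATCH_TAG_SZ));
	free(rule.tag_ptr); /* mirrors mlx5dr_rule_clear_delete_info() */
	return 0;
}
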
 drivers/net/mlx5/hws/mlx5dr_rule.c | 180 +++++++++++++++++++++++++++--
 drivers/net/mlx5/hws/mlx5dr_rule.h |   2 +
 drivers/net/mlx5/hws/mlx5dr_send.h |   9 +-
 3 files changed, 180 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index f5a0c46315..9d5e5b11a5 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -112,6 +112,62 @@ static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
 	mlx5dr_send_engine_gen_comp(queue, user_data, comp_status);
 }
 
+static void
+mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule,
+			     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
+		uint8_t *src_tag;
+
+		/* Save match definer id and tag for delete */
+		rule->tag_ptr = simple_calloc(2, sizeof(*rule->tag_ptr));
+		assert(rule->tag_ptr);
+
+		src_tag = (uint8_t *)ste_attr->wqe_data->tag;
+		memcpy(rule->tag_ptr[0].match, src_tag, MLX5DR_MATCH_TAG_SZ);
+		rule->tag_ptr[1].reserved[0] = ste_attr->send_attr.match_definer_id;
+
+		/* Save range definer id and tag for delete */
+		if (ste_attr->range_wqe_data) {
+			src_tag = (uint8_t *)ste_attr->range_wqe_data->tag;
+			memcpy(rule->tag_ptr[1].match, src_tag, MLX5DR_MATCH_TAG_SZ);
+			rule->tag_ptr[1].reserved[1] = ste_attr->send_attr.range_definer_id;
+		}
+		return;
+	}
+
+	if (ste_attr->wqe_tag_is_jumbo)
+		memcpy(rule->tag.jumbo, ste_attr->wqe_data->jumbo, MLX5DR_JUMBO_TAG_SZ);
+	else
+		memcpy(rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ);
+}
+
+static void
+mlx5dr_rule_clear_delete_info(struct mlx5dr_rule *rule)
+{
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher)))
+		simple_free(rule->tag_ptr);
+}
+
+static void
+mlx5dr_rule_load_delete_info(struct mlx5dr_rule *rule,
+			     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
+		/* Load match definer id and tag for delete */
+		ste_attr->wqe_tag = &rule->tag_ptr[0];
+		ste_attr->send_attr.match_definer_id = rule->tag_ptr[1].reserved[0];
+
+		/* Load range definer id and tag for delete */
+		if (rule->matcher->flags & MLX5DR_MATCHER_FLAGS_RANGE_DEFINER) {
+			ste_attr->range_wqe_tag = &rule->tag_ptr[1];
+			ste_attr->send_attr.range_definer_id = rule->tag_ptr[1].reserved[1];
+		}
+	} else {
+		ste_attr->wqe_tag = &rule->tag;
+	}
+}
+
 static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule,
 					struct mlx5dr_rule_attr *attr)
 {
@@ -180,6 +236,97 @@ static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule,
 	apply->require_dep = 0;
 }
 
+static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
+					 struct mlx5dr_rule_attr *attr,
+					 uint8_t mt_idx,
+					 const struct rte_flow_item items[],
+					 uint8_t at_idx,
+					 struct mlx5dr_rule_action rule_actions[])
+{
+	struct mlx5dr_action_template *at = &rule->matcher->at[at_idx];
+	struct mlx5dr_match_template *mt = &rule->matcher->mt[mt_idx];
+	struct mlx5dr_send_ring_dep_wqe range_wqe = {{0}};
+	struct mlx5dr_send_ring_dep_wqe match_wqe = {{0}};
+	bool is_range = mlx5dr_matcher_mt_is_range(mt);
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(mt);
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_context *ctx = matcher->tbl->ctx;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_actions_apply_data apply;
+	struct mlx5dr_send_engine *queue;
+
+	queue = &ctx->send_queue[attr->queue_id];
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		rte_errno = EIO;
+		return rte_errno;
+	}
+
+	mlx5dr_rule_create_init(rule, &ste_attr, &apply);
+	mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data);
+	mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data);
+
+	ste_attr.direct_index = 0;
+	ste_attr.rtc_0 = match_wqe.rtc_0;
+	ste_attr.rtc_1 = match_wqe.rtc_1;
+	ste_attr.used_id_rtc_0 = &rule->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->rtc_1;
+	ste_attr.retry_rtc_0 = match_wqe.retry_rtc_0;
+	ste_attr.retry_rtc_1 = match_wqe.retry_rtc_1;
+	ste_attr.send_attr.rule = match_wqe.rule;
+	ste_attr.send_attr.user_data = match_wqe.user_data;
+
+	ste_attr.send_attr.fence = 1;
+	ste_attr.send_attr.notify_hw = 1;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+
+	/* Prepare match STE TAG */
+	ste_attr.wqe_ctrl = &match_wqe.wqe_ctrl;
+	ste_attr.wqe_data = &match_wqe.wqe_data;
+	ste_attr.send_attr.match_definer_id = mlx5dr_definer_get_id(mt->definer);
+
+	mlx5dr_definer_create_tag(items,
+				  mt->fc,
+				  mt->fc_sz,
+				  (uint8_t *)match_wqe.wqe_data.action);
+
+	/* Prepare range STE TAG */
+	if (is_range) {
+		ste_attr.range_wqe_data = &range_wqe.wqe_data;
+		ste_attr.send_attr.len += MLX5DR_WQE_SZ_GTA_DATA;
+		ste_attr.send_attr.range_definer_id = mlx5dr_definer_get_id(mt->range_definer);
+
+		mlx5dr_definer_create_tag_range(items,
+						mt->fcr,
+						mt->fcr_sz,
+						(uint8_t *)range_wqe.wqe_data.action);
+	}
+
+	/* Apply the actions on the last STE */
+	apply.queue = queue;
+	apply.next_direct_idx = 0;
+	apply.rule_action = rule_actions;
+	apply.wqe_ctrl = &match_wqe.wqe_ctrl;
+	apply.wqe_data = (uint32_t *)(is_range ?
				      &range_wqe.wqe_data :
				      &match_wqe.wqe_data);
+
+	/* Skip setters[0] used for jumbo STE since it is not supported with FW WQE */
+	mlx5dr_action_apply_setter(&apply, &at->setters[1], 0);
+
+	/* Send WQEs to FW */
+	mlx5dr_send_stes_fw(queue, &ste_attr);
+
+	/* Backup TAG on the rule for deletion */
+	mlx5dr_rule_save_delete_info(rule, &ste_attr);
+	mlx5dr_send_engine_inc_rule(queue);
+
+	/* Send dependent WQEs */
+	if (!attr->burst)
+		mlx5dr_send_all_dep_wqe(queue);
+
+	return 0;
+}
+
 static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 				  struct mlx5dr_rule_attr *attr,
 				  uint8_t mt_idx,
@@ -189,7 +336,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 {
 	struct mlx5dr_action_template *at = &rule->matcher->at[at_idx];
 	struct mlx5dr_match_template *mt = &rule->matcher->mt[mt_idx];
-	bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer);
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(mt);
 	struct mlx5dr_matcher *matcher = rule->matcher;
 	struct mlx5dr_context *ctx = matcher->tbl->ctx;
 	struct mlx5dr_send_ste_attr ste_attr = {0};
@@ -200,6 +347,11 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 	uint8_t total_stes, action_stes;
 	int i, ret;
 
+	/* Insert rule using FW WQE if GTA WQE cannot be used */
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(matcher)))
+		return mlx5dr_rule_create_hws_fw_wqe(rule, attr, mt_idx, items,
+						     at_idx, rule_actions);
+
 	queue = &ctx->send_queue[attr->queue_id];
 	if (unlikely(mlx5dr_send_engine_err(queue))) {
 		rte_errno = EIO;
@@ -283,11 +435,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 	}
 
 	/* Backup TAG on the rule for deletion */
-	if (is_jumbo)
-		memcpy(rule->tag.jumbo, dep_wqe->wqe_data.action, MLX5DR_JUMBO_TAG_SZ);
-	else
-		memcpy(rule->tag.match, dep_wqe->wqe_data.tag, MLX5DR_MATCH_TAG_SZ);
-
+	mlx5dr_rule_save_delete_info(rule, &ste_attr);
 	mlx5dr_send_engine_inc_rule(queue);
 
 	/* Send dependent WQEs */
@@ -311,6 +459,9 @@ static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule,
 	/* Rule failed now we can safely release action STEs */
 	mlx5dr_rule_free_action_ste_idx(rule);
 
+	/* Clear complex tag */
+	mlx5dr_rule_clear_delete_info(rule);
+
 	/* If a rule that was indicated as burst (need to trigger HW) has failed
 	 * insertion we won't ring the HW as nothing is being written to the WQ.
 	 * In such case update the last WQE and ring the HW with that work
@@ -327,6 +478,9 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
 {
 	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
 	struct mlx5dr_matcher *matcher = rule->matcher;
+	bool fw_wqe = mlx5dr_matcher_req_fw_wqe(matcher);
+	bool is_range = mlx5dr_matcher_mt_is_range(matcher->mt);
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(matcher->mt);
 	struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl = {0};
 	struct mlx5dr_send_ste_attr ste_attr = {0};
 	struct mlx5dr_send_engine *queue;
@@ -361,6 +515,8 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
 	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
 	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
 	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+	if (unlikely(is_range))
+		ste_attr.send_attr.len += MLX5DR_WQE_SZ_GTA_DATA;
 
 	ste_attr.send_attr.rule = rule;
 	ste_attr.send_attr.notify_hw = !attr->burst;
@@ -371,13 +527,19 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
 	ste_attr.used_id_rtc_0 = &rule->rtc_0;
 	ste_attr.used_id_rtc_1 = &rule->rtc_1;
 	ste_attr.wqe_ctrl = &wqe_ctrl;
-	ste_attr.wqe_tag = &rule->tag;
-	ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer);
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
 	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE;
 
 	if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher)))
 		ste_attr.direct_index = attr->rule_idx;
 
-	mlx5dr_send_ste(queue, &ste_attr);
+	mlx5dr_rule_load_delete_info(rule, &ste_attr);
+
+	if (unlikely(fw_wqe)) {
+		mlx5dr_send_stes_fw(queue, &ste_attr);
+		mlx5dr_rule_clear_delete_info(rule);
+	} else {
+		mlx5dr_send_ste(queue, &ste_attr);
+	}
 
 	return 0;
 }
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h
index f2fe418159..886cf77992 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.h
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.h
@@ -36,6 +36,8 @@ struct mlx5dr_rule {
 	struct mlx5dr_matcher *matcher;
 	union {
 		struct mlx5dr_rule_match_tag tag;
+		/* Pointer to a tags array, used to store more than one tag */
+		struct mlx5dr_rule_match_tag *tag_ptr;
 		struct ibv_flow *flow;
 	};
 	uint32_t rtc_0; /* The RTC into which the STE was inserted */
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index 47bb66b3c7..d0977ec851 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -54,8 +54,13 @@ struct mlx5dr_wqe_gta_data_seg_ste {
 	__be32 rsvd0_ctr_id;
 	__be32 rsvd1_definer;
 	__be32 rsvd2[3];
-	__be32 action[3];
-	__be32 tag[8];
+	union {
+		struct {
+			__be32 action[3];
+			__be32 tag[8];
+		};
+		__be32 jumbo[11];
+	};
 };
 
 struct mlx5dr_wqe_gta_data_seg_arg {
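
A closing note on the mlx5dr_send.h change: the new union lets jumbo[11]
alias the action[3] + tag[8] words as one 44-byte area (11 x 4 bytes,
which is what MLX5DR_JUMBO_TAG_SZ is expected to cover), which is what
allows mlx5dr_rule_save_delete_info() above to copy
ste_attr->wqe_data->jumbo in a single memcpy. Below is a standalone
sketch (plain uint32_t stands in for __be32, and the struct name is
illustrative) checking that the overlay leaves the segment layout
unchanged:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct gta_data_seg_ste_sketch {
	uint32_t rsvd0_ctr_id;
	uint32_t rsvd1_definer;
	uint32_t rsvd2[3];
	union {
		struct {
			uint32_t action[3];
			uint32_t tag[8];
		};
		uint32_t jumbo[11];
	};
};

/* 5 + 11 32-bit words == 64 bytes, and the jumbo view starts exactly
 * at the action words, so the union adds no padding and no offset shift.
 */
static_assert(sizeof(struct gta_data_seg_ste_sketch) == 16 * sizeof(uint32_t),
	      "GTA data segment must stay 64 bytes");
static_assert(offsetof(struct gta_data_seg_ste_sketch, jumbo) ==
	      offsetof(struct gta_data_seg_ste_sketch, action),
	      "jumbo view must overlay the action words");

int main(void)
{
	return 0;
}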