From patchwork Tue Jan 31 09:33:42 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122737
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
To: , , , "Matan Azrad"
CC: ,
Subject: [v1 13/16] net/mlx5/hws: add FW WQE rule creation logic
Date: Tue, 31 Jan 2023 11:33:42 +0200
Message-ID: <20230131093346.1261066-14-valex@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>

FW WQE and HW WQE rules are created in a similar way, but in order not
to jeopardize HW WQE performance, FW rule creation is done through a
new, dedicated rule creation function. The deletion function is shared
between both flows.

Signed-off-by: Alex Vesker
---
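Note for readers: regular rules keep their tag inline in the rule for
later deletion, while FW WQE rules must remember two tags (match and
range) plus the definer ID of each. The patch packs the IDs into the
reserved words of a second, heap-allocated tag slot. Below is a minimal
standalone model of that bookkeeping; MATCH_TAG_SZ, struct match_tag
and the values in main() are simplified stand-ins, not the driver's
real mlx5dr definitions:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MATCH_TAG_SZ 32 /* stand-in for MLX5DR_MATCH_TAG_SZ */

/* Simplified stand-in for struct mlx5dr_rule_match_tag */
struct match_tag {
	uint8_t match[MATCH_TAG_SZ];
	uint32_t reserved[2];
};

int main(void)
{
	uint8_t match_data[MATCH_TAG_SZ] = {1};
	uint8_t range_data[MATCH_TAG_SZ] = {2};
	uint32_t match_definer_id = 7, range_definer_id = 8;
	struct match_tag *tag_ptr;

	/* Save: models mlx5dr_rule_save_delete_info() for the FW WQE
	 * case. Slot 0 holds the match tag; slot 1 holds the range tag
	 * and, in its reserved words, both definer IDs needed to
	 * delete the rule later.
	 */
	tag_ptr = calloc(2, sizeof(*tag_ptr));
	if (!tag_ptr)
		return 1;
	memcpy(tag_ptr[0].match, match_data, MATCH_TAG_SZ);
	tag_ptr[1].reserved[0] = match_definer_id;
	memcpy(tag_ptr[1].match, range_data, MATCH_TAG_SZ);
	tag_ptr[1].reserved[1] = range_definer_id;

	/* Load: models mlx5dr_rule_load_delete_info() at delete time */
	printf("match definer %u, range definer %u\n",
	       tag_ptr[1].reserved[0], tag_ptr[1].reserved[1]);

	/* Clear: models mlx5dr_rule_clear_delete_info() */
	free(tag_ptr);
	return 0;
}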
 drivers/net/mlx5/hws/mlx5dr_rule.c | 180 +++++++++++++++++++++++++++--
 drivers/net/mlx5/hws/mlx5dr_rule.h |   2 +
 drivers/net/mlx5/hws/mlx5dr_send.h |   9 +-
 3 files changed, 180 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index f5a0c46315..9d5e5b11a5 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -112,6 +112,62 @@ static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
 	mlx5dr_send_engine_gen_comp(queue, user_data, comp_status);
 }
 
+static void
+mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule,
+			     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
+		uint8_t *src_tag;
+
+		/* Save match definer id and tag for delete */
+		rule->tag_ptr = simple_calloc(2, sizeof(*rule->tag_ptr));
+		assert(rule->tag_ptr);
+
+		src_tag = (uint8_t *)ste_attr->wqe_data->tag;
+		memcpy(rule->tag_ptr[0].match, src_tag, MLX5DR_MATCH_TAG_SZ);
+		rule->tag_ptr[1].reserved[0] = ste_attr->send_attr.match_definer_id;
+
+		/* Save range definer id and tag for delete */
+		if (ste_attr->range_wqe_data) {
+			src_tag = (uint8_t *)ste_attr->range_wqe_data->tag;
+			memcpy(rule->tag_ptr[1].match, src_tag, MLX5DR_MATCH_TAG_SZ);
+			rule->tag_ptr[1].reserved[1] = ste_attr->send_attr.range_definer_id;
+		}
+		return;
+	}
+
+	if (ste_attr->wqe_tag_is_jumbo)
+		memcpy(rule->tag.jumbo, ste_attr->wqe_data->jumbo, MLX5DR_JUMBO_TAG_SZ);
+	else
+		memcpy(rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ);
+}
+
+static void
+mlx5dr_rule_clear_delete_info(struct mlx5dr_rule *rule)
+{
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher)))
+		simple_free(rule->tag_ptr);
+}
+
+static void
+mlx5dr_rule_load_delete_info(struct mlx5dr_rule *rule,
+			     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
+		/* Load match definer id and tag for delete */
+		ste_attr->wqe_tag = &rule->tag_ptr[0];
+		ste_attr->send_attr.match_definer_id = rule->tag_ptr[1].reserved[0];
+
+		/* Load range definer id and tag for delete */
+		if (rule->matcher->flags & MLX5DR_MATCHER_FLAGS_RANGE_DEFINER) {
+			ste_attr->range_wqe_tag = &rule->tag_ptr[1];
+			ste_attr->send_attr.range_definer_id = rule->tag_ptr[1].reserved[1];
+		}
+	} else {
+		ste_attr->wqe_tag = &rule->tag;
+	}
+}
+
 static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule,
 					struct mlx5dr_rule_attr *attr)
 {
@@ -180,6 +236,97 @@ static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule,
 	apply->require_dep = 0;
 }
 
+static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
+					 struct mlx5dr_rule_attr *attr,
+					 uint8_t mt_idx,
+					 const struct rte_flow_item items[],
+					 uint8_t at_idx,
+					 struct mlx5dr_rule_action rule_actions[])
+{
+	struct mlx5dr_action_template *at = &rule->matcher->at[at_idx];
+	struct mlx5dr_match_template *mt = &rule->matcher->mt[mt_idx];
+	struct mlx5dr_send_ring_dep_wqe range_wqe = {{0}};
+	struct mlx5dr_send_ring_dep_wqe match_wqe = {{0}};
+	bool is_range = mlx5dr_matcher_mt_is_range(mt);
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(mt);
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_context *ctx = matcher->tbl->ctx;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_actions_apply_data apply;
+	struct mlx5dr_send_engine *queue;
+
+	queue = &ctx->send_queue[attr->queue_id];
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		rte_errno = EIO;
+		return rte_errno;
+	}
+
+	mlx5dr_rule_create_init(rule, &ste_attr, &apply);
+	mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data);
+	mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data);
+
+	ste_attr.direct_index = 0;
+	ste_attr.rtc_0 = match_wqe.rtc_0;
+	ste_attr.rtc_1 = match_wqe.rtc_1;
+	ste_attr.used_id_rtc_0 = &rule->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->rtc_1;
+	ste_attr.retry_rtc_0 = match_wqe.retry_rtc_0;
+	ste_attr.retry_rtc_1 = match_wqe.retry_rtc_1;
+	ste_attr.send_attr.rule = match_wqe.rule;
+	ste_attr.send_attr.user_data = match_wqe.user_data;
+
+	ste_attr.send_attr.fence = 1;
+	ste_attr.send_attr.notify_hw = 1;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+
+	/* Prepare match STE TAG */
+	ste_attr.wqe_ctrl = &match_wqe.wqe_ctrl;
+	ste_attr.wqe_data = &match_wqe.wqe_data;
+	ste_attr.send_attr.match_definer_id = mlx5dr_definer_get_id(mt->definer);
+
+	mlx5dr_definer_create_tag(items,
+				  mt->fc,
+				  mt->fc_sz,
+				  (uint8_t *)match_wqe.wqe_data.action);
+
+	/* Prepare range STE TAG */
+	if (is_range) {
+		ste_attr.range_wqe_data = &range_wqe.wqe_data;
+		ste_attr.send_attr.len += MLX5DR_WQE_SZ_GTA_DATA;
+		ste_attr.send_attr.range_definer_id = mlx5dr_definer_get_id(mt->range_definer);
+
+		mlx5dr_definer_create_tag_range(items,
+						mt->fcr,
+						mt->fcr_sz,
+						(uint8_t *)range_wqe.wqe_data.action);
+	}
+
+	/* Apply the actions on the last STE */
+	apply.queue = queue;
+	apply.next_direct_idx = 0;
+	apply.rule_action = rule_actions;
+	apply.wqe_ctrl = &match_wqe.wqe_ctrl;
+	apply.wqe_data = (uint32_t *)(is_range ?
+				      &range_wqe.wqe_data :
+				      &match_wqe.wqe_data);
+
+	/* Skip setters[0] used for jumbo STE since not support with FW WQE */
+	mlx5dr_action_apply_setter(&apply, &at->setters[1], 0);
+
+	/* Send WQEs to FW */
+	mlx5dr_send_stes_fw(queue, &ste_attr);
+
+	/* Backup TAG on the rule for deletion */
+	mlx5dr_rule_save_delete_info(rule, &ste_attr);
+	mlx5dr_send_engine_inc_rule(queue);
+
+	/* Send dependent WQEs */
+	if (!attr->burst)
+		mlx5dr_send_all_dep_wqe(queue);
+
+	return 0;
+}
+
 static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 				  struct mlx5dr_rule_attr *attr,
 				  uint8_t mt_idx,
@@ -189,7 +336,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 {
 	struct mlx5dr_action_template *at = &rule->matcher->at[at_idx];
 	struct mlx5dr_match_template *mt = &rule->matcher->mt[mt_idx];
-	bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer);
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(mt);
 	struct mlx5dr_matcher *matcher = rule->matcher;
 	struct mlx5dr_context *ctx = matcher->tbl->ctx;
 	struct mlx5dr_send_ste_attr ste_attr = {0};
@@ -200,6 +347,11 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 	uint8_t total_stes, action_stes;
 	int i, ret;
 
+	/* Insert rule using FW WQE if cannot use GTA WQE */
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(matcher)))
+		return mlx5dr_rule_create_hws_fw_wqe(rule, attr, mt_idx, items,
+						     at_idx, rule_actions);
+
 	queue = &ctx->send_queue[attr->queue_id];
 	if (unlikely(mlx5dr_send_engine_err(queue))) {
 		rte_errno = EIO;
@@ -283,11 +435,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 	}
 
 	/* Backup TAG on the rule for deletion */
-	if (is_jumbo)
-		memcpy(rule->tag.jumbo, dep_wqe->wqe_data.action, MLX5DR_JUMBO_TAG_SZ);
-	else
-		memcpy(rule->tag.match, dep_wqe->wqe_data.tag, MLX5DR_MATCH_TAG_SZ);
-
+	mlx5dr_rule_save_delete_info(rule, &ste_attr);
 	mlx5dr_send_engine_inc_rule(queue);
 
 	/* Send dependent WQEs */
@@ -311,6 +459,9 @@ static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule,
 	/* Rule failed now we can safely release action STEs */
 	mlx5dr_rule_free_action_ste_idx(rule);
 
+	/* Clear complex tag */
+	mlx5dr_rule_clear_delete_info(rule);
+
 	/* If a rule that was indicated as burst (need to trigger HW) has failed
 	 * insertion we won't ring the HW as nothing is being written to the WQ.
 	 * In such case update the last WQE and ring the HW with that work
* In such case update the last WQE and ring the HW with that work @@ -327,6 +478,9 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, { struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; struct mlx5dr_matcher *matcher = rule->matcher; + bool fw_wqe = mlx5dr_matcher_req_fw_wqe(matcher); + bool is_range = mlx5dr_matcher_mt_is_range(matcher->mt); + bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(matcher->mt); struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl = {0}; struct mlx5dr_send_ste_attr ste_attr = {0}; struct mlx5dr_send_engine *queue; @@ -361,6 +515,8 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + if (unlikely(is_range)) + ste_attr.send_attr.len += MLX5DR_WQE_SZ_GTA_DATA; ste_attr.send_attr.rule = rule; ste_attr.send_attr.notify_hw = !attr->burst; @@ -371,13 +527,19 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, ste_attr.used_id_rtc_0 = &rule->rtc_0; ste_attr.used_id_rtc_1 = &rule->rtc_1; ste_attr.wqe_ctrl = &wqe_ctrl; - ste_attr.wqe_tag = &rule->tag; - ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer); + ste_attr.wqe_tag_is_jumbo = is_jumbo; ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE; if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher))) ste_attr.direct_index = attr->rule_idx; - mlx5dr_send_ste(queue, &ste_attr); + mlx5dr_rule_load_delete_info(rule, &ste_attr); + + if (unlikely(fw_wqe)) { + mlx5dr_send_stes_fw(queue, &ste_attr); + mlx5dr_rule_clear_delete_info(rule); + } else { + mlx5dr_send_ste(queue, &ste_attr); + } return 0; } diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h index f2fe418159..886cf77992 100644 --- a/drivers/net/mlx5/hws/mlx5dr_rule.h +++ b/drivers/net/mlx5/hws/mlx5dr_rule.h @@ -36,6 +36,8 @@ struct mlx5dr_rule { struct mlx5dr_matcher *matcher; union { struct mlx5dr_rule_match_tag tag; + /* Pointer to tag to store more than one tag */ + struct mlx5dr_rule_match_tag *tag_ptr; struct ibv_flow *flow; }; uint32_t rtc_0; /* The RTC into which the STE was inserted */ diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h index 47bb66b3c7..d0977ec851 100644 --- a/drivers/net/mlx5/hws/mlx5dr_send.h +++ b/drivers/net/mlx5/hws/mlx5dr_send.h @@ -54,8 +54,13 @@ struct mlx5dr_wqe_gta_data_seg_ste { __be32 rsvd0_ctr_id; __be32 rsvd1_definer; __be32 rsvd2[3]; - __be32 action[3]; - __be32 tag[8]; + union { + struct { + __be32 action[3]; + __be32 tag[8]; + }; + __be32 jumbo[11]; + }; }; struct mlx5dr_wqe_gta_data_seg_arg {