From patchwork Tue Jan 31 09:33:30 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122723
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
To: , , , "Matan Azrad"
CC: , , Erez Shitrit
Subject: [v1 01/16] net/mlx5/hws: support synchronous drain
Date: Tue, 31 Jan 2023 11:33:30 +0200
Message-ID: <20230131093346.1261066-2-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>
List-Id: DPDK patches and discussions

Until now, only asynchronous drain was supported: it triggers the queue to
start the drain and returns immediately. Add support for synchronous drain,
which guarantees that all work on the queue has been processed before
returning. This is useful when working with a FW command and a HW queue in
parallel, sending arguments over the HW queue and the match over the FW
command, which requires synchronization. This also fixes an issue with
sending shared arguments that require more than one WQE.

Signed-off-by: Erez Shitrit
Signed-off-by: Alex Vesker
---
 drivers/net/mlx5/hws/mlx5dr.h         |  6 ++++--
 drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 27 ++++-----------------------
 drivers/net/mlx5/hws/mlx5dr_send.c    | 16 ++++++++++++++--
 drivers/net/mlx5/hws/mlx5dr_send.h    |  5 +++++
 drivers/net/mlx5/mlx5_flow_hw.c       |  2 +-
 5 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index b3b2bf34f2..2b02884dc3 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -86,8 +86,10 @@ enum mlx5dr_match_template_flags {
 };
 
 enum mlx5dr_send_queue_actions {
-	/* Start executing all pending queued rules and write to HW */
-	MLX5DR_SEND_QUEUE_ACTION_DRAIN = 1 << 0,
+	/* Start executing all pending queued rules */
+	MLX5DR_SEND_QUEUE_ACTION_DRAIN_ASYNC = 1 << 0,
+	/* Start executing all pending queued rules wait till completion */
+	MLX5DR_SEND_QUEUE_ACTION_DRAIN_SYNC = 1 << 1,
 };
 
 struct mlx5dr_context_attr {
diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
index df451f1ae0..152025d302 100644
--- a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
+++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
@@ -306,27 +306,6 @@ void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue,
 	mlx5dr_send_engine_post_end(&ctrl, &send_attr);
 }
 
-static int
-mlx5dr_arg_poll_for_comp(struct mlx5dr_context *ctx, uint16_t queue_id)
-{
-	struct rte_flow_op_result comp[1];
-	int ret;
-
-	while (true) {
-		ret = mlx5dr_send_queue_poll(ctx, queue_id, comp, 1);
-		if (ret) {
-			if (ret < 0) {
-				DR_LOG(ERR, "Failed mlx5dr_send_queue_poll");
-			} else if (comp[0].status == RTE_FLOW_OP_ERROR) {
-				DR_LOG(ERR, "Got comp with error");
-				rte_errno = ENOENT;
-			}
-			break;
-		}
-	}
-	return (ret == 1 ? 0 : ret);
-}
-
 void mlx5dr_arg_write(struct mlx5dr_send_engine *queue,
 		      void *comp_data,
 		      uint32_t arg_idx,
@@ -388,9 +367,11 @@ int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx,
 	mlx5dr_send_engine_flush_queue(queue);
 
 	/* Poll for completion */
-	ret = mlx5dr_arg_poll_for_comp(ctx, ctx->queues - 1);
+	ret = mlx5dr_send_queue_action(ctx, ctx->queues - 1,
+				       MLX5DR_SEND_QUEUE_ACTION_DRAIN_SYNC);
+
 	if (ret)
-		DR_LOG(ERR, "Failed to get completions for shared action");
+		DR_LOG(ERR, "Failed to drain arg queue");
 
 	pthread_spin_unlock(&ctx->ctrl_lock);
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index 5c8bbe6fc6..a507e5f626 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -830,18 +830,30 @@ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx,
 {
 	struct mlx5dr_send_ring_sq *send_sq;
 	struct mlx5dr_send_engine *queue;
+	bool wait_comp = false;
+	int64_t polled = 0;
 
 	queue = &ctx->send_queue[queue_id];
 	send_sq = &queue->send_ring->send_sq;
 
-	if (actions == MLX5DR_SEND_QUEUE_ACTION_DRAIN) {
+	switch (actions) {
+	case MLX5DR_SEND_QUEUE_ACTION_DRAIN_SYNC:
+		wait_comp = true;
+		/* FALLTHROUGH */
+	case MLX5DR_SEND_QUEUE_ACTION_DRAIN_ASYNC:
 		if (send_sq->head_dep_idx != send_sq->tail_dep_idx)
 			/* Send dependent WQEs to drain the queue */
 			mlx5dr_send_all_dep_wqe(queue);
 		else
 			/* Signal on the last posted WQE */
 			mlx5dr_send_engine_flush_queue(queue);
-	} else {
+
+		/* Poll queue until empty */
+		while (wait_comp && !mlx5dr_send_engine_empty(queue))
+			mlx5dr_send_engine_poll_cqs(queue, NULL, &polled, 0);
+
+		break;
+	default:
 		rte_errno = -EINVAL;
 		return rte_errno;
 	}
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index 8d4769495d..fcddcc6366 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -240,6 +240,11 @@ void mlx5dr_send_ste(struct mlx5dr_send_engine *queue,
 
 void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue);
 
+static inline bool mlx5dr_send_engine_empty(struct mlx5dr_send_engine *queue)
+{
+	return (queue->send_ring->send_sq.cur_post == queue->send_ring->send_cq.poll_wqe);
+}
+
 static inline bool mlx5dr_send_engine_full(struct mlx5dr_send_engine *queue)
 {
 	return queue->used_entries >= queue->th_entries;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 20c71ff7f0..7e87d589cb 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2851,7 +2851,7 @@ flow_hw_push(struct rte_eth_dev *dev,
 	__flow_hw_push_action(dev, queue);
 	ret = mlx5dr_send_queue_action(priv->dr_ctx, queue,
-				       MLX5DR_SEND_QUEUE_ACTION_DRAIN);
+				       MLX5DR_SEND_QUEUE_ACTION_DRAIN_ASYNC);
 	if (ret) {
 		rte_flow_error_set(error, rte_errno,
 				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,

From patchwork Tue Jan 31 09:33:31 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122727
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
To: , , , "Matan Azrad"
CC: ,
Subject: [v1 02/16] net/mlx5/hws: matcher remove AT and MT limitation
Date: Tue, 31 Jan 2023 11:33:31 +0200
Message-ID: <20230131093346.1261066-3-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>
The action and match templates were stored on the matcher in a fixed size
array to reduce cache misses and reuse template calculations.
This approuch introduced two issues: -limitation of fixed array -definer is bindind to match template and cannot be used with union definer since the layout is fixed Signed-off-by: Alex Vesker --- drivers/net/mlx5/hws/mlx5dr_debug.c | 4 +- drivers/net/mlx5/hws/mlx5dr_matcher.c | 109 ++++++++++++++++---------- drivers/net/mlx5/hws/mlx5dr_matcher.h | 8 +- drivers/net/mlx5/hws/mlx5dr_rule.c | 15 ++-- 4 files changed, 78 insertions(+), 58 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c index 0815327b18..9199ec16e0 100644 --- a/drivers/net/mlx5/hws/mlx5dr_debug.c +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -92,7 +92,7 @@ mlx5dr_debug_dump_matcher_match_template(FILE *f, struct mlx5dr_matcher *matcher int i, ret; for (i = 0; i < matcher->num_of_mt; i++) { - struct mlx5dr_match_template *mt = matcher->mt[i]; + struct mlx5dr_match_template *mt = &matcher->mt[i]; ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d\n", MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE, @@ -123,7 +123,7 @@ mlx5dr_debug_dump_matcher_action_template(FILE *f, struct mlx5dr_matcher *matche int i, j, ret; for (i = 0; i < matcher->num_of_at; i++) { - struct mlx5dr_action_template *at = matcher->at[i]; + struct mlx5dr_action_template *at = &matcher->at[i]; ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d", MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE, diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c index d509a2f0e1..913bb9d447 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.c +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -430,8 +430,8 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, /* The usual Hash Table */ rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; /* The first match template is used since all share the same definer */ - rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt[0]->definer); - rtc_attr.is_jumbo = 
mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt->definer); + rtc_attr.is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer); } else if (attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_INDEX) { rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; rtc_attr.num_hash_definer = 1; @@ -439,10 +439,8 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, if (attr->distribute_mode == MLX5DR_MATCHER_DISTRIBUTE_BY_HASH) { /* Hash Split Table */ rtc_attr.access_index_mode = MLX5_IFC_RTC_STE_ACCESS_MODE_BY_HASH; - rtc_attr.definer_id = - mlx5dr_definer_get_id(matcher->mt[0]->definer); - rtc_attr.is_jumbo = - mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt->definer); + rtc_attr.is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer); } else if (attr->distribute_mode == MLX5DR_MATCHER_DISTRIBUTE_BY_LINEAR) { /* Linear Lookup Table */ rtc_attr.access_index_mode = MLX5_IFC_RTC_STE_ACCESS_MODE_LINEAR; @@ -579,7 +577,7 @@ static void mlx5dr_matcher_set_pool_attr(struct mlx5dr_pool_attr *attr, static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) { - bool is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + bool is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer); struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; struct mlx5dr_table *tbl = matcher->tbl; struct mlx5dr_pool_attr pool_attr = {0}; @@ -589,7 +587,7 @@ static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) bool valid; for (i = 0; i < matcher->num_of_at; i++) { - struct mlx5dr_action_template *at = matcher->at[i]; + struct mlx5dr_action_template *at = &matcher->at[i]; /* Check if action combinabtion is valid */ valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type); @@ -679,7 +677,7 @@ static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) for (i = 0; i < matcher->num_of_mt; i++) { /* Get a definer for 
each match template */ - ret = mlx5dr_definer_get(ctx, matcher->mt[i]); + ret = mlx5dr_definer_get(ctx, &matcher->mt[i]); if (ret) goto definer_put; @@ -689,8 +687,8 @@ static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) if (i == 0) continue; - ret = mlx5dr_definer_compare(matcher->mt[i]->definer, - matcher->mt[i - 1]->definer); + ret = mlx5dr_definer_compare(matcher->mt[i].definer, + matcher->mt[i - 1].definer); if (ret) { DR_LOG(ERR, "Match templates cannot be used on the same matcher"); rte_errno = ENOTSUP; @@ -716,7 +714,7 @@ static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) definer_put: while (created--) - mlx5dr_definer_put(matcher->mt[created]); + mlx5dr_definer_put(&matcher->mt[created]); return ret; } @@ -726,7 +724,7 @@ static void mlx5dr_matcher_unbind_mt(struct mlx5dr_matcher *matcher) int i; for (i = 0; i < matcher->num_of_mt; i++) - mlx5dr_definer_put(matcher->mt[i]); + mlx5dr_definer_put(&matcher->mt[i]); mlx5dr_pool_destroy(matcher->match_ste.pool); } @@ -939,11 +937,10 @@ mlx5dr_matcher_create_col_matcher(struct mlx5dr_matcher *matcher) } col_matcher->tbl = matcher->tbl; - col_matcher->num_of_mt = matcher->num_of_mt; - memcpy(col_matcher->mt, matcher->mt, matcher->num_of_mt * sizeof(*matcher->mt)); + col_matcher->mt = matcher->mt; + col_matcher->at = matcher->at; col_matcher->num_of_at = matcher->num_of_at; - memcpy(col_matcher->at, matcher->at, matcher->num_of_at * sizeof(*matcher->at)); - + col_matcher->num_of_mt = matcher->num_of_mt; col_matcher->attr.priority = matcher->attr.priority; col_matcher->attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_HTABLE; col_matcher->attr.optimize_flow_src = matcher->attr.optimize_flow_src; @@ -1069,7 +1066,7 @@ static int mlx5dr_matcher_init_root(struct mlx5dr_matcher *matcher) flow_attr.tbl_type = type; /* On root table matcher, only a single match template is supported */ - ret = flow_dv_translate_items_hws(matcher->mt[0]->items, + ret = flow_dv_translate_items_hws(matcher->mt[0].items, 
&flow_attr, mask->match_buf, MLX5_SET_MATCHER_HS_M, NULL, &match_criteria, @@ -1126,36 +1123,64 @@ static int mlx5dr_matcher_uninit_root(struct mlx5dr_matcher *matcher) } static int -mlx5dr_matcher_check_template(uint8_t num_of_mt, uint8_t num_of_at, bool is_root) +mlx5dr_matcher_set_templates(struct mlx5dr_matcher *matcher, + struct mlx5dr_match_template *mt[], + uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at) { - uint8_t max_num_of_mt; - - max_num_of_mt = is_root ? - MLX5DR_MATCHER_MAX_MT_ROOT : - MLX5DR_MATCHER_MAX_MT; + bool is_root = mlx5dr_table_is_root(matcher->tbl); + int i; if (!num_of_mt || !num_of_at) { DR_LOG(ERR, "Number of action/match template cannot be zero"); - goto out_not_sup; + rte_errno = ENOTSUP; + return rte_errno; + } + + if (is_root && num_of_mt > MLX5DR_MATCHER_MAX_MT_ROOT) { + DR_LOG(ERR, "Number of match templates exceeds limit"); + rte_errno = ENOTSUP; + return rte_errno; } - if (num_of_at > MLX5DR_MATCHER_MAX_AT) { - DR_LOG(ERR, "Number of action templates exceeds limit"); - goto out_not_sup; + matcher->mt = simple_calloc(num_of_mt, sizeof(*matcher->mt)); + if (!matcher->mt) { + DR_LOG(ERR, "Failed to allocate match template array"); + rte_errno = ENOMEM; + return rte_errno; } - if (num_of_mt > max_num_of_mt) { - DR_LOG(ERR, "Number of match templates exceeds limit"); - goto out_not_sup; + matcher->at = simple_calloc(num_of_at, sizeof(*matcher->at)); + if (!matcher->at) { + DR_LOG(ERR, "Failed to allocate action template array"); + rte_errno = ENOMEM; + goto free_mt; } + for (i = 0; i < num_of_mt; i++) + matcher->mt[i] = *mt[i]; + + for (i = 0; i < num_of_at; i++) + matcher->at[i] = *at[i]; + + matcher->num_of_mt = num_of_mt; + matcher->num_of_at = num_of_at; + return 0; -out_not_sup: - rte_errno = ENOTSUP; +free_mt: + simple_free(matcher->mt); return rte_errno; } +static void +mlx5dr_matcher_unset_templates(struct mlx5dr_matcher *matcher) +{ + simple_free(matcher->at); + simple_free(matcher->mt); +} + 
struct mlx5dr_matcher * mlx5dr_matcher_create(struct mlx5dr_table *tbl, struct mlx5dr_match_template *mt[], @@ -1168,10 +1193,6 @@ mlx5dr_matcher_create(struct mlx5dr_table *tbl, struct mlx5dr_matcher *matcher; int ret; - ret = mlx5dr_matcher_check_template(num_of_mt, num_of_at, is_root); - if (ret) - return NULL; - matcher = simple_calloc(1, sizeof(*matcher)); if (!matcher) { rte_errno = ENOMEM; @@ -1180,15 +1201,15 @@ mlx5dr_matcher_create(struct mlx5dr_table *tbl, matcher->tbl = tbl; matcher->attr = *attr; - matcher->num_of_mt = num_of_mt; - memcpy(matcher->mt, mt, num_of_mt * sizeof(*mt)); - matcher->num_of_at = num_of_at; - memcpy(matcher->at, at, num_of_at * sizeof(*at)); ret = mlx5dr_matcher_process_attr(tbl->ctx->caps, matcher, is_root); if (ret) goto free_matcher; + ret = mlx5dr_matcher_set_templates(matcher, mt, num_of_mt, at, num_of_at); + if (ret) + goto free_matcher; + if (is_root) ret = mlx5dr_matcher_init_root(matcher); else @@ -1196,11 +1217,13 @@ mlx5dr_matcher_create(struct mlx5dr_table *tbl, if (ret) { DR_LOG(ERR, "Failed to initialise matcher: %d", ret); - goto free_matcher; + goto unset_templates; } return matcher; +unset_templates: + mlx5dr_matcher_unset_templates(matcher); free_matcher: simple_free(matcher); return NULL; @@ -1213,6 +1236,7 @@ int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher) else mlx5dr_matcher_uninit(matcher); + mlx5dr_matcher_unset_templates(matcher); simple_free(matcher); return 0; } @@ -1272,7 +1296,6 @@ mlx5dr_match_template_create(const struct rte_flow_item items[], int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) { - assert(!mt->refcount); simple_free(mt->items); simple_free(mt); return 0; diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h index 2bebc4bcce..b957f5ea4b 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.h +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -6,12 +6,8 @@ #define MLX5DR_MATCHER_H_ /* Max supported match template */ -#define 
MLX5DR_MATCHER_MAX_MT 2 #define MLX5DR_MATCHER_MAX_MT_ROOT 1 -/* Max supported action template */ -#define MLX5DR_MATCHER_MAX_AT 4 - /* We calculated that concatenating a collision table to the main table with * 3% of the main table rows will be enough resources for high insertion * success probability. @@ -59,9 +55,9 @@ struct mlx5dr_matcher { struct mlx5dr_table *tbl; struct mlx5dr_matcher_attr attr; struct mlx5dv_flow_matcher *dv_matcher; - struct mlx5dr_match_template *mt[MLX5DR_MATCHER_MAX_MT]; + struct mlx5dr_match_template *mt; uint8_t num_of_mt; - struct mlx5dr_action_template *at[MLX5DR_MATCHER_MAX_AT]; + struct mlx5dr_action_template *at; uint8_t num_of_at; struct mlx5dr_devx_obj *end_ft; struct mlx5dr_matcher *col_matcher; diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c index 60a82c022f..f5a0c46315 100644 --- a/drivers/net/mlx5/hws/mlx5dr_rule.c +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -5,10 +5,10 @@ #include "mlx5dr_internal.h" static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher, + struct mlx5dr_match_template *mt, const struct rte_flow_item *items, bool *skip_rx, bool *skip_tx) { - struct mlx5dr_match_template *mt = matcher->mt[0]; const struct flow_hw_port_info *vport; const struct rte_flow_item_ethdev *v; @@ -43,6 +43,7 @@ static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher, static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, struct mlx5dr_rule *rule, const struct rte_flow_item *items, + struct mlx5dr_match_template *mt, void *user_data) { struct mlx5dr_matcher *matcher = rule->matcher; @@ -63,7 +64,7 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, break; case MLX5DR_TABLE_TYPE_FDB: - mlx5dr_rule_skip(matcher, items, &skip_rx, &skip_tx); + mlx5dr_rule_skip(matcher, mt, items, &skip_rx, &skip_tx); if (!skip_rx) { dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; @@ -186,8 +187,8 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule 
*rule, uint8_t at_idx, struct mlx5dr_rule_action rule_actions[]) { - struct mlx5dr_action_template *at = rule->matcher->at[at_idx]; - struct mlx5dr_match_template *mt = rule->matcher->mt[mt_idx]; + struct mlx5dr_action_template *at = &rule->matcher->at[at_idx]; + struct mlx5dr_match_template *mt = &rule->matcher->mt[mt_idx]; bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer); struct mlx5dr_matcher *matcher = rule->matcher; struct mlx5dr_context *ctx = matcher->tbl->ctx; @@ -212,7 +213,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, * dep_wqe buffers (ctrl, data) are also reused for all STE writes. */ dep_wqe = mlx5dr_send_add_new_dep_wqe(queue); - mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, attr->user_data); + mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, mt, attr->user_data); ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; ste_attr.wqe_data = &dep_wqe->wqe_data; @@ -371,7 +372,7 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, ste_attr.used_id_rtc_1 = &rule->rtc_1; ste_attr.wqe_ctrl = &wqe_ctrl; ste_attr.wqe_tag = &rule->tag; - ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer); ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE; if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher))) ste_attr.direct_index = attr->rule_idx; @@ -388,7 +389,7 @@ static int mlx5dr_rule_create_root(struct mlx5dr_rule *rule, struct mlx5dr_rule_action rule_actions[]) { struct mlx5dv_flow_matcher *dv_matcher = rule->matcher->dv_matcher; - uint8_t num_actions = rule->matcher->at[at_idx]->num_actions; + uint8_t num_actions = rule->matcher->at[at_idx].num_actions; struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; struct mlx5dv_flow_match_parameters *value; struct mlx5_flow_attr flow_attr = {0};

From patchwork Tue Jan 31 09:33:32 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122729
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker <valex@nvidia.com>
Subject: [v1 03/16] net/mlx5/hws: support GTA WQE write using FW command
Date: Tue, 31 Jan 2023 11:33:32 +0200
Message-ID: <20230131093346.1261066-4-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>

The generate WQE command is used as an interface for writing GTA WQEs with fields that are not supported by current HW, for example an extended match definer.
Signed-off-by: Alex Vesker --- drivers/common/mlx5/mlx5_prm.h | 27 +++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_cmd.c | 47 +++++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 13 +++++++++ 3 files changed, 86 insertions(+), 1 deletion(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 9294f65e24..d4d8ddcd2a 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1141,6 +1141,7 @@ enum { MLX5_CMD_QUERY_REGEX_REGISTERS = 0xb07, MLX5_CMD_OP_ACCESS_REGISTER_USER = 0xb0c, MLX5_CMD_OP_ALLOW_OTHER_VHCA_ACCESS = 0xb16, + MLX5_CMD_OP_GENERATE_WQE = 0xb17, }; enum { @@ -2159,7 +2160,8 @@ struct mlx5_ifc_cmd_hca_cap_2_bits { u8 format_select_dw_gtpu_dw_1[0x8]; u8 format_select_dw_gtpu_dw_2[0x8]; u8 format_select_dw_gtpu_first_ext_dw_0[0x8]; - u8 reserved_at_2a0[0x560]; + u8 generate_wqe_type[0x20]; + u8 reserved_at_2c0[0x540]; }; struct mlx5_ifc_esw_cap_bits { @@ -3529,6 +3531,29 @@ struct mlx5_ifc_create_alias_obj_in_bits { struct mlx5_ifc_alias_context_bits alias_ctx; }; +struct mlx5_ifc_generate_wqe_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + u8 reserved_at_20[0x10]; + u8 op_mode[0x10]; + u8 reserved_at_40[0x40]; + u8 reserved_at_80[0x8]; + u8 pdn[0x18]; + u8 reserved_at_a0[0x160]; + u8 wqe_ctrl[0x80]; + u8 wqe_gta_ctrl[0x180]; + u8 wqe_gta_data_0[0x200]; + u8 wqe_gta_data_1[0x200]; +}; + +struct mlx5_ifc_generate_wqe_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x1c0]; + u8 cqe_data[0x200]; +}; + enum { MLX5_CRYPTO_KEY_SIZE_128b = 0x0, MLX5_CRYPTO_KEY_SIZE_256b = 0x1, diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c index 32378673cf..c648eacd03 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.c +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -795,6 +795,53 @@ mlx5dr_cmd_alias_obj_create(struct ibv_context *ctx, return devx_obj; } +int mlx5dr_cmd_generate_wqe(struct ibv_context *ctx, + struct 
mlx5dr_cmd_generate_wqe_attr *attr, + struct mlx5_cqe64 *ret_cqe) +{ + uint32_t out[MLX5_ST_SZ_DW(generate_wqe_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(generate_wqe_in)] = {0}; + uint8_t status; + void *ptr; + int ret; + + MLX5_SET(generate_wqe_in, in, opcode, MLX5_CMD_OP_GENERATE_WQE); + MLX5_SET(generate_wqe_in, in, pdn, attr->pdn); + + ptr = MLX5_ADDR_OF(generate_wqe_in, in, wqe_ctrl); + memcpy(ptr, attr->wqe_ctrl, MLX5_FLD_SZ_BYTES(generate_wqe_in, wqe_ctrl)); + + ptr = MLX5_ADDR_OF(generate_wqe_in, in, wqe_gta_ctrl); + memcpy(ptr, attr->gta_ctrl, MLX5_FLD_SZ_BYTES(generate_wqe_in, wqe_gta_ctrl)); + + ptr = MLX5_ADDR_OF(generate_wqe_in, in, wqe_gta_data_0); + memcpy(ptr, attr->gta_data_0, MLX5_FLD_SZ_BYTES(generate_wqe_in, wqe_gta_data_0)); + + if (attr->gta_data_1) { + ptr = MLX5_ADDR_OF(generate_wqe_in, in, wqe_gta_data_1); + memcpy(ptr, attr->gta_data_1, MLX5_FLD_SZ_BYTES(generate_wqe_in, wqe_gta_data_1)); + } + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to write GTA WQE using FW"); + rte_errno = errno; + return rte_errno; + } + + status = MLX5_GET(generate_wqe_out, out, status); + if (status) { + DR_LOG(ERR, "Invalid FW CQE status %d", status); + rte_errno = EINVAL; + return rte_errno; + } + + ptr = MLX5_ADDR_OF(generate_wqe_out, out, cqe_data); + memcpy(ret_cqe, ptr, sizeof(*ret_cqe)); + + return 0; +} + int mlx5dr_cmd_query_caps(struct ibv_context *ctx, struct mlx5dr_cmd_query_caps *caps) { diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h index 468557ba16..3689d09897 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.h +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -153,6 +153,14 @@ struct mlx5dr_cmd_query_vport_caps { uint32_t metadata_c_mask; }; +struct mlx5dr_cmd_generate_wqe_attr { + uint8_t *wqe_ctrl; + uint8_t *gta_ctrl; + uint8_t *gta_data_0; + uint8_t *gta_data_1; + uint32_t pdn; +}; + struct mlx5dr_cmd_query_caps { uint32_t wire_regc; uint32_t 
wire_regc_mask; @@ -212,6 +220,11 @@ int mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj, struct mlx5dr_cmd_stc_modify_attr *stc_attr); +int +mlx5dr_cmd_generate_wqe(struct ibv_context *ctx, + struct mlx5dr_cmd_generate_wqe_attr *attr, + struct mlx5_cqe64 *ret_cqe); + struct mlx5dr_devx_obj * mlx5dr_cmd_ste_create(struct ibv_context *ctx, struct mlx5dr_cmd_ste_create_attr *ste_attr);

From patchwork Tue Jan 31 09:33:33 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122725
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker <valex@nvidia.com>
Subject: [v1 04/16] net/mlx5/hws: add capability query for gen wqe command
Date: Tue, 31 Jan 2023 11:33:33 +0200
Message-ID: <20230131093346.1261066-5-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>

Read the capabilities required to determine support for GENERATE_WQE.

Signed-off-by: Alex Vesker --- drivers/common/mlx5/mlx5_prm.h | 6 ++++-- drivers/net/mlx5/hws/mlx5dr_cmd.c | 12 ++++++++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 3 +++ 3 files changed, 19 insertions(+), 2 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index d4d8ddcd2a..6d0b5e640c 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -2205,10 +2205,12 @@ struct mlx5_ifc_wqe_based_flow_table_cap_bits { u8 header_insert_type[0x10]; u8 header_remove_type[0x10]; u8 trivial_match_definer[0x20]; - u8 reserved_at_140[0x20]; + u8 reserved_at_140[0x1b]; + u8 rtc_max_num_hash_definer_gen_wqe[0x5]; u8 reserved_at_160[0x18]; u8 access_index_mode[0x8]; - u8 reserved_at_180[0x20]; + u8 reserved_at_180[0x10]; + u8 ste_fromat_gen_wqe[0x10]; u8 linear_match_definer_reg_c3[0x20]; }; diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c index c648eacd03..e311be780b 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.c +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -928,6 +928,10 @@ int mlx5dr_cmd_query_caps(struct ibv_context *ctx, capability.cmd_hca_cap_2. format_select_dw_gtpu_first_ext_dw_0); + caps->supp_type_gen_wqe = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + generate_wqe_type); + /* check cross-VHCA support in cap2 */ res = MLX5_GET(query_hca_cap_out, out, @@ -1033,6 +1037,14 @@ int mlx5dr_cmd_query_caps(struct ibv_context *ctx, caps->linear_match_definer = MLX5_GET(query_hca_cap_out, out, capability.wqe_based_flow_table_cap. 
linear_match_definer_reg_c3); + + caps->rtc_max_hash_def_gen_wqe = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_max_num_hash_definer_gen_wqe); + + caps->supp_ste_fromat_gen_wqe = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_fromat_gen_wqe); } if (caps->eswitch_manager) { diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h index 3689d09897..a42218ba74 100644 --- a/drivers/net/mlx5/hws/mlx5dr_cmd.h +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -183,6 +183,9 @@ struct mlx5dr_cmd_query_caps { bool full_dw_jumbo_support; bool rtc_hash_split_table; bool rtc_linear_lookup_table; + uint32_t supp_type_gen_wqe; + uint8_t rtc_max_hash_def_gen_wqe; + uint16_t supp_ste_fromat_gen_wqe; struct mlx5dr_cmd_query_ft_caps nic_ft; struct mlx5dr_cmd_query_ft_caps fdb_ft; bool eswitch_manager;

From patchwork Tue Jan 31 09:33:34 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122724
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker <valex@nvidia.com>
Subject: [v1 05/16] net/mlx5/hws: align RTC create command with PRM format
Date: Tue, 31 Jan 2023 11:33:34 +0200
Message-ID: <20230131093346.1261066-6-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>

Rename rtc params create for new format.

Signed-off-by: Alex Vesker --- drivers/common/mlx5/mlx5_prm.h | 16 ++++++++++------ drivers/net/mlx5/hws/mlx5dr_cmd.c | 13 +++++++++++-- drivers/net/mlx5/hws/mlx5dr_cmd.h | 11 +++++++---- drivers/net/mlx5/hws/mlx5dr_matcher.c | 19 ++++++++++++------- 4 files changed, 40 insertions(+), 19 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 6d0b5e640c..cf46296afb 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -3237,6 +3237,7 @@ enum mlx5_ifc_rtc_access_mode { enum mlx5_ifc_rtc_ste_format { MLX5_IFC_RTC_STE_FORMAT_8DW = 0x4, MLX5_IFC_RTC_STE_FORMAT_11DW = 0x5, + MLX5_IFC_RTC_STE_FORMAT_RANGE = 0x7, }; enum mlx5_ifc_rtc_reparse_mode { @@ -3251,24 +3252,27 @@ struct mlx5_ifc_rtc_bits { u8 reserved_at_40[0x40]; u8 update_index_mode[0x2]; u8 reparse_mode[0x2]; - u8 reserved_at_84[0x4]; + u8 num_match_ste[0x4]; u8 pd[0x18]; u8 reserved_at_a0[0x9]; u8 access_index_mode[0x3]; u8 num_hash_definer[0x4]; - u8 reserved_at_b0[0x3]; + u8 update_method[0x1]; + u8 reserved_at_b1[0x2]; u8 log_depth[0x5]; u8 log_hash_size[0x8]; - u8 ste_format[0x8]; + u8 ste_format_0[0x8]; u8 table_type[0x8]; - u8 
	reserved_at_d0[0x10];
-	u8 match_definer_id[0x20];
+	u8 ste_format_1[0x8];
+	u8 reserved_at_d8[0x8];
+	u8 match_definer_0[0x20];
 	u8 stc_id[0x20];
 	u8 ste_table_base_id[0x20];
 	u8 ste_table_offset[0x20];
 	u8 reserved_at_160[0x8];
 	u8 miss_flow_table_id[0x18];
-	u8 reserved_at_180[0x280];
+	u8 match_definer_1[0x20];
+	u8 reserved_at_1a0[0x260];
 };
 
 struct mlx5_ifc_alias_context_bits {

diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c
index e311be780b..a8d1cf0322 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.c
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c
@@ -259,17 +259,26 @@ mlx5dr_cmd_rtc_create(struct ibv_context *ctx,
 		 attr, obj_type, MLX5_GENERAL_OBJ_TYPE_RTC);
 
 	attr = MLX5_ADDR_OF(create_rtc_in, in, rtc);
-	MLX5_SET(rtc, attr, ste_format, rtc_attr->is_jumbo ?
+	MLX5_SET(rtc, attr, ste_format_0, rtc_attr->is_frst_jumbo ?
 		MLX5_IFC_RTC_STE_FORMAT_11DW : MLX5_IFC_RTC_STE_FORMAT_8DW);
+
+	if (rtc_attr->is_scnd_range) {
+		MLX5_SET(rtc, attr, ste_format_1, MLX5_IFC_RTC_STE_FORMAT_RANGE);
+		MLX5_SET(rtc, attr, num_match_ste, 2);
+	}
+
 	MLX5_SET(rtc, attr, pd, rtc_attr->pd);
+	MLX5_SET(rtc, attr, update_method, rtc_attr->fw_gen_wqe);
 	MLX5_SET(rtc, attr, update_index_mode, rtc_attr->update_index_mode);
 	MLX5_SET(rtc, attr, access_index_mode, rtc_attr->access_index_mode);
 	MLX5_SET(rtc, attr, num_hash_definer, rtc_attr->num_hash_definer);
 	MLX5_SET(rtc, attr, log_depth, rtc_attr->log_depth);
 	MLX5_SET(rtc, attr, log_hash_size, rtc_attr->log_size);
 	MLX5_SET(rtc, attr, table_type, rtc_attr->table_type);
-	MLX5_SET(rtc, attr, match_definer_id, rtc_attr->definer_id);
+	MLX5_SET(rtc, attr, num_hash_definer, rtc_attr->num_hash_definer);
+	MLX5_SET(rtc, attr, match_definer_0, rtc_attr->match_definer_0);
+	MLX5_SET(rtc, attr, match_definer_1, rtc_attr->match_definer_1);
 	MLX5_SET(rtc, attr, stc_id, rtc_attr->stc_base);
 	MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base);
 	MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset);

diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h
index a42218ba74..e062cb8171 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.h
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h
@@ -23,8 +23,8 @@ struct mlx5dr_cmd_ft_modify_attr {
 };
 
 struct mlx5dr_cmd_fg_attr {
-	uint32_t table_id;
-	uint32_t table_type;
+	uint32_t table_id;
+	uint32_t table_type;
 };
 
 struct mlx5dr_cmd_forward_tbl {
@@ -40,14 +40,17 @@ struct mlx5dr_cmd_rtc_create_attr {
 	uint32_t ste_base;
 	uint32_t ste_offset;
 	uint32_t miss_ft_id;
+	bool fw_gen_wqe;
 	uint8_t update_index_mode;
 	uint8_t access_index_mode;
 	uint8_t num_hash_definer;
 	uint8_t log_depth;
 	uint8_t log_size;
 	uint8_t table_type;
-	uint8_t definer_id;
-	bool is_jumbo;
+	uint8_t match_definer_0;
+	uint8_t match_definer_1;
+	bool is_frst_jumbo;
+	bool is_scnd_range;
 };
 
 struct mlx5dr_cmd_alias_obj_create_attr {

diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index 913bb9d447..101a12d361 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -413,6 +413,8 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher,
 	struct mlx5dr_pool *ste_pool, *stc_pool;
 	struct mlx5dr_devx_obj *devx_obj;
 	struct mlx5dr_pool_chunk *ste;
+	uint8_t first_definer_id;
+	bool is_jumbo;
 	int ret;
 
 	switch (rtc_type) {
@@ -426,12 +428,15 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher,
 		rtc_attr.log_depth = attr->table.sz_col_log;
 		rtc_attr.miss_ft_id = matcher->end_ft->id;
 
+		is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer);
+		first_definer_id = mlx5dr_definer_get_id(matcher->mt->definer);
+
 		if (attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH) {
 			/* The usual Hash Table */
 			rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH;
 			/* The first match template is used since all share the same definer */
-			rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt->definer);
-			rtc_attr.is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer);
+			rtc_attr.match_definer_0 = first_definer_id;
+			rtc_attr.is_frst_jumbo = is_jumbo;
 		} else if (attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_INDEX) {
 			rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET;
 			rtc_attr.num_hash_definer = 1;
@@ -439,12 +444,12 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher,
 			if (attr->distribute_mode == MLX5DR_MATCHER_DISTRIBUTE_BY_HASH) {
 				/* Hash Split Table */
 				rtc_attr.access_index_mode = MLX5_IFC_RTC_STE_ACCESS_MODE_BY_HASH;
-				rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt->definer);
-				rtc_attr.is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer);
+				rtc_attr.match_definer_0 = first_definer_id;
+				rtc_attr.is_frst_jumbo = is_jumbo;
 			} else if (attr->distribute_mode == MLX5DR_MATCHER_DISTRIBUTE_BY_LINEAR) {
 				/* Linear Lookup Table */
 				rtc_attr.access_index_mode = MLX5_IFC_RTC_STE_ACCESS_MODE_LINEAR;
-				rtc_attr.definer_id = ctx->caps->linear_match_definer;
+				rtc_attr.match_definer_0 = ctx->caps->linear_match_definer;
 			}
 		}
 
@@ -468,8 +473,8 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher,
 		rtc_attr.log_depth = 0;
 		rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET;
 		/* The action STEs use the default always hit definer */
-		rtc_attr.definer_id = ctx->caps->trivial_match_definer;
-		rtc_attr.is_jumbo = false;
+		rtc_attr.match_definer_0 = ctx->caps->trivial_match_definer;
+		rtc_attr.is_frst_jumbo = false;
 		rtc_attr.miss_ft_id = 0;
 		break;
From patchwork Tue Jan 31 09:33:35 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122726
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker <valex@nvidia.com>
Subject: [v1 06/16] net/mlx5/hws: add send FW match STE using gen WQE
Date: Tue, 31 Jan 2023 11:33:35 +0200
Message-ID: <20230131093346.1261066-7-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>

The send STE WQE function wraps the send WQE command to support WQE
build and FDB abstraction. Sending through FW differs from sending
through HW: FW returns the completion immediately, which requires us to
retry on failure and to prepare the completion as part of the send
process.
Signed-off-by: Alex Vesker <valex@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_send.c | 134 +++++++++++++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_send.h |   7 +-
 2 files changed, 140 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index a507e5f626..a9958df4f2 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -235,6 +235,140 @@ void mlx5dr_send_ste(struct mlx5dr_send_engine *queue,
 	send_attr->fence = fence;
 }
 
+static
+int mlx5dr_send_wqe_fw(struct ibv_context *ibv_ctx,
+		       uint32_t pd_num,
+		       struct mlx5dr_send_engine_post_attr *send_attr,
+		       struct mlx5dr_wqe_gta_ctrl_seg *send_wqe_ctrl,
+		       void *send_wqe_match_data,
+		       void *send_wqe_match_tag,
+		       bool is_jumbo,
+		       uint8_t gta_opcode)
+{
+	bool has_match = send_wqe_match_data || send_wqe_match_tag;
+	struct mlx5dr_wqe_gta_data_seg_ste gta_wqe_data0 = {0};
+	struct mlx5dr_wqe_gta_ctrl_seg gta_wqe_ctrl = {0};
+	struct mlx5dr_cmd_generate_wqe_attr attr = {0};
+	struct mlx5dr_wqe_ctrl_seg wqe_ctrl = {0};
+	struct mlx5_cqe64 cqe;
+	uint32_t flags = 0;
+	int ret;
+
+	/* Set WQE control */
+	wqe_ctrl.opmod_idx_opcode =
+		rte_cpu_to_be_32((send_attr->opmod << 24) | send_attr->opcode);
+	wqe_ctrl.qpn_ds =
+		rte_cpu_to_be_32((send_attr->len + sizeof(struct mlx5dr_wqe_ctrl_seg)) / 16);
+	flags |= send_attr->notify_hw ? MLX5_WQE_CTRL_CQ_UPDATE : 0;
+	wqe_ctrl.flags = rte_cpu_to_be_32(flags);
+	wqe_ctrl.imm = rte_cpu_to_be_32(send_attr->id);
+
+	/* Set GTA WQE CTRL */
+	memcpy(gta_wqe_ctrl.stc_ix, send_wqe_ctrl->stc_ix, sizeof(send_wqe_ctrl->stc_ix));
+	gta_wqe_ctrl.op_dirix = htobe32(gta_opcode << 28);
+
+	/* Set GTA match WQE DATA */
+	if (has_match) {
+		if (send_wqe_match_data)
+			memcpy(&gta_wqe_data0, send_wqe_match_data, sizeof(gta_wqe_data0));
+		else
+			mlx5dr_send_wqe_set_tag(&gta_wqe_data0, send_wqe_match_tag, is_jumbo);
+
+		gta_wqe_data0.rsvd1_definer = htobe32(send_attr->match_definer_id << 8);
+		attr.gta_data_0 = (uint8_t *)&gta_wqe_data0;
+	}
+
+	attr.pdn = pd_num;
+	attr.wqe_ctrl = (uint8_t *)&wqe_ctrl;
+	attr.gta_ctrl = (uint8_t *)&gta_wqe_ctrl;
+
+send_wqe:
+	ret = mlx5dr_cmd_generate_wqe(ibv_ctx, &attr, &cqe);
+	if (ret) {
+		DR_LOG(ERR, "Failed to write WQE using command");
+		return ret;
+	}
+
+	if ((mlx5dv_get_cqe_opcode(&cqe) == MLX5_CQE_REQ) &&
+	    (rte_be_to_cpu_32(cqe.byte_cnt) >> 31 == 0)) {
+		*send_attr->used_id = send_attr->id;
+		return 0;
+	}
+
+	/* Retry if rule failed */
+	if (send_attr->retry_id) {
+		wqe_ctrl.imm = rte_cpu_to_be_32(send_attr->retry_id);
+		send_attr->id = send_attr->retry_id;
+		send_attr->retry_id = 0;
+		goto send_wqe;
+	}
+
+	return -1;
+}
+
+void mlx5dr_send_stes_fw(struct mlx5dr_send_engine *queue,
+			 struct mlx5dr_send_ste_attr *ste_attr)
+{
+	struct mlx5dr_send_engine_post_attr *send_attr = &ste_attr->send_attr;
+	struct mlx5dr_rule *rule = send_attr->rule;
+	struct ibv_context *ibv_ctx;
+	struct mlx5dr_context *ctx;
+	uint16_t queue_id;
+	uint32_t pdn;
+	int ret;
+
+	ctx = rule->matcher->tbl->ctx;
+	queue_id = queue - ctx->send_queue;
+	ibv_ctx = ctx->ibv_ctx;
+	pdn = ctx->pd_num;
+
+	/* Writing through FW can't HW fence, therefore we drain the queue */
+	if (send_attr->fence)
+		mlx5dr_send_queue_action(ctx,
+					 queue_id,
+					 MLX5DR_SEND_QUEUE_ACTION_DRAIN_SYNC);
+
+	if (ste_attr->rtc_1) {
+		send_attr->id = ste_attr->rtc_1;
+		send_attr->used_id = ste_attr->used_id_rtc_1;
+		send_attr->retry_id = ste_attr->retry_rtc_1;
+		ret = mlx5dr_send_wqe_fw(ibv_ctx, pdn, send_attr,
+					 ste_attr->wqe_ctrl,
+					 ste_attr->wqe_data,
+					 ste_attr->wqe_tag,
+					 ste_attr->wqe_tag_is_jumbo,
+					 ste_attr->gta_opcode);
+		if (ret)
+			goto fail_rule;
+	}
+
+	if (ste_attr->rtc_0) {
+		send_attr->id = ste_attr->rtc_0;
+		send_attr->used_id = ste_attr->used_id_rtc_0;
+		send_attr->retry_id = ste_attr->retry_rtc_0;
+		ret = mlx5dr_send_wqe_fw(ibv_ctx, pdn, send_attr,
+					 ste_attr->wqe_ctrl,
+					 ste_attr->wqe_data,
+					 ste_attr->wqe_tag,
+					 ste_attr->wqe_tag_is_jumbo,
+					 ste_attr->gta_opcode);
+		if (ret)
+			goto fail_rule;
+	}
+
+	/* Increase the status, this only works on good flow as the enum
+	 * is arranged in order: creating -> created -> deleting -> deleted
+	 */
+	rule->status++;
+	mlx5dr_send_engine_gen_comp(queue, send_attr->user_data, RTE_FLOW_OP_SUCCESS);
+	return;
+
+fail_rule:
+	rule->status = !rule->rtc_0 && !rule->rtc_1 ?
+		MLX5DR_RULE_STATUS_FAILED : MLX5DR_RULE_STATUS_FAILING;
+	mlx5dr_send_engine_gen_comp(queue, send_attr->user_data, RTE_FLOW_OP_ERROR);
+}
+
 static void mlx5dr_send_engine_retry_post_send(struct mlx5dr_send_engine *queue,
 					       struct mlx5dr_send_ring_priv *priv,
 					       uint16_t wqe_cnt)

diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index fcddcc6366..1e845b1c7a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -52,7 +52,8 @@ struct mlx5dr_wqe_gta_ctrl_seg {
 struct mlx5dr_wqe_gta_data_seg_ste {
 	__be32 rsvd0_ctr_id;
-	__be32 rsvd1[4];
+	__be32 rsvd1_definer;
+	__be32 rsvd2[3];
 	__be32 action[3];
 	__be32 tag[8];
 };
@@ -159,6 +160,7 @@ struct mlx5dr_send_engine_post_attr {
 	uint8_t opmod;
 	uint8_t notify_hw;
 	uint8_t fence;
+	uint8_t match_definer_id;
 	size_t len;
 	struct mlx5dr_rule *rule;
 	uint32_t id;
@@ -238,6 +240,9 @@ void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl,
 void mlx5dr_send_ste(struct mlx5dr_send_engine *queue,
 		     struct mlx5dr_send_ste_attr *ste_attr);
 
+void mlx5dr_send_stes_fw(struct mlx5dr_send_engine *queue,
+			 struct mlx5dr_send_ste_attr *ste_attr);
+
 void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue);
 
 static inline bool mlx5dr_send_engine_empty(struct mlx5dr_send_engine *queue)
From patchwork Tue Jan 31 09:33:36 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122728
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker <valex@nvidia.com>
Subject: [v1 07/16] net/mlx5/hws: add send FW range STE WQE
Date: Tue, 31 Jan 2023 11:33:36 +0200
Message-ID: <20230131093346.1261066-8-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>

FW WQE supports complex rules, constructed from 2 STEs, for example:

  Hash(DefinerA)
  SteMatch(DefinerB)
  SteRange(DefinerC)

DefinerA is a subset of DefinerB. This complex rule is written using a
single FW command which has a single WQE control, STE match data0 and
STE range data1. FW manages the STEs/ICM and the coherency between
deletion and creation. It is also possible to pass the definer value as
part of the STE, but this is not supported by current HW.

Signed-off-by: Alex Vesker <valex@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_send.c | 19 +++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_send.h |  3 +++
 2 files changed, 22 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index a9958df4f2..51aaf5c8e2 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -242,11 +242,15 @@ int mlx5dr_send_wqe_fw(struct ibv_context *ibv_ctx,
 		       struct mlx5dr_wqe_gta_ctrl_seg *send_wqe_ctrl,
 		       void *send_wqe_match_data,
 		       void *send_wqe_match_tag,
+		       void *send_wqe_range_data,
+		       void *send_wqe_range_tag,
 		       bool is_jumbo,
 		       uint8_t gta_opcode)
 {
+	bool has_range = send_wqe_range_data || send_wqe_range_tag;
 	bool has_match = send_wqe_match_data || send_wqe_match_tag;
 	struct mlx5dr_wqe_gta_data_seg_ste gta_wqe_data0 = {0};
+	struct mlx5dr_wqe_gta_data_seg_ste gta_wqe_data1 = {0};
 	struct mlx5dr_wqe_gta_ctrl_seg gta_wqe_ctrl = {0};
 	struct mlx5dr_cmd_generate_wqe_attr attr = {0};
 	struct mlx5dr_wqe_ctrl_seg wqe_ctrl = {0};
@@ -278,6 +282,17 @@ int mlx5dr_send_wqe_fw(struct ibv_context *ibv_ctx,
 		attr.gta_data_0 = (uint8_t *)&gta_wqe_data0;
 	}
 
+	/* Set GTA range WQE DATA */
+	if (has_range) {
+		if (send_wqe_range_data)
+			memcpy(&gta_wqe_data1, send_wqe_range_data, sizeof(gta_wqe_data1));
+		else
+			mlx5dr_send_wqe_set_tag(&gta_wqe_data1, send_wqe_range_tag, false);
+
+		gta_wqe_data1.rsvd1_definer = htobe32(send_attr->range_definer_id << 8);
+		attr.gta_data_1 = (uint8_t *)&gta_wqe_data1;
+	}
+
 	attr.pdn = pd_num;
 	attr.wqe_ctrl = (uint8_t *)&wqe_ctrl;
 	attr.gta_ctrl = (uint8_t *)&gta_wqe_ctrl;
@@ -336,6 +351,8 @@ void mlx5dr_send_stes_fw(struct mlx5dr_send_engine *queue,
 					 ste_attr->wqe_ctrl,
 					 ste_attr->wqe_data,
 					 ste_attr->wqe_tag,
+					 ste_attr->range_wqe_data,
+					 ste_attr->range_wqe_tag,
 					 ste_attr->wqe_tag_is_jumbo,
 					 ste_attr->gta_opcode);
 		if (ret)
@@ -350,6 +367,8 @@ void mlx5dr_send_stes_fw(struct mlx5dr_send_engine *queue,
 					 ste_attr->wqe_ctrl,
 					 ste_attr->wqe_data,
 					 ste_attr->wqe_tag,
+					 ste_attr->range_wqe_data,
+					 ste_attr->range_wqe_tag,
 					 ste_attr->wqe_tag_is_jumbo,
 					 ste_attr->gta_opcode);
 		if (ret)

diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index 1e845b1c7a..47bb66b3c7 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -161,6 +161,7 @@ struct mlx5dr_send_engine_post_attr {
 	uint8_t notify_hw;
 	uint8_t fence;
 	uint8_t match_definer_id;
+	uint8_t range_definer_id;
 	size_t len;
 	struct mlx5dr_rule *rule;
 	uint32_t id;
@@ -182,8 +183,10 @@ struct mlx5dr_send_ste_attr {
 	uint32_t direct_index;
 	struct mlx5dr_send_engine_post_attr send_attr;
 	struct mlx5dr_rule_match_tag *wqe_tag;
+	struct mlx5dr_rule_match_tag *range_wqe_tag;
 	struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl;
 	struct mlx5dr_wqe_gta_data_seg_ste *wqe_data;
+	struct mlx5dr_wqe_gta_data_seg_ste *range_wqe_data;
 };
 
 /**

From patchwork Tue Jan 31 09:33:37 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122731
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker <valex@nvidia.com>
Subject: [v1 08/16] net/mlx5/hws: move matcher size check to function
Date: Tue, 31 Jan 2023 11:33:37 +0200
Message-ID: <20230131093346.1261066-9-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
SFS:(13230025)(4636009)(376002)(136003)(39860400002)(396003)(346002)(451199018)(36840700001)(46966006)(40470700004)(70206006)(426003)(336012)(82310400005)(83380400001)(47076005)(40460700003)(86362001)(36756003)(70586007)(8676002)(82740400003)(36860700001)(4326008)(356005)(40480700001)(55016003)(2906002)(41300700001)(7636003)(7696005)(478600001)(110136005)(54906003)(6636002)(6666004)(1076003)(316002)(107886003)(8936002)(5660300002)(2616005)(26005)(6286002)(16526019)(186003); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jan 2023 09:34:46.3910 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 981961ae-02d3-48d8-0307-08db036e6434 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.161]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT023.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4125 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This check can be later on reused for other places, it will look better in a function Signed-off-by: Alex Vesker --- drivers/net/mlx5/hws/mlx5dr_matcher.c | 43 +++++++++++++++++---------- 1 file changed, 27 insertions(+), 16 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c index 101a12d361..b8db0a27ae 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.c +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -565,6 +565,32 @@ static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher, mlx5dr_pool_chunk_free(ste_pool, ste); } +static int 
+mlx5dr_matcher_check_attr_sz(struct mlx5dr_cmd_query_caps *caps,
+			     struct mlx5dr_matcher_attr *attr)
+{
+	if (attr->table.sz_col_log > caps->rtc_log_depth_max) {
+		DR_LOG(ERR, "Matcher depth exceeds limit %d", caps->rtc_log_depth_max);
+		goto not_supported;
+	}
+
+	if (attr->table.sz_col_log + attr->table.sz_row_log > caps->ste_alloc_log_max) {
+		DR_LOG(ERR, "Total matcher size exceeds limit %d", caps->ste_alloc_log_max);
+		goto not_supported;
+	}
+
+	if (attr->table.sz_col_log + attr->table.sz_row_log < caps->ste_alloc_log_gran) {
+		DR_LOG(ERR, "Total matcher size below limit %d", caps->ste_alloc_log_gran);
+		goto not_supported;
+	}
+
+	return 0;
+
+not_supported:
+	rte_errno = EOPNOTSUPP;
+	return rte_errno;
+}
+
 static void mlx5dr_matcher_set_pool_attr(struct mlx5dr_pool_attr *attr,
 					 struct mlx5dr_matcher *matcher)
 {
@@ -840,22 +866,7 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps,
 	    attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH)
 		attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log);

-	if (attr->table.sz_col_log > caps->rtc_log_depth_max) {
-		DR_LOG(ERR, "Matcher depth exceeds limit %d", caps->rtc_log_depth_max);
-		goto not_supported;
-	}
-
-	if (attr->table.sz_col_log + attr->table.sz_row_log > caps->ste_alloc_log_max) {
-		DR_LOG(ERR, "Total matcher size exceeds limit %d", caps->ste_alloc_log_max);
-		goto not_supported;
-	}
-
-	if (attr->table.sz_col_log + attr->table.sz_row_log < caps->ste_alloc_log_gran) {
-		DR_LOG(ERR, "Total matcher size below limit %d", caps->ste_alloc_log_gran);
-		goto not_supported;
-	}
-
-	return 0;
+	return mlx5dr_matcher_check_attr_sz(caps, attr);

 not_supported:
 	rte_errno = EOPNOTSUPP;

From patchwork Tue Jan 31 09:33:38 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122735
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
Subject: [v1 09/16] net/mlx5/hws: support range match
Date: Tue, 31 Jan 2023 11:33:38 +0200
Message-ID: <20230131093346.1261066-10-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>

Support range matching over selected items; range matching is not supported over all the items. A range match is described using:

	item->last.field - maximum value
	item->mask.field - bitmask
	item->spec.field - minimum value

When an item is processed, if both its last and mask fields are non-zero, range matching is performed over these fields. There are two field setters: field copy (fc) and field copy range (fcr).
Signed-off-by: Alex Vesker
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 73 +++++++++++++++++++++++++--
 drivers/net/mlx5/hws/mlx5dr_definer.h |  5 +-
 2 files changed, 72 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6b98eb8c96..c268f94ad3 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -123,6 +123,7 @@ struct mlx5dr_definer_conv_data {
 	X(SET, ipv4_next_proto, v->next_proto_id, rte_ipv4_hdr) \
 	X(SET, ipv4_version, STE_IPV4, rte_ipv4_hdr) \
 	X(SET_BE16, ipv4_frag, v->fragment_offset, rte_ipv4_hdr) \
+	X(SET_BE16, ipv4_len, v->total_length, rte_ipv4_hdr) \
 	X(SET_BE16, ipv6_payload_len, v->hdr.payload_len, rte_flow_item_ipv6) \
 	X(SET, ipv6_proto, v->hdr.proto, rte_flow_item_ipv6) \
 	X(SET, ipv6_hop_limits, v->hdr.hop_limits, rte_flow_item_ipv6) \
@@ -516,6 +517,7 @@ mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd,
 			      int item_idx)
 {
 	const struct rte_ipv4_hdr *m = item->mask;
+	const struct rte_ipv4_hdr *l = item->last;
 	struct mlx5dr_definer_fc *fc;
 	bool inner = cd->tunnel;
@@ -533,8 +535,8 @@ mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;

-	if (m->total_length || m->packet_id ||
-	    m->hdr_checksum) {
+	if (m->packet_id || m->hdr_checksum ||
+	    (l && (l->next_proto_id || l->type_of_service))) {
 		rte_errno = ENOTSUP;
 		return rte_errno;
 	}
@@ -553,9 +555,18 @@ mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd,
 		DR_CALC_SET(fc, eth_l3, protocol_next_header, inner);
 	}

+	if (m->total_length) {
+		fc = &cd->fc[DR_CALC_FNAME(IP_LEN, inner)];
+		fc->item_idx = item_idx;
+		fc->is_range = l && l->total_length;
+		fc->tag_set = &mlx5dr_definer_ipv4_len_set;
+		DR_CALC_SET(fc, eth_l3, ipv4_total_length, inner);
+	}
+
 	if (m->dst_addr) {
 		fc = &cd->fc[DR_CALC_FNAME(IPV4_DST, inner)];
 		fc->item_idx = item_idx;
+		fc->is_range = l && l->dst_addr;
 		fc->tag_set = &mlx5dr_definer_ipv4_dst_addr_set;
 		DR_CALC_SET(fc, ipv4_src_dest, destination_address, inner);
 	}
@@ -563,6 +574,7 @@ mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd,
 	if (m->src_addr) {
 		fc = &cd->fc[DR_CALC_FNAME(IPV4_SRC, inner)];
 		fc->item_idx = item_idx;
+		fc->is_range = l && l->src_addr;
 		fc->tag_set = &mlx5dr_definer_ipv4_src_addr_set;
 		DR_CALC_SET(fc, ipv4_src_dest, source_address, inner);
 	}
@@ -570,6 +582,7 @@ mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd,
 	if (m->ihl) {
 		fc = &cd->fc[DR_CALC_FNAME(IPV4_IHL, inner)];
 		fc->item_idx = item_idx;
+		fc->is_range = l && l->ihl;
 		fc->tag_set = &mlx5dr_definer_ipv4_ihl_set;
 		DR_CALC_SET(fc, eth_l3, ihl, inner);
 	}
@@ -577,6 +590,7 @@ mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd,
 	if (m->time_to_live) {
 		fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)];
 		fc->item_idx = item_idx;
+		fc->is_range = l && l->time_to_live;
 		fc->tag_set = &mlx5dr_definer_ipv4_time_to_live_set;
 		DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner);
 	}
@@ -597,6 +611,7 @@ mlx5dr_definer_conv_item_ipv6(struct mlx5dr_definer_conv_data *cd,
 			      int item_idx)
 {
 	const struct rte_flow_item_ipv6 *m = item->mask;
+	const struct rte_flow_item_ipv6 *l = item->last;
 	struct mlx5dr_definer_fc *fc;
 	bool inner = cd->tunnel;
@@ -616,7 +631,10 @@
 	if (m->has_hop_ext || m->has_route_ext || m->has_auth_ext ||
 	    m->has_esp_ext || m->has_dest_ext || m->has_mobil_ext ||
-	    m->has_hip_ext || m->has_shim6_ext) {
+	    m->has_hip_ext || m->has_shim6_ext ||
+	    (l && (l->has_frag_ext || l->hdr.vtc_flow || l->hdr.proto ||
+		   !is_mem_zero(l->hdr.src_addr, 16) ||
+		   !is_mem_zero(l->hdr.dst_addr, 16)))) {
 		rte_errno = ENOTSUP;
 		return rte_errno;
 	}
@@ -643,8 +661,9 @@
 	}

 	if (m->hdr.payload_len) {
-		fc = &cd->fc[DR_CALC_FNAME(IPV6_PAYLOAD_LEN, inner)];
+		fc = &cd->fc[DR_CALC_FNAME(IP_LEN, inner)];
 		fc->item_idx = item_idx;
+		fc->is_range = l && l->hdr.payload_len;
 		fc->tag_set = &mlx5dr_definer_ipv6_payload_len_set;
 		DR_CALC_SET(fc, eth_l3, ipv6_payload_length, inner);
 	}
@@ -659,6 +678,7 @@ mlx5dr_definer_conv_item_ipv6(struct mlx5dr_definer_conv_data *cd,
 	if (m->hdr.hop_limits) {
 		fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)];
 		fc->item_idx = item_idx;
+		fc->is_range = l && l->hdr.hop_limits;
 		fc->tag_set = &mlx5dr_definer_ipv6_hop_limits_set;
 		DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner);
 	}
@@ -728,6 +748,7 @@ mlx5dr_definer_conv_item_udp(struct mlx5dr_definer_conv_data *cd,
 			     int item_idx)
 {
 	const struct rte_flow_item_udp *m = item->mask;
+	const struct rte_flow_item_udp *l = item->last;
 	struct mlx5dr_definer_fc *fc;
 	bool inner = cd->tunnel;
@@ -751,6 +772,7 @@ mlx5dr_definer_conv_item_udp(struct mlx5dr_definer_conv_data *cd,
 	if (m->hdr.src_port) {
 		fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)];
 		fc->item_idx = item_idx;
+		fc->is_range = l && l->hdr.src_port;
 		fc->tag_set = &mlx5dr_definer_udp_src_port_set;
 		DR_CALC_SET(fc, eth_l4, source_port, inner);
 	}
@@ -758,6 +780,7 @@ mlx5dr_definer_conv_item_udp(struct mlx5dr_definer_conv_data *cd,
 	if (m->hdr.dst_port) {
 		fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)];
 		fc->item_idx = item_idx;
+		fc->is_range = l && l->hdr.dst_port;
 		fc->tag_set = &mlx5dr_definer_udp_dst_port_set;
 		DR_CALC_SET(fc, eth_l4, destination_port, inner);
 	}
@@ -771,6 +794,7 @@ mlx5dr_definer_conv_item_tcp(struct mlx5dr_definer_conv_data *cd,
 			     int item_idx)
 {
 	const struct rte_flow_item_tcp *m = item->mask;
+	const struct rte_flow_item_tcp *l = item->last;
 	struct mlx5dr_definer_fc *fc;
 	bool inner = cd->tunnel;
@@ -786,9 +810,16 @@ mlx5dr_definer_conv_item_tcp(struct mlx5dr_definer_conv_data *cd,
 	if (!m)
 		return 0;

+	if (m->hdr.sent_seq || m->hdr.recv_ack || m->hdr.data_off ||
+	    m->hdr.rx_win || m->hdr.cksum || m->hdr.tcp_urp) {
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
 	if (m->hdr.tcp_flags) {
 		fc = &cd->fc[DR_CALC_FNAME(TCP_FLAGS, inner)];
 		fc->item_idx = item_idx;
+		fc->is_range = l && l->hdr.tcp_flags;
 		fc->tag_set = &mlx5dr_definer_tcp_flags_set;
 		DR_CALC_SET(fc, eth_l4, tcp_flags, inner);
 	}
@@ -796,6 +827,7 @@ mlx5dr_definer_conv_item_tcp(struct mlx5dr_definer_conv_data *cd,
 	if (m->hdr.src_port) {
 		fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)];
 		fc->item_idx = item_idx;
+		fc->is_range = l && l->hdr.src_port;
 		fc->tag_set = &mlx5dr_definer_tcp_src_port_set;
 		DR_CALC_SET(fc, eth_l4, source_port, inner);
 	}
@@ -803,6 +835,7 @@ mlx5dr_definer_conv_item_tcp(struct mlx5dr_definer_conv_data *cd,
 	if (m->hdr.dst_port) {
 		fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)];
 		fc->item_idx = item_idx;
+		fc->is_range = l && l->hdr.dst_port;
 		fc->tag_set = &mlx5dr_definer_tcp_dst_port_set;
 		DR_CALC_SET(fc, eth_l4, destination_port, inner);
 	}
@@ -1108,6 +1141,7 @@ mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd,
 {
 	const struct rte_flow_item_tag *m = item->mask;
 	const struct rte_flow_item_tag *v = item->spec;
+	const struct rte_flow_item_tag *l = item->last;
 	struct mlx5dr_definer_fc *fc;
 	int reg;
@@ -1130,7 +1164,9 @@ mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd,
 		return rte_errno;

 	fc->item_idx = item_idx;
+	fc->is_range = l && l->index;
 	fc->tag_set = &mlx5dr_definer_tag_set;
+
 	return 0;
 }
@@ -1140,6 +1176,7 @@ mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd,
 				  int item_idx)
 {
 	const struct rte_flow_item_meta *m = item->mask;
+	const struct rte_flow_item_meta *l = item->last;
 	struct mlx5dr_definer_fc *fc;
 	int reg;
@@ -1158,7 +1195,9 @@ mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd,
 		return rte_errno;

 	fc->item_idx = item_idx;
+	fc->is_range = l && l->data;
 	fc->tag_set = &mlx5dr_definer_metadata_set;
+
 	return 0;
 }
@@ -1465,6 +1504,28 @@ mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd,
 	return 0;
 }

+static int
+mlx5dr_definer_check_item_range_supp(struct rte_flow_item *item)
+{
+	if (!item->last)
+		return 0;
+
+	switch ((int)item->type) {
+	case RTE_FLOW_ITEM_TYPE_IPV4:
+	case RTE_FLOW_ITEM_TYPE_IPV6:
+	case RTE_FLOW_ITEM_TYPE_UDP:
+	case RTE_FLOW_ITEM_TYPE_TCP:
+	case RTE_FLOW_ITEM_TYPE_TAG:
+	case RTE_FLOW_ITEM_TYPE_META:
+	case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
+		return 0;
+	default:
+		DR_LOG(ERR, "Range not supported over item type %d", item->type);
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+}
+
 static int
 mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 				struct mlx5dr_match_template *mt,
@@ -1487,6 +1548,10 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 	for (i = 0; items->type != RTE_FLOW_ITEM_TYPE_END; i++, items++) {
 		cd.tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);

+		ret = mlx5dr_definer_check_item_range_supp(items);
+		if (ret)
+			return ret;
+
 		switch ((int)items->type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			ret = mlx5dr_definer_conv_item_eth(&cd, items, i);
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index d52c6b0627..bab4baae4a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -38,8 +38,8 @@ enum mlx5dr_definer_fname {
 	MLX5DR_DEFINER_FNAME_IP_VERSION_I,
 	MLX5DR_DEFINER_FNAME_IP_FRAG_O,
 	MLX5DR_DEFINER_FNAME_IP_FRAG_I,
-	MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_O,
-	MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_I,
+	MLX5DR_DEFINER_FNAME_IP_LEN_O,
+	MLX5DR_DEFINER_FNAME_IP_LEN_I,
 	MLX5DR_DEFINER_FNAME_IP_TOS_O,
 	MLX5DR_DEFINER_FNAME_IP_TOS_I,
 	MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_O,
@@ -116,6 +116,7 @@ enum mlx5dr_definer_type {

 struct mlx5dr_definer_fc {
 	uint8_t item_idx;
+	uint8_t is_range;
 	uint32_t byte_off;
 	int bit_off;
 	uint32_t bit_mask;

From patchwork Tue Jan 31 09:33:39 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122730
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
Subject: [v1 10/16] net/mlx5/hws: redesign definer create
Date: Tue, 31 Jan 2023 11:33:39 +0200
Message-ID: <20230131093346.1261066-11-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>

Until now, definer creation and deletion used get and put functions. The get function calculated the definer field copy (fc), header layout (hl) and definer layout internally, without taking into account other match templates used on the same matcher. This logic had to be split to allow sharing the hl over multiple definers: first calculate the shared hl, then create the definers based on that shared layout. Once all definers use the same layout, it is possible to hash over the shared fields, since their location is the same across all of the definers.
Signed-off-by: Alex Vesker --- drivers/net/mlx5/hws/mlx5dr_definer.c | 301 ++++++++++++++++--------- drivers/net/mlx5/hws/mlx5dr_definer.h | 11 +- drivers/net/mlx5/hws/mlx5dr_internal.h | 2 +- drivers/net/mlx5/hws/mlx5dr_matcher.c | 61 ++--- drivers/net/mlx5/hws/mlx5dr_matcher.h | 16 +- 5 files changed, 230 insertions(+), 161 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c index c268f94ad3..9560f8a0af 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.c +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -104,7 +104,6 @@ struct mlx5dr_definer_conv_data { struct mlx5dr_definer_fc *fc; uint8_t relaxed; uint8_t tunnel; - uint8_t *hl; }; /* Xmacro used to create generic item setter from items */ @@ -1504,6 +1503,36 @@ mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd, return 0; } +static int +mlx5dr_definer_mt_set_fc(struct mlx5dr_match_template *mt, + struct mlx5dr_definer_fc *fc, + uint8_t *hl) +{ + uint32_t fc_sz = 0; + int i; + + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) + if (fc[i].tag_set) + fc_sz++; + + mt->fc = simple_calloc(fc_sz, sizeof(*mt->fc)); + if (!mt->fc) { + rte_errno = ENOMEM; + return rte_errno; + } + + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (!fc[i].tag_set) + continue; + + fc[i].fname = i; + memcpy(&mt->fc[mt->fc_sz++], &fc[i], sizeof(*mt->fc)); + DR_SET(hl, -1, fc[i].byte_off, fc[i].bit_off, fc[i].bit_mask); + } + + return 0; +} + static int mlx5dr_definer_check_item_range_supp(struct rte_flow_item *item) { @@ -1535,12 +1564,9 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx, struct mlx5dr_definer_conv_data cd = {0}; struct rte_flow_item *items = mt->items; uint64_t item_flags = 0; - uint32_t total = 0; - int i, j; - int ret; + int i, ret; cd.fc = fc; - cd.hl = hl; cd.caps = ctx->caps; cd.relaxed = mt->flags & MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH; @@ -1660,29 +1686,11 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx, 
mt->item_flags = item_flags; - /* Fill in headers layout and calculate total number of fields */ - for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { - if (fc[i].tag_set) { - total++; - DR_SET(hl, -1, fc[i].byte_off, fc[i].bit_off, fc[i].bit_mask); - } - } - - mt->fc_sz = total; - mt->fc = simple_calloc(total, sizeof(*mt->fc)); - if (!mt->fc) { - DR_LOG(ERR, "Failed to allocate field copy array"); - rte_errno = ENOMEM; - return rte_errno; - } - - j = 0; - for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { - if (fc[i].tag_set) { - memcpy(&mt->fc[j], &fc[i], sizeof(*mt->fc)); - mt->fc[j].fname = i; - j++; - } + /* Fill in headers layout and allocate fc array on mt */ + ret = mlx5dr_definer_mt_set_fc(mt, fc, hl); + if (ret) { + DR_LOG(ERR, "Failed to set field copy to match template"); + return ret; } return 0; @@ -1837,8 +1845,8 @@ mlx5dr_definer_best_hl_fit_recu(struct mlx5dr_definer_sel_ctrl *ctrl, } static void -mlx5dr_definer_apply_sel_ctrl(struct mlx5dr_definer_sel_ctrl *ctrl, - struct mlx5dr_definer *definer) +mlx5dr_definer_copy_sel_ctrl(struct mlx5dr_definer_sel_ctrl *ctrl, + struct mlx5dr_definer *definer) { memcpy(definer->byte_selector, ctrl->byte_selector, ctrl->allowed_bytes); memcpy(definer->dw_selector, ctrl->full_dw_selector, ctrl->allowed_full_dw); @@ -1848,7 +1856,7 @@ mlx5dr_definer_apply_sel_ctrl(struct mlx5dr_definer_sel_ctrl *ctrl, static int mlx5dr_definer_find_best_hl_fit(struct mlx5dr_context *ctx, - struct mlx5dr_match_template *mt, + struct mlx5dr_definer *definer, uint8_t *hl) { struct mlx5dr_definer_sel_ctrl ctrl = {0}; @@ -1861,8 +1869,8 @@ mlx5dr_definer_find_best_hl_fit(struct mlx5dr_context *ctx, found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); if (found) { - mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); - mt->definer->type = MLX5DR_DEFINER_TYPE_MATCH; + mlx5dr_definer_copy_sel_ctrl(&ctrl, definer); + definer->type = MLX5DR_DEFINER_TYPE_MATCH; return 0; } @@ -1875,8 +1883,8 @@ mlx5dr_definer_find_best_hl_fit(struct 
mlx5dr_context *ctx, found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); if (found) { - mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); - mt->definer->type = MLX5DR_DEFINER_TYPE_JUMBO; + mlx5dr_definer_copy_sel_ctrl(&ctrl, definer); + definer->type = MLX5DR_DEFINER_TYPE_JUMBO; return 0; } @@ -1920,114 +1928,189 @@ int mlx5dr_definer_get_id(struct mlx5dr_definer *definer) return definer->obj->id; } -int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, - struct mlx5dr_definer *definer_b) +static int +mlx5dr_definer_calc_layout(struct mlx5dr_matcher *matcher, + struct mlx5dr_definer *match_definer) { - int i; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_match_template *mt = matcher->mt; + uint8_t *match_hl, *hl_buff; + int i, ret; - if (definer_a->type != definer_b->type) - return 1; + /* Union header-layout (hl) is used for creating a single definer + * field layout used with different bitmasks for hash and match. + */ + hl_buff = simple_calloc(1, MLX5_ST_SZ_BYTES(definer_hl)); + if (!hl_buff) { + DR_LOG(ERR, "Failed to allocate memory for header layout"); + rte_errno = ENOMEM; + return rte_errno; + } - for (i = 0; i < BYTE_SELECTORS; i++) - if (definer_a->byte_selector[i] != definer_b->byte_selector[i]) - return 1; + match_hl = hl_buff; - for (i = 0; i < DW_SELECTORS; i++) - if (definer_a->dw_selector[i] != definer_b->dw_selector[i]) - return 1; + /* Convert all mt items to header layout (hl) + * and allocate the match field copy array (fc). 
+ */ + for (i = 0; i < matcher->num_of_mt; i++) { + ret = mlx5dr_definer_conv_items_to_hl(ctx, &mt[i], match_hl); + if (ret) { + DR_LOG(ERR, "Failed to convert items to header layout"); + goto free_fc; + } + } - for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) - if (definer_a->mask.jumbo[i] != definer_b->mask.jumbo[i]) - return 1; + /* Find the match definer layout for header layout match union */ + ret = mlx5dr_definer_find_best_hl_fit(ctx, match_definer, match_hl); + if (ret) { + DR_LOG(ERR, "Failed to create match definer from header layout"); + goto free_fc; + } + simple_free(hl_buff); return 0; + +free_fc: + for (i = 0; i < matcher->num_of_mt; i++) + if (mt[i].fc) + simple_free(mt[i].fc); + + simple_free(hl_buff); + return rte_errno; } -int mlx5dr_definer_get(struct mlx5dr_context *ctx, - struct mlx5dr_match_template *mt) +static struct mlx5dr_definer * +mlx5dr_definer_alloc(struct ibv_context *ibv_ctx, + struct mlx5dr_definer_fc *fc, + int fc_sz, + struct rte_flow_item *items, + struct mlx5dr_definer *layout) { struct mlx5dr_cmd_definer_create_attr def_attr = {0}; - struct ibv_context *ibv_ctx = ctx->ibv_ctx; - uint8_t *hl; + struct mlx5dr_definer *definer; int ret; - if (mt->refcount++) - return 0; - - mt->definer = simple_calloc(1, sizeof(*mt->definer)); - if (!mt->definer) { + definer = simple_calloc(1, sizeof(*definer)); + if (!definer) { DR_LOG(ERR, "Failed to allocate memory for definer"); rte_errno = ENOMEM; - goto dec_refcount; - } - - /* Header layout (hl) holds full bit mask per field */ - hl = simple_calloc(1, MLX5_ST_SZ_BYTES(definer_hl)); - if (!hl) { - DR_LOG(ERR, "Failed to allocate memory for header layout"); - rte_errno = ENOMEM; - goto free_definer; + return NULL; } - /* Convert items to hl and allocate the field copy array (fc) */ - ret = mlx5dr_definer_conv_items_to_hl(ctx, mt, hl); - if (ret) { - DR_LOG(ERR, "Failed to convert items to hl"); - goto free_hl; - } + memcpy(definer, layout, sizeof(*definer)); - /* Find the definer for given header 
layout */ - ret = mlx5dr_definer_find_best_hl_fit(ctx, mt, hl); - if (ret) { - DR_LOG(ERR, "Failed to create definer from header layout"); - goto free_field_copy; - } - - /* Align field copy array based on the new definer */ - ret = mlx5dr_definer_fc_bind(mt->definer, - mt->fc, - mt->fc_sz); + /* Align field copy array based on given layout */ + ret = mlx5dr_definer_fc_bind(definer, fc, fc_sz); if (ret) { DR_LOG(ERR, "Failed to bind field copy to definer"); - goto free_field_copy; + goto free_definer; } /* Create the tag mask used for definer creation */ - mlx5dr_definer_create_tag_mask(mt->items, - mt->fc, - mt->fc_sz, - mt->definer->mask.jumbo); + mlx5dr_definer_create_tag_mask(items, fc, fc_sz, definer->mask.jumbo); /* Create definer based on the bitmask tag */ - def_attr.match_mask = mt->definer->mask.jumbo; - def_attr.dw_selector = mt->definer->dw_selector; - def_attr.byte_selector = mt->definer->byte_selector; - mt->definer->obj = mlx5dr_cmd_definer_create(ibv_ctx, &def_attr); - if (!mt->definer->obj) - goto free_field_copy; + def_attr.match_mask = definer->mask.jumbo; + def_attr.dw_selector = layout->dw_selector; + def_attr.byte_selector = layout->byte_selector; - simple_free(hl); + definer->obj = mlx5dr_cmd_definer_create(ibv_ctx, &def_attr); + if (!definer->obj) + goto free_definer; - return 0; + return definer; -free_field_copy: - simple_free(mt->fc); -free_hl: - simple_free(hl); free_definer: - simple_free(mt->definer); -dec_refcount: - mt->refcount--; + simple_free(definer); + return NULL; +} + +static void +mlx5dr_definer_free(struct mlx5dr_definer *definer) +{ + mlx5dr_cmd_destroy_obj(definer->obj); + simple_free(definer); +} + +static int +mlx5dr_definer_matcher_match_init(struct mlx5dr_context *ctx, + struct mlx5dr_matcher *matcher, + struct mlx5dr_definer *match_layout) +{ + struct mlx5dr_match_template *mt = matcher->mt; + int i; + + /* Create mendatory match definer */ + for (i = 0; i < matcher->num_of_mt; i++) { + mt[i].definer = 
mlx5dr_definer_alloc(ctx->ibv_ctx, + mt[i].fc, + mt[i].fc_sz, + mt[i].items, + match_layout); + if (!mt[i].definer) { + DR_LOG(ERR, "Failed to create match definer"); + goto free_definers; + } + } + return 0; + +free_definers: + while (i--) + mlx5dr_definer_free(mt[i].definer); return rte_errno; } -void mlx5dr_definer_put(struct mlx5dr_match_template *mt) +static void +mlx5dr_definer_matcher_match_uninit(struct mlx5dr_matcher *matcher) { - if (--mt->refcount) + int i; + + for (i = 0; i < matcher->num_of_mt; i++) + mlx5dr_definer_free(matcher->mt[i].definer); +} + +int mlx5dr_definer_matcher_init(struct mlx5dr_context *ctx, + struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_definer match_layout = {0}; + int ret, i; + + if (matcher->flags & MLX5DR_MATCHER_FLAGS_COLISION) + return 0; + + /* Calculate header layout based on matcher items */ + ret = mlx5dr_definer_calc_layout(matcher, &match_layout); + if (ret) { + DR_LOG(ERR, "Failed to calculate matcher definer layout"); + return ret; + } + + /* Calculate definers needed for exact match */ + ret = mlx5dr_definer_matcher_match_init(ctx, matcher, &match_layout); + if (ret) { + DR_LOG(ERR, "Failed to init match definers"); + goto free_fc; + } + + return 0; + +free_fc: + for (i = 0; i < matcher->num_of_mt; i++) + simple_free(matcher->mt[i].fc); + + return ret; +} + +void mlx5dr_definer_matcher_uninit(struct mlx5dr_matcher *matcher) +{ + int i; + + if (matcher->flags & MLX5DR_MATCHER_FLAGS_COLISION) return; - simple_free(mt->fc); - mlx5dr_cmd_destroy_obj(mt->definer->obj); - simple_free(mt->definer); + mlx5dr_definer_matcher_match_uninit(matcher); + + for (i = 0; i < matcher->num_of_mt; i++) + simple_free(matcher->mt[i].fc); } diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h index bab4baae4a..a14a08838a 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.h +++ b/drivers/net/mlx5/hws/mlx5dr_definer.h @@ -573,14 +573,11 @@ void mlx5dr_definer_create_tag(const struct rte_flow_item 
*items, uint32_t fc_sz, uint8_t *tag); -int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, - struct mlx5dr_definer *definer_b); - int mlx5dr_definer_get_id(struct mlx5dr_definer *definer); -int mlx5dr_definer_get(struct mlx5dr_context *ctx, - struct mlx5dr_match_template *mt); +int mlx5dr_definer_matcher_init(struct mlx5dr_context *ctx, + struct mlx5dr_matcher *matcher); -void mlx5dr_definer_put(struct mlx5dr_match_template *mt); +void mlx5dr_definer_matcher_uninit(struct mlx5dr_matcher *matcher); -#endif /* MLX5DR_DEFINER_H_ */ +#endif diff --git a/drivers/net/mlx5/hws/mlx5dr_internal.h b/drivers/net/mlx5/hws/mlx5dr_internal.h index faad2bbd0f..c3c077667d 100644 --- a/drivers/net/mlx5/hws/mlx5dr_internal.h +++ b/drivers/net/mlx5/hws/mlx5dr_internal.h @@ -30,12 +30,12 @@ #include "mlx5dr_pool.h" #include "mlx5dr_context.h" #include "mlx5dr_table.h" -#include "mlx5dr_matcher.h" #include "mlx5dr_send.h" #include "mlx5dr_rule.h" #include "mlx5dr_cmd.h" #include "mlx5dr_action.h" #include "mlx5dr_definer.h" +#include "mlx5dr_matcher.h" #include "mlx5dr_debug.h" #include "mlx5dr_pat_arg.h" diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c index b8db0a27ae..7e332052b2 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.c +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -406,6 +406,7 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, { struct mlx5dr_matcher_attr *attr = &matcher->attr; struct mlx5dr_cmd_rtc_create_attr rtc_attr = {0}; + struct mlx5dr_match_template *mt = matcher->mt; struct mlx5dr_context *ctx = matcher->tbl->ctx; struct mlx5dr_action_default_stc *default_stc; struct mlx5dr_table *tbl = matcher->tbl; @@ -413,8 +414,6 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, struct mlx5dr_pool *ste_pool, *stc_pool; struct mlx5dr_devx_obj *devx_obj; struct mlx5dr_pool_chunk *ste; - uint8_t first_definer_id; - bool is_jumbo; int ret; switch (rtc_type) { @@ -424,19 +423,17 @@ 
static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, ste_pool = matcher->match_ste.pool; ste = &matcher->match_ste.ste; ste->order = attr->table.sz_col_log + attr->table.sz_row_log; + rtc_attr.log_size = attr->table.sz_row_log; rtc_attr.log_depth = attr->table.sz_col_log; + rtc_attr.is_frst_jumbo = mlx5dr_matcher_mt_is_jumbo(mt); rtc_attr.miss_ft_id = matcher->end_ft->id; - is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer); - first_definer_id = mlx5dr_definer_get_id(matcher->mt->definer); - if (attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH) { /* The usual Hash Table */ rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; /* The first match template is used since all share the same definer */ - rtc_attr.match_definer_0 = first_definer_id; - rtc_attr.is_frst_jumbo = is_jumbo; + rtc_attr.match_definer_0 = mlx5dr_definer_get_id(mt->definer); } else if (attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_INDEX) { rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; rtc_attr.num_hash_definer = 1; @@ -444,8 +441,7 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, if (attr->distribute_mode == MLX5DR_MATCHER_DISTRIBUTE_BY_HASH) { /* Hash Split Table */ rtc_attr.access_index_mode = MLX5_IFC_RTC_STE_ACCESS_MODE_BY_HASH; - rtc_attr.match_definer_0 = first_definer_id; - rtc_attr.is_frst_jumbo = is_jumbo; + rtc_attr.match_definer_0 = mlx5dr_definer_get_id(mt->definer); } else if (attr->distribute_mode == MLX5DR_MATCHER_DISTRIBUTE_BY_LINEAR) { /* Linear Lookup Table */ rtc_attr.access_index_mode = MLX5_IFC_RTC_STE_ACCESS_MODE_LINEAR; @@ -608,7 +604,7 @@ static void mlx5dr_matcher_set_pool_attr(struct mlx5dr_pool_attr *attr, static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) { - bool is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer); + bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(matcher->mt); struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; struct mlx5dr_table *tbl = matcher->tbl; 
struct mlx5dr_pool_attr pool_attr = {0}; @@ -703,34 +699,19 @@ static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) { struct mlx5dr_context *ctx = matcher->tbl->ctx; struct mlx5dr_pool_attr pool_attr = {0}; - int i, created = 0; - int ret = -1; - - for (i = 0; i < matcher->num_of_mt; i++) { - /* Get a definer for each match template */ - ret = mlx5dr_definer_get(ctx, &matcher->mt[i]); - if (ret) - goto definer_put; - - created++; - - /* Verify all templates produce the same definer */ - if (i == 0) - continue; + int ret; - ret = mlx5dr_definer_compare(matcher->mt[i].definer, - matcher->mt[i - 1].definer); - if (ret) { - DR_LOG(ERR, "Match templates cannot be used on the same matcher"); - rte_errno = ENOTSUP; - goto definer_put; - } + /* Calculate match definers */ + ret = mlx5dr_definer_matcher_init(ctx, matcher); + if (ret) { + DR_LOG(ERR, "Failed to set matcher templates with match definers"); + return ret; } /* Create an STE pool per matcher*/ + pool_attr.table_type = matcher->tbl->type; pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL; - pool_attr.table_type = matcher->tbl->type; pool_attr.alloc_log_sz = matcher->attr.table.sz_col_log + matcher->attr.table.sz_row_log; mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); @@ -738,26 +719,20 @@ static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) matcher->match_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); if (!matcher->match_ste.pool) { DR_LOG(ERR, "Failed to allocate matcher STE pool"); - goto definer_put; + goto uninit_match_definer; } return 0; -definer_put: - while (created--) - mlx5dr_definer_put(&matcher->mt[created]); - +uninit_match_definer: + mlx5dr_definer_matcher_uninit(matcher); return ret; } static void mlx5dr_matcher_unbind_mt(struct mlx5dr_matcher *matcher) { - int i; - - for (i = 0; i < matcher->num_of_mt; i++) - mlx5dr_definer_put(&matcher->mt[i]); - mlx5dr_pool_destroy(matcher->match_ste.pool); + 
mlx5dr_definer_matcher_uninit(matcher); } static int @@ -958,6 +933,8 @@ mlx5dr_matcher_create_col_matcher(struct mlx5dr_matcher *matcher) col_matcher->num_of_at = matcher->num_of_at; col_matcher->num_of_mt = matcher->num_of_mt; col_matcher->attr.priority = matcher->attr.priority; + col_matcher->flags = matcher->flags; + col_matcher->flags |= MLX5DR_MATCHER_FLAGS_COLISION; col_matcher->attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_HTABLE; col_matcher->attr.optimize_flow_src = matcher->attr.optimize_flow_src; col_matcher->attr.table.sz_row_log = matcher->attr.rule.num_log; diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h index b957f5ea4b..4bdb33b11f 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.h +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -22,15 +22,19 @@ /* Required depth of the main large table */ #define MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH 2 +enum mlx5dr_matcher_flags { + MLX5DR_MATCHER_FLAGS_COLISION = 1 << 0, +}; + struct mlx5dr_match_template { struct rte_flow_item *items; struct mlx5dr_definer *definer; + struct mlx5dr_definer *range_definer; struct mlx5dr_definer_fc *fc; - uint32_t fc_sz; + uint16_t fc_sz; uint64_t item_flags; uint8_t vport_item_id; enum mlx5dr_match_template_flags flags; - uint32_t refcount; }; struct mlx5dr_matcher_match_ste { @@ -59,6 +63,8 @@ struct mlx5dr_matcher { uint8_t num_of_mt; struct mlx5dr_action_template *at; uint8_t num_of_at; + /* enum mlx5dr_matcher_flags */ + uint8_t flags; struct mlx5dr_devx_obj *end_ft; struct mlx5dr_matcher *col_matcher; struct mlx5dr_matcher_match_ste match_ste; @@ -66,6 +72,12 @@ struct mlx5dr_matcher { LIST_ENTRY(mlx5dr_matcher) next; }; +static inline bool +mlx5dr_matcher_mt_is_jumbo(struct mlx5dr_match_template *mt) +{ + return mlx5dr_definer_is_jumbo(mt->definer); +} + int mlx5dr_matcher_conv_items_to_prm(uint64_t *match_buf, struct rte_flow_item *items, uint8_t *match_criteria, From patchwork Tue Jan 31 09:33:40 2023 Content-Type: text/plain; 
charset="utf-8"
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122733
From: Alex Vesker
To: , , , "Matan Azrad"
CC: ,
Subject: [v1 11/16] net/mlx5/hws: support partial hash
Date: Tue, 31 Jan 2023 11:33:40 +0200
Message-ID: <20230131093346.1261066-12-valex@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

Hash definers allow hashing over a subset of the fields that are used for matching. This allows combining match templates which were considered invalid until now. During matcher creation, the mlx5dr code processes the match templates and checks, based on the intersection of the definers' bitmasks, whether such a hash definer is needed. Since the current HW GTA implementation doesn't allow specifying separate match and hash definers, rule insertion is done using the FW GTA WQE command. Signed-off-by: Alex Vesker --- drivers/common/mlx5/mlx5_prm.h | 4 + drivers/net/mlx5/hws/mlx5dr_definer.c | 105 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_matcher.c | 66 +++++++++++++++- drivers/net/mlx5/hws/mlx5dr_matcher.h | 10 ++- 4 files changed, 181 insertions(+), 4 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index cf46296afb..cca2fb6af7 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -2112,6 +2112,10 @@ enum mlx5_ifc_cross_vhca_allowed_objects_types { MLX5_CROSS_VHCA_ALLOWED_OBJS_RTC = 1 << 0xa, }; +enum { + MLX5_GENERATE_WQE_TYPE_FLOW_UPDATE = 1 << 1, +}; + /* * HCA Capabilities 2 */ diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c index 9560f8a0af..260e6c5d1d 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.c +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -1928,6 +1928,27 @@ int mlx5dr_definer_get_id(struct mlx5dr_definer *definer) return definer->obj->id; } +static int +mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b) +{ + int i; + + for (i = 0; i < BYTE_SELECTORS; i++) + if (definer_a->byte_selector[i] != definer_b->byte_selector[i]) + return 1; + 
+ for (i = 0; i < DW_SELECTORS; i++) + if (definer_a->dw_selector[i] != definer_b->dw_selector[i]) + return 1; + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) + if (definer_a->mask.jumbo[i] != definer_b->mask.jumbo[i]) + return 1; + + return 0; +} + static int mlx5dr_definer_calc_layout(struct mlx5dr_matcher *matcher, struct mlx5dr_definer *match_definer) @@ -2070,6 +2091,80 @@ mlx5dr_definer_matcher_match_uninit(struct mlx5dr_matcher *matcher) mlx5dr_definer_free(matcher->mt[i].definer); } +static int +mlx5dr_definer_matcher_hash_init(struct mlx5dr_context *ctx, + struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_definer_create_attr def_attr = {0}; + struct mlx5dr_match_template *mt = matcher->mt; + struct ibv_context *ibv_ctx = ctx->ibv_ctx; + uint8_t *bit_mask; + int i, j; + + for (i = 1; i < matcher->num_of_mt; i++) + if (mlx5dr_definer_compare(mt[i].definer, mt[i - 1].definer)) + matcher->flags |= MLX5DR_MATCHER_FLAGS_HASH_DEFINER; + + if (!(matcher->flags & MLX5DR_MATCHER_FLAGS_HASH_DEFINER)) + return 0; + + /* Insert by index requires all MTs to use the same definer */ + if (matcher->attr.insert_mode == MLX5DR_MATCHER_INSERT_BY_INDEX) { + DR_LOG(ERR, "Insert by index not supported with MT combination"); + rte_errno = EOPNOTSUPP; + return rte_errno; + } + + matcher->hash_definer = simple_calloc(1, sizeof(*matcher->hash_definer)); + if (!matcher->hash_definer) { + DR_LOG(ERR, "Failed to allocate memory for hash definer"); + rte_errno = ENOMEM; + return rte_errno; + } + + /* Calculate intersection between all match templates bitmasks. + * We will use mt[0] as reference and intersect it with mt[1..n]. + * From this we will get: + * hash_definer.selectors = mt[0].selectors + * hash_definer.mask = mt[0].mask & mt[1].mask & ... 
& mt[n].mask + */ + + /* Use first definer which should also contain intersection fields */ + memcpy(matcher->hash_definer, mt->definer, sizeof(struct mlx5dr_definer)); + + /* Calculate intersection between first to all match templates bitmasks */ + for (i = 1; i < matcher->num_of_mt; i++) { + bit_mask = (uint8_t *)&mt[i].definer->mask; + for (j = 0; j < MLX5DR_JUMBO_TAG_SZ; j++) + ((uint8_t *)&matcher->hash_definer->mask)[j] &= bit_mask[j]; + } + + def_attr.match_mask = matcher->hash_definer->mask.jumbo; + def_attr.dw_selector = matcher->hash_definer->dw_selector; + def_attr.byte_selector = matcher->hash_definer->byte_selector; + matcher->hash_definer->obj = mlx5dr_cmd_definer_create(ibv_ctx, &def_attr); + if (!matcher->hash_definer->obj) { + DR_LOG(ERR, "Failed to create hash definer"); + goto free_hash_definer; + } + + return 0; + +free_hash_definer: + simple_free(matcher->hash_definer); + return rte_errno; +} + +static void +mlx5dr_definer_matcher_hash_uninit(struct mlx5dr_matcher *matcher) +{ + if (!matcher->hash_definer) + return; + + mlx5dr_cmd_destroy_obj(matcher->hash_definer->obj); + simple_free(matcher->hash_definer); +} + int mlx5dr_definer_matcher_init(struct mlx5dr_context *ctx, struct mlx5dr_matcher *matcher) { @@ -2093,8 +2188,17 @@ int mlx5dr_definer_matcher_init(struct mlx5dr_context *ctx, goto free_fc; } + /* Calculate partial hash definer */ + ret = mlx5dr_definer_matcher_hash_init(ctx, matcher); + if (ret) { + DR_LOG(ERR, "Failed to init hash definer"); + goto uninit_match_definer; + } + return 0; +uninit_match_definer: + mlx5dr_definer_matcher_match_uninit(matcher); free_fc: for (i = 0; i < matcher->num_of_mt; i++) simple_free(matcher->mt[i].fc); @@ -2109,6 +2213,7 @@ void mlx5dr_definer_matcher_uninit(struct mlx5dr_matcher *matcher) if (matcher->flags & MLX5DR_MATCHER_FLAGS_COLISION) return; + mlx5dr_definer_matcher_hash_uninit(matcher); mlx5dr_definer_matcher_match_uninit(matcher); for (i = 0; i < matcher->num_of_mt; i++) diff --git 
a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c index 7e332052b2..e860c274cf 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.c +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -337,6 +337,42 @@ static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher) return 0; } +static bool mlx5dr_matcher_supp_fw_wqe(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_query_caps *caps = matcher->tbl->ctx->caps; + + if (matcher->flags & MLX5DR_MATCHER_FLAGS_HASH_DEFINER) { + if (matcher->hash_definer->type == MLX5DR_DEFINER_TYPE_MATCH && + !IS_BIT_SET(caps->supp_ste_fromat_gen_wqe, MLX5_IFC_RTC_STE_FORMAT_8DW)) { + DR_LOG(ERR, "Gen WQE MATCH format not supported"); + return false; + } + + if (matcher->hash_definer->type == MLX5DR_DEFINER_TYPE_JUMBO) { + DR_LOG(ERR, "Gen WQE JUMBO format not supported"); + return false; + } + } + + if (matcher->attr.insert_mode != MLX5DR_MATCHER_INSERT_BY_HASH || + matcher->attr.distribute_mode != MLX5DR_MATCHER_DISTRIBUTE_BY_HASH) { + DR_LOG(ERR, "Gen WQE must be inserted and distributed by hash"); + return false; + } + + if (!(caps->supp_type_gen_wqe & MLX5_GENERATE_WQE_TYPE_FLOW_UPDATE)) { + DR_LOG(ERR, "Gen WQE command not supporting GTA"); + return false; + } + + if (!caps->rtc_max_hash_def_gen_wqe) { + DR_LOG(ERR, "Hash definer not supported"); + return false; + } + + return true; +} + static void mlx5dr_matcher_set_rtc_attr_sz(struct mlx5dr_matcher *matcher, struct mlx5dr_cmd_rtc_create_attr *rtc_attr, enum mlx5dr_matcher_rtc_type rtc_type, @@ -432,8 +468,16 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, if (attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH) { /* The usual Hash Table */ rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; - /* The first match template is used since all share the same definer */ - rtc_attr.match_definer_0 = mlx5dr_definer_get_id(mt->definer); + if (matcher->hash_definer) { + /* Specify definer_id_0 is used for hashing */ +
rtc_attr.fw_gen_wqe = true; + rtc_attr.num_hash_definer = 1; + rtc_attr.match_definer_0 = + mlx5dr_definer_get_id(matcher->hash_definer); + } else { + /* The first mt is used since all share the same definer */ + rtc_attr.match_definer_0 = mlx5dr_definer_get_id(mt->definer); + } } else if (attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_INDEX) { rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; rtc_attr.num_hash_definer = 1; @@ -640,6 +684,12 @@ static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) if (!matcher->action_ste.max_stes) return 0; + if (mlx5dr_matcher_req_fw_wqe(matcher)) { + DR_LOG(ERR, "FW extended matcher cannot be bound to complex at"); + rte_errno = ENOTSUP; + return rte_errno; + } + /* Allocate action STE mempool */ pool_attr.table_type = tbl->type; pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; @@ -701,13 +751,21 @@ static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) struct mlx5dr_pool_attr pool_attr = {0}; int ret; - /* Calculate match definers */ + /* Calculate match and hash definers */ ret = mlx5dr_definer_matcher_init(ctx, matcher); if (ret) { DR_LOG(ERR, "Failed to set matcher templates with match definers"); return ret; } + if (mlx5dr_matcher_req_fw_wqe(matcher) && + !mlx5dr_matcher_supp_fw_wqe(matcher)) { + DR_LOG(ERR, "Matcher requires FW WQE which is not supported"); + rte_errno = ENOTSUP; + ret = rte_errno; + goto uninit_match_definer; + } + /* Create an STE pool per matcher */ pool_attr.table_type = matcher->tbl->type; pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; @@ -719,6 +777,7 @@ static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) matcher->match_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); if (!matcher->match_ste.pool) { DR_LOG(ERR, "Failed to allocate matcher STE pool"); + ret = ENOTSUP; goto uninit_match_definer; } @@ -932,6 +991,7 @@ mlx5dr_matcher_create_col_matcher(struct mlx5dr_matcher *matcher) col_matcher->at = matcher->at; col_matcher->num_of_at = matcher->num_of_at;
col_matcher->num_of_mt = matcher->num_of_mt; + col_matcher->hash_definer = matcher->hash_definer; col_matcher->attr.priority = matcher->attr.priority; col_matcher->flags = matcher->flags; col_matcher->flags |= MLX5DR_MATCHER_FLAGS_COLISION; diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h index 4bdb33b11f..c012c0c193 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.h +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -23,7 +23,8 @@ #define MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH 2 enum mlx5dr_matcher_flags { - MLX5DR_MATCHER_FLAGS_COLISION = 1 << 0, + MLX5DR_MATCHER_FLAGS_HASH_DEFINER = 1 << 0, + MLX5DR_MATCHER_FLAGS_COLISION = 1 << 1, }; struct mlx5dr_match_template { @@ -69,6 +70,7 @@ struct mlx5dr_matcher { struct mlx5dr_matcher *col_matcher; struct mlx5dr_matcher_match_ste match_ste; struct mlx5dr_matcher_action_ste action_ste; + struct mlx5dr_definer *hash_definer; LIST_ENTRY(mlx5dr_matcher) next; }; @@ -78,6 +80,12 @@ mlx5dr_matcher_mt_is_jumbo(struct mlx5dr_match_template *mt) return mlx5dr_definer_is_jumbo(mt->definer); } +static inline bool mlx5dr_matcher_req_fw_wqe(struct mlx5dr_matcher *matcher) +{ + /* Currently HWS doesn't support hash different from match or range */ + return unlikely(matcher->flags & MLX5DR_MATCHER_FLAGS_HASH_DEFINER); +} + int mlx5dr_matcher_conv_items_to_prm(uint64_t *match_buf, struct rte_flow_item *items, uint8_t *match_criteria, From patchwork Tue Jan 31 09:33:41 2023 X-Patchwork-Submitter: Alex Vesker X-Patchwork-Id: 122736 X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker To: , , , "Matan Azrad" CC: , Subject: [v1 12/16] net/mlx5/hws: add range definer creation support Date: Tue, 31 Jan 2023 11:33:41 +0200 Message-ID: <20230131093346.1261066-13-valex@nvidia.com> In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com> References: <20230131093346.1261066-1-valex@nvidia.com>
List-Id: DPDK patches and discussions

Calculate and create an additional definer used for range check during matcher creation. In such a case two definers will be created: one for exact matching and one for range matching. Since the HW GTA WQE doesn't support the needed range format, rule insertion is done using the FW GTA WQE command.
Signed-off-by: Alex Vesker --- drivers/net/mlx5/hws/mlx5dr_definer.c | 255 +++++++++++++++++++++++--- drivers/net/mlx5/hws/mlx5dr_definer.h | 16 +- drivers/net/mlx5/hws/mlx5dr_matcher.c | 27 ++- drivers/net/mlx5/hws/mlx5dr_matcher.h | 17 +- 4 files changed, 281 insertions(+), 34 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c index 260e6c5d1d..cf84fbea71 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.c +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -1508,26 +1508,33 @@ mlx5dr_definer_mt_set_fc(struct mlx5dr_match_template *mt, struct mlx5dr_definer_fc *fc, uint8_t *hl) { - uint32_t fc_sz = 0; + uint32_t fc_sz = 0, fcr_sz = 0; int i; for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) if (fc[i].tag_set) - fc_sz++; + fc[i].is_range ? fcr_sz++ : fc_sz++; - mt->fc = simple_calloc(fc_sz, sizeof(*mt->fc)); + mt->fc = simple_calloc(fc_sz + fcr_sz, sizeof(*mt->fc)); if (!mt->fc) { rte_errno = ENOMEM; return rte_errno; } + mt->fcr = mt->fc + fc_sz; + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { if (!fc[i].tag_set) continue; fc[i].fname = i; - memcpy(&mt->fc[mt->fc_sz++], &fc[i], sizeof(*mt->fc)); - DR_SET(hl, -1, fc[i].byte_off, fc[i].bit_off, fc[i].bit_mask); + + if (fc[i].is_range) { + memcpy(&mt->fcr[mt->fcr_sz++], &fc[i], sizeof(*mt->fcr)); + } else { + memcpy(&mt->fc[mt->fc_sz++], &fc[i], sizeof(*mt->fc)); + DR_SET(hl, -1, fc[i].byte_off, fc[i].bit_off, fc[i].bit_mask); + } } return 0; @@ -1686,7 +1693,7 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx, mt->item_flags = item_flags; - /* Fill in headers layout and allocate fc array on mt */ + /* Fill in headers layout and allocate fc & fcr array on mt */ ret = mlx5dr_definer_mt_set_fc(mt, fc, hl); if (ret) { DR_LOG(ERR, "Failed to set field copy to match template"); @@ -1855,9 +1862,92 @@ mlx5dr_definer_copy_sel_ctrl(struct mlx5dr_definer_sel_ctrl *ctrl, } static int -mlx5dr_definer_find_best_hl_fit(struct mlx5dr_context *ctx, - struct mlx5dr_definer 
*definer, - uint8_t *hl) +mlx5dr_definer_find_best_range_fit(struct mlx5dr_definer *definer, + struct mlx5dr_matcher *matcher) +{ + uint8_t tag_byte_offset[MLX5DR_DEFINER_FNAME_MAX] = {0}; + uint8_t field_select[MLX5DR_DEFINER_FNAME_MAX] = {0}; + struct mlx5dr_definer_sel_ctrl ctrl = {0}; + uint32_t byte_offset, algn_byte_off; + struct mlx5dr_definer_fc *fcr; + bool require_dw; + int idx, i, j; + + /* Try to create a range definer */ + ctrl.allowed_full_dw = DW_SELECTORS_RANGE; + ctrl.allowed_bytes = BYTE_SELECTORS_RANGE; + + /* Multiple fields cannot share the same DW for range match. + * The HW doesn't recognize each field but compares the full dw. + * For example definer DW consists of FieldA_FieldB + * FieldA: Mask 0xFFFF range 0x1 to 0x2 + * FieldB: Mask 0xFFFF range 0x3 to 0x4 + * STE DW range will be 0x00010003 - 0x00020004 + * This will cause invalid match for FieldB if FieldA=1 and FieldB=8 + * Since 0x10003 < 0x10008 < 0x20004 + */ + for (i = 0; i < matcher->num_of_mt; i++) { + for (j = 0; j < matcher->mt[i].fcr_sz; j++) { + fcr = &matcher->mt[i].fcr[j]; + + /* Found - Reuse previous mt binding */ + if (field_select[fcr->fname]) { + fcr->byte_off = tag_byte_offset[fcr->fname]; + continue; + } + + /* Not found */ + require_dw = fcr->byte_off >= (64 * DW_SIZE); + if (require_dw || ctrl.used_bytes == ctrl.allowed_bytes) { + /* Try to cover using DW selector */ + if (ctrl.used_full_dw == ctrl.allowed_full_dw) + goto not_supported; + + ctrl.full_dw_selector[ctrl.used_full_dw++] = + fcr->byte_off / DW_SIZE; + + /* Bind DW */ + idx = ctrl.used_full_dw - 1; + byte_offset = fcr->byte_off % DW_SIZE; + byte_offset += DW_SIZE * (DW_SELECTORS - idx - 1); + } else { + /* Try to cover using Bytes selectors */ + if (ctrl.used_bytes == ctrl.allowed_bytes) + goto not_supported; + + algn_byte_off = DW_SIZE * (fcr->byte_off / DW_SIZE); + ctrl.byte_selector[ctrl.used_bytes++] = algn_byte_off + 3; + ctrl.byte_selector[ctrl.used_bytes++] = algn_byte_off + 2; + 
ctrl.byte_selector[ctrl.used_bytes++] = algn_byte_off + 1; + ctrl.byte_selector[ctrl.used_bytes++] = algn_byte_off; + + /* Bind BYTE */ + byte_offset = DW_SIZE * DW_SELECTORS; + byte_offset += BYTE_SELECTORS - ctrl.used_bytes; + byte_offset += fcr->byte_off % DW_SIZE; + } + + fcr->byte_off = byte_offset; + tag_byte_offset[fcr->fname] = byte_offset; + field_select[fcr->fname] = 1; + } + } + + mlx5dr_definer_copy_sel_ctrl(&ctrl, definer); + definer->type = MLX5DR_DEFINER_TYPE_RANGE; + + return 0; + +not_supported: + DR_LOG(ERR, "Unable to find supporting range definer combination"); + rte_errno = ENOTSUP; + return rte_errno; +} + +static int +mlx5dr_definer_find_best_match_fit(struct mlx5dr_context *ctx, + struct mlx5dr_definer *definer, + uint8_t *hl) { struct mlx5dr_definer_sel_ctrl ctrl = {0}; bool found; @@ -1923,6 +2013,43 @@ void mlx5dr_definer_create_tag(const struct rte_flow_item *items, } } +static uint32_t mlx5dr_definer_get_range_byte_off(uint32_t match_byte_off) +{ + uint8_t curr_dw_idx = match_byte_off / DW_SIZE; + uint8_t new_dw_idx; + + /* Range DW can have the following values 7,8,9,10 + * -DW7 is mapped to DW9 + * -DW8 is mapped to DW7 + * -DW9 is mapped to DW5 + * -DW10 is mapped to DW3 + * To reduce calculation the following formula is used: + */ + new_dw_idx = curr_dw_idx * (-2) + 23; + + return new_dw_idx * DW_SIZE + match_byte_off % DW_SIZE; +} + +void mlx5dr_definer_create_tag_range(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + struct mlx5dr_definer_fc tmp_fc; + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + tmp_fc = *fc; + /* Set MAX value */ + tmp_fc.byte_off = mlx5dr_definer_get_range_byte_off(fc->byte_off); + tmp_fc.tag_set(&tmp_fc, items[fc->item_idx].last, tag); + /* Set MIN value */ + tmp_fc.byte_off += DW_SIZE; + tmp_fc.tag_set(&tmp_fc, items[fc->item_idx].spec, tag); + fc++; + } +} + int mlx5dr_definer_get_id(struct mlx5dr_definer *definer) { return definer->obj->id; @@ 
-1951,27 +2078,26 @@ mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, static int mlx5dr_definer_calc_layout(struct mlx5dr_matcher *matcher, - struct mlx5dr_definer *match_definer) + struct mlx5dr_definer *match_definer, + struct mlx5dr_definer *range_definer) { struct mlx5dr_context *ctx = matcher->tbl->ctx; struct mlx5dr_match_template *mt = matcher->mt; - uint8_t *match_hl, *hl_buff; + uint8_t *match_hl; int i, ret; /* Union header-layout (hl) is used for creating a single definer * field layout used with different bitmasks for hash and match. */ - hl_buff = simple_calloc(1, MLX5_ST_SZ_BYTES(definer_hl)); - if (!hl_buff) { + match_hl = simple_calloc(1, MLX5_ST_SZ_BYTES(definer_hl)); + if (!match_hl) { DR_LOG(ERR, "Failed to allocate memory for header layout"); rte_errno = ENOMEM; return rte_errno; } - match_hl = hl_buff; - /* Convert all mt items to header layout (hl) - * and allocate the match field copy array (fc). + * and allocate the match and range field copy array (fc & fcr). 
*/ for (i = 0; i < matcher->num_of_mt; i++) { ret = mlx5dr_definer_conv_items_to_hl(ctx, &mt[i], match_hl); @@ -1982,13 +2108,20 @@ mlx5dr_definer_calc_layout(struct mlx5dr_matcher *matcher, } /* Find the match definer layout for header layout match union */ - ret = mlx5dr_definer_find_best_hl_fit(ctx, match_definer, match_hl); + ret = mlx5dr_definer_find_best_match_fit(ctx, match_definer, match_hl); if (ret) { DR_LOG(ERR, "Failed to create match definer from header layout"); goto free_fc; } - simple_free(hl_buff); + /* Find the range definer layout for match templates fcrs */ + ret = mlx5dr_definer_find_best_range_fit(range_definer, matcher); + if (ret) { + DR_LOG(ERR, "Failed to create range definer from header layout"); + goto free_fc; + } + + simple_free(match_hl); return 0; free_fc: @@ -1996,7 +2129,7 @@ mlx5dr_definer_calc_layout(struct mlx5dr_matcher *matcher, if (mt[i].fc) simple_free(mt[i].fc); - simple_free(hl_buff); + simple_free(match_hl); return rte_errno; } @@ -2005,7 +2138,8 @@ mlx5dr_definer_alloc(struct ibv_context *ibv_ctx, struct mlx5dr_definer_fc *fc, int fc_sz, struct rte_flow_item *items, - struct mlx5dr_definer *layout) + struct mlx5dr_definer *layout, + bool bind_fc) { struct mlx5dr_cmd_definer_create_attr def_attr = {0}; struct mlx5dr_definer *definer; @@ -2021,10 +2155,12 @@ mlx5dr_definer_alloc(struct ibv_context *ibv_ctx, memcpy(definer, layout, sizeof(*definer)); /* Align field copy array based on given layout */ - ret = mlx5dr_definer_fc_bind(definer, fc, fc_sz); - if (ret) { - DR_LOG(ERR, "Failed to bind field copy to definer"); - goto free_definer; + if (bind_fc) { + ret = mlx5dr_definer_fc_bind(definer, fc, fc_sz); + if (ret) { + DR_LOG(ERR, "Failed to bind field copy to definer"); + goto free_definer; + } } /* Create the tag mask used for definer creation */ @@ -2067,7 +2203,8 @@ mlx5dr_definer_matcher_match_init(struct mlx5dr_context *ctx, mt[i].fc, mt[i].fc_sz, mt[i].items, - match_layout); + match_layout, + true); if 
(!mt[i].definer) { DR_LOG(ERR, "Failed to create match definer"); goto free_definers; @@ -2091,6 +2228,58 @@ mlx5dr_definer_matcher_match_uninit(struct mlx5dr_matcher *matcher) mlx5dr_definer_free(matcher->mt[i].definer); } +static int +mlx5dr_definer_matcher_range_init(struct mlx5dr_context *ctx, + struct mlx5dr_matcher *matcher, + struct mlx5dr_definer *range_layout) +{ + struct mlx5dr_match_template *mt = matcher->mt; + int i; + + /* Create optional range definers */ + for (i = 0; i < matcher->num_of_mt; i++) { + if (!mt[i].fcr_sz) + continue; + + /* All must use range if requested */ + if (i && !mt[i - 1].range_definer) { + DR_LOG(ERR, "Using range and non-range templates is not allowed"); + goto free_definers; + } + + matcher->flags |= MLX5DR_MATCHER_FLAGS_RANGE_DEFINER; + /* Create definer without fcr binding, already bound */ + mt[i].range_definer = mlx5dr_definer_alloc(ctx->ibv_ctx, + mt[i].fcr, + mt[i].fcr_sz, + mt[i].items, + range_layout, + false); + if (!mt[i].range_definer) { + DR_LOG(ERR, "Failed to create range definer"); + goto free_definers; + } + } + return 0; + +free_definers: + while (i--) + if (mt[i].range_definer) + mlx5dr_definer_free(mt[i].range_definer); + + return rte_errno; +} + +static void +mlx5dr_definer_matcher_range_uninit(struct mlx5dr_matcher *matcher) +{ + int i; + + for (i = 0; i < matcher->num_of_mt; i++) + if (matcher->mt[i].range_definer) + mlx5dr_definer_free(matcher->mt[i].range_definer); +} + static int mlx5dr_definer_matcher_hash_init(struct mlx5dr_context *ctx, struct mlx5dr_matcher *matcher) @@ -2169,13 +2358,13 @@ int mlx5dr_definer_matcher_init(struct mlx5dr_context *ctx, struct mlx5dr_matcher *matcher) { struct mlx5dr_definer match_layout = {0}; + struct mlx5dr_definer range_layout = {0}; int ret, i; if (matcher->flags & MLX5DR_MATCHER_FLAGS_COLISION) return 0; - /* Calculate header layout based on matcher items */ - ret = mlx5dr_definer_calc_layout(matcher, &match_layout); + ret = mlx5dr_definer_calc_layout(matcher,
&match_layout, &range_layout); if (ret) { DR_LOG(ERR, "Failed to calculate matcher definer layout"); return ret; @@ -2188,15 +2377,24 @@ int mlx5dr_definer_matcher_init(struct mlx5dr_context *ctx, goto free_fc; } + /* Calculate definers needed for range */ + ret = mlx5dr_definer_matcher_range_init(ctx, matcher, &range_layout); + if (ret) { + DR_LOG(ERR, "Failed to init range definers"); + goto uninit_match_definer; + } + /* Calculate partial hash definer */ ret = mlx5dr_definer_matcher_hash_init(ctx, matcher); if (ret) { DR_LOG(ERR, "Failed to init hash definer"); - goto uninit_match_definer; + goto uninit_range_definer; } return 0; +uninit_range_definer: + mlx5dr_definer_matcher_range_uninit(matcher); uninit_match_definer: mlx5dr_definer_matcher_match_uninit(matcher); free_fc: @@ -2214,6 +2412,7 @@ void mlx5dr_definer_matcher_uninit(struct mlx5dr_matcher *matcher) return; mlx5dr_definer_matcher_hash_uninit(matcher); + mlx5dr_definer_matcher_range_uninit(matcher); mlx5dr_definer_matcher_match_uninit(matcher); for (i = 0; i < matcher->num_of_mt; i++) diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h index a14a08838a..dd9a297007 100644 --- a/drivers/net/mlx5/hws/mlx5dr_definer.h +++ b/drivers/net/mlx5/hws/mlx5dr_definer.h @@ -5,11 +5,17 @@ #ifndef MLX5DR_DEFINER_H_ #define MLX5DR_DEFINER_H_ +/* Max available selectors */ +#define DW_SELECTORS 9 +#define BYTE_SELECTORS 8 + /* Selectors based on match TAG */ #define DW_SELECTORS_MATCH 6 #define DW_SELECTORS_LIMITED 3 -#define DW_SELECTORS 9 -#define BYTE_SELECTORS 8 + +/* Selectors based on range TAG */ +#define DW_SELECTORS_RANGE 2 +#define BYTE_SELECTORS_RANGE 8 enum mlx5dr_definer_fname { MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_O, @@ -112,6 +118,7 @@ enum mlx5dr_definer_fname { enum mlx5dr_definer_type { MLX5DR_DEFINER_TYPE_MATCH, MLX5DR_DEFINER_TYPE_JUMBO, + MLX5DR_DEFINER_TYPE_RANGE, }; struct mlx5dr_definer_fc { @@ -573,6 +580,11 @@ void mlx5dr_definer_create_tag(const struct
rte_flow_item *items, uint32_t fc_sz, uint8_t *tag); +void mlx5dr_definer_create_tag_range(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag); + int mlx5dr_definer_get_id(struct mlx5dr_definer *definer); int mlx5dr_definer_matcher_init(struct mlx5dr_context *ctx, diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c index e860c274cf..de688f6873 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.c +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -360,6 +360,12 @@ static bool mlx5dr_matcher_supp_fw_wqe(struct mlx5dr_matcher *matcher) return false; } + if ((matcher->flags & MLX5DR_MATCHER_FLAGS_RANGE_DEFINER) && + !IS_BIT_SET(caps->supp_ste_fromat_gen_wqe, MLX5_IFC_RTC_STE_FORMAT_RANGE)) { + DR_LOG(INFO, "Extended match gen wqe RANGE format not supported"); + return false; + } + if (!(caps->supp_type_gen_wqe & MLX5_GENERATE_WQE_TYPE_FLOW_UPDATE)) { DR_LOG(ERR, "Gen WQE command not supporting GTA"); return false; @@ -460,14 +466,20 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, ste = &matcher->match_ste.ste; ste->order = attr->table.sz_col_log + attr->table.sz_row_log; + /* Add additional rows due to additional range STE */ + if (mlx5dr_matcher_mt_is_range(mt)) + ste->order++; + rtc_attr.log_size = attr->table.sz_row_log; rtc_attr.log_depth = attr->table.sz_col_log; rtc_attr.is_frst_jumbo = mlx5dr_matcher_mt_is_jumbo(mt); + rtc_attr.is_scnd_range = mlx5dr_matcher_mt_is_range(mt); rtc_attr.miss_ft_id = matcher->end_ft->id; if (attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH) { /* The usual Hash Table */ rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; + if (matcher->hash_definer) { /* Specify definer_id_0 is used for hashing */ rtc_attr.fw_gen_wqe = true; @@ -477,6 +489,16 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, } else { /* The first mt is used since all share the same definer */ rtc_attr.match_definer_0 = 
mlx5dr_definer_get_id(mt->definer); + + /* This is tricky: instead of passing two definers for + * match and range, we specify that this RTC uses a hash + * definer. This allows us to use any range definer, + * since only the first STE is used for hashing anyway. + */ + if (matcher->flags & MLX5DR_MATCHER_FLAGS_RANGE_DEFINER) { + rtc_attr.fw_gen_wqe = true; + rtc_attr.num_hash_definer = 1; + } } } else if (attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_INDEX) { rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; @@ -751,7 +773,7 @@ static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) struct mlx5dr_pool_attr pool_attr = {0}; int ret; - /* Calculate match and hash definers */ + /* Calculate match, range and hash definers */ ret = mlx5dr_definer_matcher_init(ctx, matcher); if (ret) { DR_LOG(ERR, "Failed to set matcher templates with match definers"); @@ -772,6 +794,9 @@ static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL; pool_attr.alloc_log_sz = matcher->attr.table.sz_col_log + matcher->attr.table.sz_row_log; + /* Add additional rows due to additional range STE */ + if (matcher->flags & MLX5DR_MATCHER_FLAGS_RANGE_DEFINER) + pool_attr.alloc_log_sz++; mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); matcher->match_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h index c012c0c193..a95cfdec6f 100644 --- a/drivers/net/mlx5/hws/mlx5dr_matcher.h +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -23,8 +23,9 @@ #define MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH 2 enum mlx5dr_matcher_flags { - MLX5DR_MATCHER_FLAGS_HASH_DEFINER = 1 << 0, - MLX5DR_MATCHER_FLAGS_COLISION = 1 << 1, + MLX5DR_MATCHER_FLAGS_RANGE_DEFINER = 1 << 0, + MLX5DR_MATCHER_FLAGS_HASH_DEFINER = 1 << 1, + MLX5DR_MATCHER_FLAGS_COLISION = 1 << 2, }; struct mlx5dr_match_template { @@ -32,7 +33,9 @@ struct mlx5dr_match_template {
struct mlx5dr_definer *definer; struct mlx5dr_definer *range_definer; struct mlx5dr_definer_fc *fc; + struct mlx5dr_definer_fc *fcr; uint16_t fc_sz; + uint16_t fcr_sz; uint64_t item_flags; uint8_t vport_item_id; enum mlx5dr_match_template_flags flags; @@ -80,10 +83,18 @@ mlx5dr_matcher_mt_is_jumbo(struct mlx5dr_match_template *mt) return mlx5dr_definer_is_jumbo(mt->definer); } +static inline bool +mlx5dr_matcher_mt_is_range(struct mlx5dr_match_template *mt) +{ + return (!!mt->range_definer); +} + static inline bool mlx5dr_matcher_req_fw_wqe(struct mlx5dr_matcher *matcher) { /* Currently HWS doesn't support hash different from match or range */ - return unlikely(matcher->flags & MLX5DR_MATCHER_FLAGS_HASH_DEFINER); + return unlikely(matcher->flags & + (MLX5DR_MATCHER_FLAGS_HASH_DEFINER | + MLX5DR_MATCHER_FLAGS_RANGE_DEFINER)); } int mlx5dr_matcher_conv_items_to_prm(uint64_t *match_buf, From patchwork Tue Jan 31 09:33:42 2023 X-Patchwork-Submitter: Alex Vesker X-Patchwork-Id: 122737 X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
Subject: [v1 13/16] net/mlx5/hws: add FW WQE rule creation logic
Date: Tue, 31 Jan 2023 11:33:42 +0200
Message-ID: <20230131093346.1261066-14-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

FW WQE and HW WQE insertion are done in a similar way, but in order not to
jeopardize performance, FW rule creation is done through a new, dedicated
rule creation function. The deletion function is shared between both flows.

Signed-off-by: Alex Vesker
---
 drivers/net/mlx5/hws/mlx5dr_rule.c | 180 +++++++++++++++++++++++++++--
 drivers/net/mlx5/hws/mlx5dr_rule.h |   2 +
 drivers/net/mlx5/hws/mlx5dr_send.h |   9 +-
 3 files changed, 180 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index f5a0c46315..9d5e5b11a5 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -112,6 +112,62 @@ static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
 	mlx5dr_send_engine_gen_comp(queue, user_data, comp_status);
 }
 
+static void
+mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule,
+			     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
+		uint8_t *src_tag;
+
+		/* Save match definer id and tag for delete */
+		rule->tag_ptr = simple_calloc(2, sizeof(*rule->tag_ptr));
+		assert(rule->tag_ptr);
+
+		src_tag = (uint8_t *)ste_attr->wqe_data->tag;
+		memcpy(rule->tag_ptr[0].match, src_tag, MLX5DR_MATCH_TAG_SZ);
+		rule->tag_ptr[1].reserved[0] = ste_attr->send_attr.match_definer_id;
+
+		/* Save range definer id and tag for delete */
+		if (ste_attr->range_wqe_data) {
+			src_tag = (uint8_t *)ste_attr->range_wqe_data->tag;
+			memcpy(rule->tag_ptr[1].match, src_tag, MLX5DR_MATCH_TAG_SZ);
+			rule->tag_ptr[1].reserved[1] = ste_attr->send_attr.range_definer_id;
+		}
+		return;
+	}
+
+	if (ste_attr->wqe_tag_is_jumbo)
+		memcpy(rule->tag.jumbo, ste_attr->wqe_data->jumbo, MLX5DR_JUMBO_TAG_SZ);
+	else
+		memcpy(rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ);
+}
+
+static void
+mlx5dr_rule_clear_delete_info(struct mlx5dr_rule *rule)
+{
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher)))
+		simple_free(rule->tag_ptr);
+}
+
+static void
+mlx5dr_rule_load_delete_info(struct mlx5dr_rule *rule,
+			     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
+		/* Load match definer id and tag for delete */
+		ste_attr->wqe_tag = &rule->tag_ptr[0];
+		ste_attr->send_attr.match_definer_id = rule->tag_ptr[1].reserved[0];
+
+		/* Load range definer id and tag for delete */
+		if (rule->matcher->flags & MLX5DR_MATCHER_FLAGS_RANGE_DEFINER) {
+			ste_attr->range_wqe_tag = &rule->tag_ptr[1];
+			ste_attr->send_attr.range_definer_id = rule->tag_ptr[1].reserved[1];
+		}
+	} else {
+		ste_attr->wqe_tag = &rule->tag;
+	}
+}
+
 static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule,
 					struct mlx5dr_rule_attr *attr)
 {
@@ -180,6 +236,97 @@ static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule,
 	apply->require_dep = 0;
 }
 
+static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
+					 struct mlx5dr_rule_attr *attr,
+					 uint8_t mt_idx,
+					 const struct rte_flow_item items[],
+					 uint8_t at_idx,
+					 struct mlx5dr_rule_action rule_actions[])
+{
+	struct mlx5dr_action_template *at = &rule->matcher->at[at_idx];
+	struct mlx5dr_match_template *mt = &rule->matcher->mt[mt_idx];
+	struct mlx5dr_send_ring_dep_wqe range_wqe = {{0}};
+	struct mlx5dr_send_ring_dep_wqe match_wqe = {{0}};
+	bool is_range = mlx5dr_matcher_mt_is_range(mt);
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(mt);
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_context *ctx = matcher->tbl->ctx;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_actions_apply_data apply;
+	struct mlx5dr_send_engine *queue;
+
+	queue = &ctx->send_queue[attr->queue_id];
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		rte_errno = EIO;
+		return rte_errno;
+	}
+
+	mlx5dr_rule_create_init(rule, &ste_attr, &apply);
+	mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data);
+	mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data);
+
+	ste_attr.direct_index = 0;
+	ste_attr.rtc_0 = match_wqe.rtc_0;
+	ste_attr.rtc_1 = match_wqe.rtc_1;
+	ste_attr.used_id_rtc_0 = &rule->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->rtc_1;
+	ste_attr.retry_rtc_0 = match_wqe.retry_rtc_0;
+	ste_attr.retry_rtc_1 = match_wqe.retry_rtc_1;
+	ste_attr.send_attr.rule = match_wqe.rule;
+	ste_attr.send_attr.user_data = match_wqe.user_data;
+
+	ste_attr.send_attr.fence = 1;
+	ste_attr.send_attr.notify_hw = 1;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+
+	/* Prepare match STE TAG */
+	ste_attr.wqe_ctrl = &match_wqe.wqe_ctrl;
+	ste_attr.wqe_data = &match_wqe.wqe_data;
+	ste_attr.send_attr.match_definer_id = mlx5dr_definer_get_id(mt->definer);
+
+	mlx5dr_definer_create_tag(items,
+				  mt->fc,
+				  mt->fc_sz,
+				  (uint8_t *)match_wqe.wqe_data.action);
+
+	/* Prepare range STE TAG */
+	if (is_range) {
+		ste_attr.range_wqe_data = &range_wqe.wqe_data;
+		ste_attr.send_attr.len += MLX5DR_WQE_SZ_GTA_DATA;
+		ste_attr.send_attr.range_definer_id = mlx5dr_definer_get_id(mt->range_definer);
+
+		mlx5dr_definer_create_tag_range(items,
+						mt->fcr,
+						mt->fcr_sz,
+						(uint8_t *)range_wqe.wqe_data.action);
+	}
+
+	/* Apply the actions on the last STE */
+	apply.queue = queue;
+	apply.next_direct_idx = 0;
+	apply.rule_action = rule_actions;
+	apply.wqe_ctrl = &match_wqe.wqe_ctrl;
+	apply.wqe_data = (uint32_t *)(is_range ?
+				      &range_wqe.wqe_data :
+				      &match_wqe.wqe_data);
+
+	/* Skip setters[0] used for jumbo STE since not support with FW WQE */
+	mlx5dr_action_apply_setter(&apply, &at->setters[1], 0);
+
+	/* Send WQEs to FW */
+	mlx5dr_send_stes_fw(queue, &ste_attr);
+
+	/* Backup TAG on the rule for deletion */
+	mlx5dr_rule_save_delete_info(rule, &ste_attr);
+	mlx5dr_send_engine_inc_rule(queue);
+
+	/* Send dependent WQEs */
+	if (!attr->burst)
+		mlx5dr_send_all_dep_wqe(queue);
+
+	return 0;
+}
+
 static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 				  struct mlx5dr_rule_attr *attr,
 				  uint8_t mt_idx,
@@ -189,7 +336,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 {
 	struct mlx5dr_action_template *at = &rule->matcher->at[at_idx];
 	struct mlx5dr_match_template *mt = &rule->matcher->mt[mt_idx];
-	bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer);
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(mt);
 	struct mlx5dr_matcher *matcher = rule->matcher;
 	struct mlx5dr_context *ctx = matcher->tbl->ctx;
 	struct mlx5dr_send_ste_attr ste_attr = {0};
@@ -200,6 +347,11 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 	uint8_t total_stes, action_stes;
 	int i, ret;
 
+	/* Insert rule using FW WQE if cannot use GTA WQE */
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(matcher)))
+		return mlx5dr_rule_create_hws_fw_wqe(rule, attr, mt_idx, items,
+						     at_idx, rule_actions);
+
 	queue = &ctx->send_queue[attr->queue_id];
 	if (unlikely(mlx5dr_send_engine_err(queue))) {
 		rte_errno = EIO;
@@ -283,11 +435,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 	}
 
 	/* Backup TAG on the rule for deletion */
-	if (is_jumbo)
-		memcpy(rule->tag.jumbo, dep_wqe->wqe_data.action, MLX5DR_JUMBO_TAG_SZ);
-	else
-		memcpy(rule->tag.match, dep_wqe->wqe_data.tag, MLX5DR_MATCH_TAG_SZ);
-
+	mlx5dr_rule_save_delete_info(rule, &ste_attr);
 	mlx5dr_send_engine_inc_rule(queue);
 
 	/* Send dependent WQEs */
@@ -311,6 +459,9 @@ static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule,
 	/* Rule failed now we can safely release action STEs */
 	mlx5dr_rule_free_action_ste_idx(rule);
 
+	/* Clear complex tag */
+	mlx5dr_rule_clear_delete_info(rule);
+
 	/* If a rule that was indicated as burst (need to trigger HW) has failed
 	 * insertion we won't ring the HW as nothing is being written to the WQ.
 	 * In such case update the last WQE and ring the HW with that work
@@ -327,6 +478,9 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
 {
 	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
 	struct mlx5dr_matcher *matcher = rule->matcher;
+	bool fw_wqe = mlx5dr_matcher_req_fw_wqe(matcher);
+	bool is_range = mlx5dr_matcher_mt_is_range(matcher->mt);
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(matcher->mt);
 	struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl = {0};
 	struct mlx5dr_send_ste_attr ste_attr = {0};
 	struct mlx5dr_send_engine *queue;
@@ -361,6 +515,8 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
 	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
 	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
 	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+	if (unlikely(is_range))
+		ste_attr.send_attr.len += MLX5DR_WQE_SZ_GTA_DATA;
 	ste_attr.send_attr.rule = rule;
 	ste_attr.send_attr.notify_hw = !attr->burst;
@@ -371,13 +527,19 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
 	ste_attr.used_id_rtc_0 = &rule->rtc_0;
 	ste_attr.used_id_rtc_1 = &rule->rtc_1;
 	ste_attr.wqe_ctrl = &wqe_ctrl;
-	ste_attr.wqe_tag = &rule->tag;
-	ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt->definer);
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
 	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE;
 
 	if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher)))
 		ste_attr.direct_index = attr->rule_idx;
 
-	mlx5dr_send_ste(queue, &ste_attr);
+	mlx5dr_rule_load_delete_info(rule, &ste_attr);
+
+	if (unlikely(fw_wqe)) {
+		mlx5dr_send_stes_fw(queue, &ste_attr);
+		mlx5dr_rule_clear_delete_info(rule);
+	} else {
+		mlx5dr_send_ste(queue, &ste_attr);
+	}
 
 	return 0;
 }
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h
index f2fe418159..886cf77992 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.h
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.h
@@ -36,6 +36,8 @@ struct mlx5dr_rule {
 	struct mlx5dr_matcher *matcher;
 	union {
 		struct mlx5dr_rule_match_tag tag;
+		/* Pointer to tag to store more than one tag */
+		struct mlx5dr_rule_match_tag *tag_ptr;
 		struct ibv_flow *flow;
 	};
 	uint32_t rtc_0; /* The RTC into which the STE was inserted */
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index 47bb66b3c7..d0977ec851 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -54,8 +54,13 @@ struct mlx5dr_wqe_gta_data_seg_ste {
 	__be32 rsvd0_ctr_id;
 	__be32 rsvd1_definer;
 	__be32 rsvd2[3];
-	__be32 action[3];
-	__be32 tag[8];
+	union {
+		struct {
+			__be32 action[3];
+			__be32 tag[8];
+		};
+		__be32 jumbo[11];
+	};
 };
 
 struct mlx5dr_wqe_gta_data_seg_arg {

From patchwork Tue Jan 31 09:33:43 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122732
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
Subject: [v1 14/16] net/mlx5/hws: add debug dump support for range and hash
Date: Tue, 31 Jan 2023 11:33:43 +0200
Message-ID: <20230131093346.1261066-15-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>
Add support for dumping range and hash definer objects. A hash definer is a
per-matcher object describing the fields used for hashing. A range definer is
a per-match-template object describing the fields used for range matching.
Both are optional, depending on the given match templates.
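The definer records added to the debug dump by this patch are flat CSV lines, written by `fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,", ...)` with the resource type, the definer and parent addresses, the definer object id, and the definer type. As a quick illustration, here is a small hypothetical parser for the first five fields of such a record; the type-code constants mirror the new `enum mlx5dr_debug_res_type` values, but the helper itself is not part of the patch or of DPDK:

```python
# Hypothetical parser sketch for the definer dump records emitted by
# mlx5dr_debug_dump_matcher_template_definer(). The numeric codes come
# from the enum mlx5dr_debug_res_type values added in mlx5dr_debug.h.
MATCH_DEFINER = 4203
HASH_DEFINER = 4205
RANGE_DEFINER = 4206

def parse_definer_record(line):
    """Parse the leading fields of a definer dump line."""
    fields = line.strip().split(",")
    return {
        "res_type": int(fields[0]),          # e.g. 4205 for a hash definer
        "definer_addr": int(fields[1], 16),  # definer object address
        "parent_addr": int(fields[2], 16),   # matcher (hash) or template (match/range)
        "obj_id": int(fields[3]),            # definer->obj->id
        "definer_type": int(fields[4]),      # definer->type
    }

# Example with a made-up record line:
rec = parse_definer_record("4205,0x7f00dead,0x7f00beef,17,0,")
assert rec["res_type"] == HASH_DEFINER
```

The trailing comma in the format string is followed by further per-definer data in the full dump, which this sketch ignores.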
Signed-off-by: Alex Vesker
---
 drivers/net/mlx5/hws/mlx5dr_debug.c | 37 ++++++++++++++++++++---------
 drivers/net/mlx5/hws/mlx5dr_debug.h |  4 +++-
 2 files changed, 29 insertions(+), 12 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index 9199ec16e0..b1d271eebe 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -34,15 +34,19 @@ const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type)
 
 static int
 mlx5dr_debug_dump_matcher_template_definer(FILE *f,
-					   struct mlx5dr_match_template *mt)
+					   void *parent_obj,
+					   struct mlx5dr_definer *definer,
+					   enum mlx5dr_debug_res_type type)
 {
-	struct mlx5dr_definer *definer = mt->definer;
 	int i, ret;
 
+	if (!definer)
+		return 0;
+
 	ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,",
-		      MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER,
+		      type,
 		      (uint64_t)(uintptr_t)definer,
-		      (uint64_t)(uintptr_t)mt,
+		      (uint64_t)(uintptr_t)parent_obj,
 		      definer->obj->id,
 		      definer->type);
 	if (ret < 0) {
@@ -89,29 +93,40 @@ static int
 mlx5dr_debug_dump_matcher_match_template(FILE *f, struct mlx5dr_matcher *matcher)
 {
 	bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL;
+	enum mlx5dr_debug_res_type res_type;
 	int i, ret;
 
 	for (i = 0; i < matcher->num_of_mt; i++) {
 		struct mlx5dr_match_template *mt = &matcher->mt[i];
 
-		ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d\n",
+		ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d\n",
			      MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE,
			      (uint64_t)(uintptr_t)mt,
			      (uint64_t)(uintptr_t)matcher,
			      is_root ? 0 : mt->fc_sz,
-			      mt->flags);
+			      mt->flags,
+			      is_root ? 0 : mt->fcr_sz);
 		if (ret < 0) {
 			rte_errno = EINVAL;
 			return rte_errno;
 		}
 
-		if (!is_root) {
-			ret = mlx5dr_debug_dump_matcher_template_definer(f, mt);
-			if (ret)
-				return ret;
-		}
+		res_type = MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_MATCH_DEFINER;
+		ret = mlx5dr_debug_dump_matcher_template_definer(f, mt, mt->definer, res_type);
+		if (ret)
+			return ret;
+
+		res_type = MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_RANGE_DEFINER;
+		ret = mlx5dr_debug_dump_matcher_template_definer(f, mt, mt->range_definer, res_type);
+		if (ret)
+			return ret;
 	}
 
+	res_type = MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_HASH_DEFINER;
+	ret = mlx5dr_debug_dump_matcher_template_definer(f, matcher, matcher->hash_definer, res_type);
+	if (ret)
+		return ret;
+
 	return 0;
 }
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.h b/drivers/net/mlx5/hws/mlx5dr_debug.h
index cf00170f7d..2c29ca295c 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.h
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.h
@@ -19,8 +19,10 @@ enum mlx5dr_debug_res_type {
 	MLX5DR_DEBUG_RES_TYPE_MATCHER = 4200,
 	MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR = 4201,
 	MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE = 4202,
+	MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_MATCH_DEFINER = 4203,
 	MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE = 4204,
-	MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER = 4203,
+	MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_HASH_DEFINER = 4205,
+	MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_RANGE_DEFINER = 4206,
 };
 
 const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type);

From patchwork Tue Jan 31 09:33:44 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122738
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
To: "Matan Azrad"
Subject: [v1 15/16] net/mlx5/hws: rename pattern cache object
Date: Tue, 31 Jan 2023 11:33:44 +0200
Message-ID: <20230131093346.1261066-16-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

To have the same naming convention for future caches, use cache and
cache item naming.
Signed-off-by: Alex Vesker
---
 drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 22 +++++++++++-----------
 drivers/net/mlx5/hws/mlx5dr_pat_arg.h |  6 +++---
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
index 152025d302..6ed04dac6d 100644
--- a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
+++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
@@ -94,13 +94,13 @@ static bool mlx5dr_pat_compare_pattern(enum mlx5dr_action_type cur_type,
 	return true;
 }
 
-static struct mlx5dr_pat_cached_pattern *
+static struct mlx5dr_pattern_cache_item *
 mlx5dr_pat_find_cached_pattern(struct mlx5dr_pattern_cache *cache,
 			       struct mlx5dr_action *action,
 			       uint16_t num_of_actions,
 			       __be64 *actions)
 {
-	struct mlx5dr_pat_cached_pattern *cached_pat;
+	struct mlx5dr_pattern_cache_item *cached_pat;
 
 	LIST_FOREACH(cached_pat, &cache->head, next) {
 		if (mlx5dr_pat_compare_pattern(cached_pat->type,
@@ -115,13 +115,13 @@ mlx5dr_pat_find_cached_pattern(struct mlx5dr_pattern_cache *cache,
 	return NULL;
 }
 
-static struct mlx5dr_pat_cached_pattern *
+static struct mlx5dr_pattern_cache_item *
 mlx5dr_pat_get_existing_cached_pattern(struct mlx5dr_pattern_cache *cache,
 				       struct mlx5dr_action *action,
 				       uint16_t num_of_actions,
 				       __be64 *actions)
 {
-	struct mlx5dr_pat_cached_pattern *cached_pattern;
+	struct mlx5dr_pattern_cache_item *cached_pattern;
 
 	cached_pattern = mlx5dr_pat_find_cached_pattern(cache, action, num_of_actions, actions);
 	if (cached_pattern) {
@@ -134,11 +134,11 @@ mlx5dr_pat_get_existing_cached_pattern(struct mlx5dr_pattern_cache *cache,
 	return cached_pattern;
 }
 
-static struct mlx5dr_pat_cached_pattern *
+static struct mlx5dr_pattern_cache_item *
 mlx5dr_pat_get_cached_pattern_by_action(struct mlx5dr_pattern_cache *cache,
 					struct mlx5dr_action *action)
 {
-	struct mlx5dr_pat_cached_pattern *cached_pattern;
+	struct mlx5dr_pattern_cache_item *cached_pattern;
 
 	LIST_FOREACH(cached_pattern, &cache->head, next) {
 		if (cached_pattern->mh_data.pattern_obj->id ==
 		    action->modify_header.pattern_obj->id)
@@ -148,14 +148,14 @@ mlx5dr_pat_get_cached_pattern_by_action(struct mlx5dr_pattern_cache *cache,
 	return NULL;
 }
 
-static struct mlx5dr_pat_cached_pattern *
+static struct mlx5dr_pattern_cache_item *
 mlx5dr_pat_add_pattern_to_cache(struct mlx5dr_pattern_cache *cache,
 				struct mlx5dr_devx_obj *pattern_obj,
 				enum mlx5dr_action_type type,
 				uint16_t num_of_actions,
 				__be64 *actions)
 {
-	struct mlx5dr_pat_cached_pattern *cached_pattern;
+	struct mlx5dr_pattern_cache_item *cached_pattern;
 
 	cached_pattern = simple_calloc(1, sizeof(*cached_pattern));
 	if (!cached_pattern) {
@@ -189,7 +189,7 @@ mlx5dr_pat_add_pattern_to_cache(struct mlx5dr_pattern_cache *cache,
 }
 
 static void
-mlx5dr_pat_remove_pattern(struct mlx5dr_pat_cached_pattern *cached_pattern)
+mlx5dr_pat_remove_pattern(struct mlx5dr_pattern_cache_item *cached_pattern)
 {
 	LIST_REMOVE(cached_pattern, next);
 	simple_free(cached_pattern->mh_data.data);
@@ -200,7 +200,7 @@ static void
 mlx5dr_pat_put_pattern(struct mlx5dr_pattern_cache *cache,
 		       struct mlx5dr_action *action)
 {
-	struct mlx5dr_pat_cached_pattern *cached_pattern;
+	struct mlx5dr_pattern_cache_item *cached_pattern;
 
 	pthread_spin_lock(&cache->lock);
 	cached_pattern = mlx5dr_pat_get_cached_pattern_by_action(cache, action);
@@ -225,7 +225,7 @@ static int mlx5dr_pat_get_pattern(struct mlx5dr_context *ctx,
 				  size_t pattern_sz,
 				  __be64 *pattern)
 {
-	struct mlx5dr_pat_cached_pattern *cached_pattern;
+	struct mlx5dr_pattern_cache_item *cached_pattern;
 	int ret = 0;
 
 	pthread_spin_lock(&ctx->pattern_cache->lock);
diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h
index d9353e9a3e..92db6d6aee 100644
--- a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h
+++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h
@@ -24,10 +24,10 @@ enum {
 struct mlx5dr_pattern_cache {
 	/* Protect pattern list */
 	pthread_spinlock_t lock;
-	LIST_HEAD(pattern_head, mlx5dr_pat_cached_pattern) head;
+	LIST_HEAD(pattern_head, mlx5dr_pattern_cache_item) head;
 };
 
-struct mlx5dr_pat_cached_pattern {
+struct mlx5dr_pattern_cache_item {
 	enum mlx5dr_action_type type;
 	struct {
 		struct mlx5dr_devx_obj *pattern_obj;
@@ -36,7 +36,7 @@ struct mlx5dr_pat_cached_pattern {
 		uint16_t num_of_actions;
 	} mh_data;
 	uint32_t refcount;
-	LIST_ENTRY(mlx5dr_pat_cached_pattern) next;
+	LIST_ENTRY(mlx5dr_pattern_cache_item) next;
 };
 
 enum mlx5dr_arg_chunk_size

From patchwork Tue Jan 31 09:33:45 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122734
From: Alex Vesker
To: "Matan Azrad"
Subject: [v1 16/16] net/mlx5/hws: cache definer for reuse
Date: Tue, 31 Jan 2023 11:33:45 +0200
Message-ID: <20230131093346.1261066-17-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>
Definers are a limited resource in the system, per GVMI. To avoid failure,
we try to improve this by checking if it is possible to reuse the definers
in some cases. Added a cache on the context for this purpose.

Signed-off-by: Alex Vesker
---
 drivers/net/mlx5/hws/mlx5dr_context.c |  12 ++-
 drivers/net/mlx5/hws/mlx5dr_context.h |   1 +
 drivers/net/mlx5/hws/mlx5dr_definer.c | 122 ++++++++++++++++++++++----
 drivers/net/mlx5/hws/mlx5dr_definer.h |  14 +++
 4 files changed, 130 insertions(+), 19 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c
index 6627337d9e..08a5ee92a5 100644
--- a/drivers/net/mlx5/hws/mlx5dr_context.c
+++ b/drivers/net/mlx5/hws/mlx5dr_context.c
@@ -13,6 +13,9 @@ static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx)
 	if (mlx5dr_pat_init_pattern_cache(&ctx->pattern_cache))
 		return rte_errno;
 
+	if (mlx5dr_definer_init_cache(&ctx->definer_cache))
+		goto uninit_pat_cache;
+
 	/* Create an STC pool per FT type */
 	pool_attr.pool_type = MLX5DR_POOL_TYPE_STC;
 	pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL;
@@ -35,8 +38,10 @@ static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx)
 		if (ctx->stc_pool[i])
 			mlx5dr_pool_destroy(ctx->stc_pool[i]);
 
-	mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache);
+	mlx5dr_definer_uninit_cache(ctx->definer_cache);
 
+uninit_pat_cache:
+	mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache);
 	return rte_errno;
 }
 
@@ -44,12 +49,13 @@ static void mlx5dr_context_pools_uninit(struct mlx5dr_context *ctx)
 {
 	int i;
 
-	mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache);
-
 	for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) {
 		if (ctx->stc_pool[i])
 			mlx5dr_pool_destroy(ctx->stc_pool[i]);
 	}
+
+	mlx5dr_definer_uninit_cache(ctx->definer_cache);
+	mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache);
 }
 
 static int mlx5dr_context_init_pd(struct mlx5dr_context *ctx,
diff --git a/drivers/net/mlx5/hws/mlx5dr_context.h b/drivers/net/mlx5/hws/mlx5dr_context.h
index a38d9484b3..0ba8d0c92e 100644
--- a/drivers/net/mlx5/hws/mlx5dr_context.h
+++ b/drivers/net/mlx5/hws/mlx5dr_context.h
@@ -39,6 +39,7 @@ struct mlx5dr_context {
 	struct mlx5dr_context_common_res common_res[MLX5DR_TABLE_TYPE_MAX];
 	struct mlx5dr_context_shared_gvmi_res gvmi_res[MLX5DR_TABLE_TYPE_MAX];
 	struct mlx5dr_pattern_cache *pattern_cache;
+	struct mlx5dr_definer_cache *definer_cache;
 	pthread_spinlock_t ctrl_lock;
 	enum mlx5dr_context_flags flags;
 	struct mlx5dr_send_engine *send_queue;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index cf84fbea71..b91f98ee8f 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -2061,6 +2061,7 @@ mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
 {
 	int i;
 
+	/* Future: Optimize by comparing selectors with valid mask only */
 	for (i = 0; i < BYTE_SELECTORS; i++)
 		if (definer_a->byte_selector[i] != definer_b->byte_selector[i])
 			return 1;
@@ -2133,15 +2134,106 @@ mlx5dr_definer_calc_layout(struct mlx5dr_matcher *matcher,
 	return rte_errno;
 }
 
+int mlx5dr_definer_init_cache(struct mlx5dr_definer_cache **cache)
+{
+	struct mlx5dr_definer_cache *new_cache;
+
+	new_cache = simple_calloc(1, sizeof(*new_cache));
+	if (!new_cache) {
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+	LIST_INIT(&new_cache->head);
+	*cache = new_cache;
+
+	return 0;
+}
+
+void mlx5dr_definer_uninit_cache(struct mlx5dr_definer_cache *cache)
+{
+	simple_free(cache);
+}
+
+static struct mlx5dr_devx_obj *
+mlx5dr_definer_get_obj(struct mlx5dr_context *ctx,
+		       struct mlx5dr_definer *definer)
+{
+	struct mlx5dr_definer_cache *cache = ctx->definer_cache;
+	struct mlx5dr_cmd_definer_create_attr def_attr = {0};
+	struct mlx5dr_definer_cache_item *cached_definer;
+	struct mlx5dr_devx_obj *obj;
+
+	/* Search definer cache for requested definer */
+	LIST_FOREACH(cached_definer, &cache->head, next) {
+		if (mlx5dr_definer_compare(&cached_definer->definer, definer))
+			continue;
+
+		/* Reuse definer and set LRU (move to be first in the list) */
+		LIST_REMOVE(cached_definer, next);
+		LIST_INSERT_HEAD(&cache->head, cached_definer, next);
+		cached_definer->refcount++;
+		return cached_definer->definer.obj;
+	}
+
+	/* Allocate and create definer based on the bitmask tag */
+	def_attr.match_mask = definer->mask.jumbo;
+	def_attr.dw_selector = definer->dw_selector;
+	def_attr.byte_selector = definer->byte_selector;
+
+	obj = mlx5dr_cmd_definer_create(ctx->ibv_ctx, &def_attr);
+	if (!obj)
+		return NULL;
+
+	cached_definer = simple_calloc(1, sizeof(*cached_definer));
+	if (!cached_definer) {
+		rte_errno = ENOMEM;
+		goto free_definer_obj;
+	}
+
+	memcpy(&cached_definer->definer, definer, sizeof(*definer));
+	cached_definer->definer.obj = obj;
+	cached_definer->refcount = 1;
+	LIST_INSERT_HEAD(&cache->head, cached_definer, next);
+
+	return obj;
+
+free_definer_obj:
+	mlx5dr_cmd_destroy_obj(obj);
+	return NULL;
+}
+
+static void
+mlx5dr_definer_put_obj(struct mlx5dr_context *ctx,
+		       struct mlx5dr_devx_obj *obj)
+{
+	struct mlx5dr_definer_cache_item *cached_definer;
+
+	LIST_FOREACH(cached_definer, &ctx->definer_cache->head, next) {
+		if (cached_definer->definer.obj != obj)
+			continue;
+
+		/* Object found */
+		if (--cached_definer->refcount)
+			return;
+
+		LIST_REMOVE(cached_definer, next);
+		mlx5dr_cmd_destroy_obj(cached_definer->definer.obj);
+		simple_free(cached_definer);
+		return;
+	}
+
+	/* Programming error, object must be part of cache */
+	assert(false);
+}
+
 static struct mlx5dr_definer *
-mlx5dr_definer_alloc(struct ibv_context *ibv_ctx,
+mlx5dr_definer_alloc(struct mlx5dr_context *ctx,
 		     struct mlx5dr_definer_fc *fc,
 		     int fc_sz,
 		     struct rte_flow_item *items,
 		     struct mlx5dr_definer *layout,
 		     bool bind_fc)
 {
-	struct mlx5dr_cmd_definer_create_attr def_attr = {0};
 	struct mlx5dr_definer *definer;
 	int ret;
 
@@ -2166,12 +2258,7 @@ mlx5dr_definer_alloc(struct ibv_context *ibv_ctx,
 	/* Create the tag mask used for definer creation */
 	mlx5dr_definer_create_tag_mask(items, fc, fc_sz, definer->mask.jumbo);
 
-	/* Create definer based on the bitmask tag */
-	def_attr.match_mask = definer->mask.jumbo;
-	def_attr.dw_selector = layout->dw_selector;
-	def_attr.byte_selector = layout->byte_selector;
-
-	definer->obj = mlx5dr_cmd_definer_create(ibv_ctx, &def_attr);
+	definer->obj = mlx5dr_definer_get_obj(ctx, definer);
 	if (!definer->obj)
 		goto free_definer;
 
@@ -2183,9 +2270,10 @@ mlx5dr_definer_alloc(struct ibv_context *ibv_ctx,
 }
 
 static void
-mlx5dr_definer_free(struct mlx5dr_definer *definer)
+mlx5dr_definer_free(struct mlx5dr_context *ctx,
+		    struct mlx5dr_definer *definer)
 {
-	mlx5dr_cmd_destroy_obj(definer->obj);
+	mlx5dr_definer_put_obj(ctx, definer->obj);
 	simple_free(definer);
 }
 
@@ -2199,7 +2287,7 @@ mlx5dr_definer_matcher_match_init(struct mlx5dr_context *ctx,
 
 	/* Create mendatory match definer */
 	for (i = 0; i < matcher->num_of_mt; i++) {
-		mt[i].definer = mlx5dr_definer_alloc(ctx->ibv_ctx,
+		mt[i].definer = mlx5dr_definer_alloc(ctx,
 						     mt[i].fc,
 						     mt[i].fc_sz,
 						     mt[i].items,
@@ -2214,7 +2302,7 @@ mlx5dr_definer_matcher_match_init(struct mlx5dr_context *ctx,
 
 free_definers:
 	while (i--)
-		mlx5dr_definer_free(mt[i].definer);
+		mlx5dr_definer_free(ctx, mt[i].definer);
 
 	return rte_errno;
 }
@@ -2222,10 +2310,11 @@ mlx5dr_definer_matcher_match_init(struct mlx5dr_context *ctx,
 static void
 mlx5dr_definer_matcher_match_uninit(struct mlx5dr_matcher *matcher)
 {
+	struct mlx5dr_context *ctx = matcher->tbl->ctx;
 	int i;
 
 	for (i = 0; i < matcher->num_of_mt; i++)
-		mlx5dr_definer_free(matcher->mt[i].definer);
+		mlx5dr_definer_free(ctx, matcher->mt[i].definer);
 }
 
 static int
@@ -2249,7 +2338,7 @@ mlx5dr_definer_matcher_range_init(struct mlx5dr_context *ctx,
 		matcher->flags |= MLX5DR_MATCHER_FLAGS_RANGE_DEFINER;
 		/* Create definer without fcr binding, already binded */
-		mt[i].range_definer = mlx5dr_definer_alloc(ctx->ibv_ctx,
+		mt[i].range_definer = mlx5dr_definer_alloc(ctx,
 							   mt[i].fcr,
 							   mt[i].fcr_sz,
 							   mt[i].items,
@@ -2265,7 +2354,7 @@ mlx5dr_definer_matcher_range_init(struct mlx5dr_context *ctx,
 
 free_definers:
 	while (i--)
 		if (mt[i].range_definer)
-			mlx5dr_definer_free(mt[i].range_definer);
+			mlx5dr_definer_free(ctx, mt[i].range_definer);
 
 	return rte_errno;
 }
@@ -2273,11 +2362,12 @@ mlx5dr_definer_matcher_range_init(struct mlx5dr_context *ctx,
 static void
 mlx5dr_definer_matcher_range_uninit(struct mlx5dr_matcher *matcher)
 {
+	struct mlx5dr_context *ctx = matcher->tbl->ctx;
 	int i;
 
 	for (i = 0; i < matcher->num_of_mt; i++)
 		if (matcher->mt[i].range_definer)
-			mlx5dr_definer_free(matcher->mt[i].range_definer);
+			mlx5dr_definer_free(ctx, matcher->mt[i].range_definer);
 }
 
 static int
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index dd9a297007..464872acd6 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -569,6 +569,16 @@ struct mlx5dr_definer {
 	struct mlx5dr_devx_obj *obj;
 };
 
+struct mlx5dr_definer_cache {
+	LIST_HEAD(definer_head, mlx5dr_definer_cache_item) head;
+};
+
+struct mlx5dr_definer_cache_item {
+	struct mlx5dr_definer definer;
+	uint32_t refcount;
+	LIST_ENTRY(mlx5dr_definer_cache_item) next;
+};
+
 static inline bool
 mlx5dr_definer_is_jumbo(struct mlx5dr_definer *definer)
 {
@@ -592,4 +602,8 @@ int mlx5dr_definer_matcher_init(struct mlx5dr_context *ctx,
 
 void mlx5dr_definer_matcher_uninit(struct mlx5dr_matcher *matcher);
 
+int mlx5dr_definer_init_cache(struct mlx5dr_definer_cache **cache);
+
+void mlx5dr_definer_uninit_cache(struct mlx5dr_definer_cache *cache);
+
 #endif
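The mechanism this patch adds in mlx5dr_definer_get_obj()/mlx5dr_definer_put_obj() is a refcounted list cache with move-to-front reuse. The sketch below shows the same pattern in isolation; it is not driver code. The `toy_cache_*` names are invented for the example, an 8-byte key stands in for the definer's selectors and match mask, and plain `calloc`/`free` replace the driver's `simple_calloc`/`simple_free`.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/queue.h>

/* Hypothetical cached object: keyed by a small blob instead of
 * real definer selectors/mask. */
struct toy_cache_item {
	uint8_t key[8];
	uint32_t refcount;
	LIST_ENTRY(toy_cache_item) next;
};

LIST_HEAD(toy_cache_head, toy_cache_item);

/* Get: on a hit, bump the refcount and move the item to the list
 * head (cheap LRU, like the driver does); on a miss, allocate a
 * fresh item with refcount 1. */
static struct toy_cache_item *
toy_cache_get(struct toy_cache_head *head, const uint8_t key[8])
{
	struct toy_cache_item *item;

	LIST_FOREACH(item, head, next) {
		if (memcmp(item->key, key, 8))
			continue;

		/* Reuse and move to front */
		LIST_REMOVE(item, next);
		LIST_INSERT_HEAD(head, item, next);
		item->refcount++;
		return item;
	}

	item = calloc(1, sizeof(*item));
	if (!item)
		return NULL;

	memcpy(item->key, key, 8);
	item->refcount = 1;
	LIST_INSERT_HEAD(head, item, next);
	return item;
}

/* Put: release one reference; destroy the item only when the last
 * user drops it. */
static void
toy_cache_put(struct toy_cache_item *item)
{
	if (--item->refcount)
		return;

	LIST_REMOVE(item, next);
	free(item);
}
```

Two gets with the same key return the same item with refcount 2, so only one underlying object would ever be created per distinct key; the object is destroyed only on the final put, which is exactly why the driver can hand the same definer object to multiple matchers.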