From patchwork Wed Mar  6 20:21:47 2024
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 138061
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
 Yevgeny Kliteynik
Cc: dev@dpdk.org, Alex Vesker
Subject: [PATCH 1/4] net/mlx5/hws: fix direct index insert on dep wqe
Date: Wed, 6 Mar 2024 21:21:47 +0100
Message-ID: <20240306202150.79577-1-dsosnowski@nvidia.com>

From: Alex Vesker

When a dependent WQE was required and a direct index was needed, the
direct index was not set on the dep_wqe. This led to an incorrect
insertion at index zero.

Fixes: 38b5bf6452a6 ("net/mlx5/hws: support insert/distribute RTC properties")
Cc: stable@dpdk.org

Signed-off-by: Alex Vesker
Acked-by: Dariusz Sosnowski
---
 drivers/net/mlx5/hws/mlx5dr_rule.c | 15 ++++++++-------
 drivers/net/mlx5/hws/mlx5dr_send.c |  1 +
 drivers/net/mlx5/hws/mlx5dr_send.h |  1 +
 3 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index aa00c54e53..f14e1e6ecd 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -58,14 +58,16 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe,
                                      struct mlx5dr_rule *rule,
                                      const struct rte_flow_item *items,
                                      struct mlx5dr_match_template *mt,
-                                     void *user_data)
+                                     struct mlx5dr_rule_attr *attr)
 {
         struct mlx5dr_matcher *matcher = rule->matcher;
         struct mlx5dr_table *tbl = matcher->tbl;
         bool skip_rx, skip_tx;
 
         dep_wqe->rule = rule;
-        dep_wqe->user_data = user_data;
+        dep_wqe->user_data = attr->user_data;
+        dep_wqe->direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
+                                attr->rule_idx : 0;
 
         if (!items) { /* rule update */
                 dep_wqe->rtc_0 = rule->rtc_0;
@@ -374,8 +376,8 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
         }
 
         mlx5dr_rule_create_init(rule, &ste_attr, &apply, false);
-        mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data);
-        mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data);
+        mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr);
+        mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr);
 
         ste_attr.direct_index = 0;
         ste_attr.rtc_0 = match_wqe.rtc_0;
@@ -482,7 +484,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
          * dep_wqe buffers (ctrl, data) are also reused for all STE writes.
          */
         dep_wqe = mlx5dr_send_add_new_dep_wqe(queue);
-        mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, mt, attr->user_data);
+        mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, mt, attr);
 
         ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl;
         ste_attr.wqe_data = &dep_wqe->wqe_data;
@@ -544,8 +546,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
                         ste_attr.used_id_rtc_1 = &rule->rtc_1;
                         ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0;
                         ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1;
-                        ste_attr.direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
-                                                attr->rule_idx : 0;
+                        ste_attr.direct_index = dep_wqe->direct_index;
                 } else {
                         apply.next_direct_idx = --ste_attr.direct_index;
                 }
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index 64138279a1..f749401c6f 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -50,6 +50,7 @@ void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue)
                 ste_attr.used_id_rtc_1 = &dep_wqe->rule->rtc_1;
                 ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl;
                 ste_attr.wqe_data = &dep_wqe->wqe_data;
+                ste_attr.direct_index = dep_wqe->direct_index;
 
                 mlx5dr_send_ste(queue, &ste_attr);
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index c1e8616f7e..c4eaea52ab 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -106,6 +106,7 @@ struct mlx5dr_send_ring_dep_wqe {
         uint32_t rtc_1;
         uint32_t retry_rtc_0;
         uint32_t retry_rtc_1;
+        uint32_t direct_index;
         void *user_data;
 };
 
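The core of this fix is a lifetime issue: attr->rule_idx is only valid
while the insertion request is being built, but the dependent WQE may be
posted later, when mlx5dr_send_all_dep_wqe() flushes the queue. The sketch
below is an editorial illustration of that pattern in plain C (not mlx5
code; all names are hypothetical): any value that parameterizes deferred
work must be snapshotted into the work item at enqueue time, not
recomputed at flush time.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct dep_wqe {
        uint32_t direct_index; /* snapshot taken at enqueue time */
};

static struct dep_wqe pending[8];
static unsigned int n_pending;

/* Enqueue: capture the insertion index in the deferred work item itself. */
static void enqueue_rule(uint32_t rule_idx, int insert_by_index)
{
        pending[n_pending++].direct_index = insert_by_index ? rule_idx : 0;
}

/* Flush: the request that carried rule_idx is gone; use the snapshot. */
static void flush_all(void)
{
        unsigned int i;

        for (i = 0; i < n_pending; i++)
                printf("inserting at index %" PRIu32 "\n",
                       pending[i].direct_index);
        n_pending = 0;
}

int main(void)
{
        enqueue_rule(5, 1);
        enqueue_rule(7, 1);
        flush_all(); /* prints 5 and 7; without the snapshot, both land at 0 */
        return 0;
}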
From patchwork Wed Mar  6 20:21:48 2024
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 138064
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad, Bing Zhao
Cc: dev@dpdk.org, stable@dpdk.org
Subject: [PATCH 2/4] net/mlx5: fix templates clean up of FDB control flow rules
Date: Wed, 6 Mar 2024 21:21:48 +0100
Message-ID: <20240306202150.79577-2-dsosnowski@nvidia.com>
In-Reply-To: <20240306202150.79577-1-dsosnowski@nvidia.com>
References: <20240306202150.79577-1-dsosnowski@nvidia.com>

This patch refactors the creation and clean up of templates used for
FDB control flow rules when HWS is enabled.

All pattern and actions templates, as well as template tables, are now
stored in a separate structure, `mlx5_flow_hw_ctrl_fdb`. It is allocated
if and only if E-Switch is enabled. During HWS clean up, all of these
templates are explicitly destroyed, instead of relying on the general
template clean up.

Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS")
Fixes: 49dffadf4b0c ("net/mlx5: fix LACP redirection in Rx domain")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5.h         |   6 +-
 drivers/net/mlx5/mlx5_flow.h    |  19 +++
 drivers/net/mlx5/mlx5_flow_hw.c | 255 ++++++++++++++++++--------------
 3 files changed, 166 insertions(+), 114 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 2fb3bb65cc..db68c8f884 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1894,11 +1894,7 @@ struct mlx5_priv {
         rte_spinlock_t hw_ctrl_lock;
         LIST_HEAD(hw_ctrl_flow, mlx5_hw_ctrl_flow) hw_ctrl_flows;
         LIST_HEAD(hw_ext_ctrl_flow, mlx5_hw_ctrl_flow) hw_ext_ctrl_flows;
-        struct rte_flow_template_table *hw_esw_sq_miss_root_tbl;
-        struct rte_flow_template_table *hw_esw_sq_miss_tbl;
-        struct rte_flow_template_table *hw_esw_zero_tbl;
-        struct rte_flow_template_table *hw_tx_meta_cpy_tbl;
-        struct rte_flow_template_table *hw_lacp_rx_tbl;
+        struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
         struct rte_flow_pattern_template *hw_tx_repr_tagging_pt;
         struct rte_flow_actions_template *hw_tx_repr_tagging_at;
         struct rte_flow_template_table *hw_tx_repr_tagging_tbl;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 714a41e997..d58290e5b4 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2775,6 +2775,25 @@ struct mlx5_flow_hw_ctrl_rx {
                 [MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX];
 };
 
+/* Contains all templates required for control flow rules in FDB with HWS. */
+struct mlx5_flow_hw_ctrl_fdb {
+        struct rte_flow_pattern_template *esw_mgr_items_tmpl;
+        struct rte_flow_actions_template *regc_jump_actions_tmpl;
+        struct rte_flow_template_table *hw_esw_sq_miss_root_tbl;
+        struct rte_flow_pattern_template *regc_sq_items_tmpl;
+        struct rte_flow_actions_template *port_actions_tmpl;
+        struct rte_flow_template_table *hw_esw_sq_miss_tbl;
+        struct rte_flow_pattern_template *port_items_tmpl;
+        struct rte_flow_actions_template *jump_one_actions_tmpl;
+        struct rte_flow_template_table *hw_esw_zero_tbl;
+        struct rte_flow_pattern_template *tx_meta_items_tmpl;
+        struct rte_flow_actions_template *tx_meta_actions_tmpl;
+        struct rte_flow_template_table *hw_tx_meta_cpy_tbl;
+        struct rte_flow_pattern_template *lacp_rx_items_tmpl;
+        struct rte_flow_actions_template *lacp_rx_actions_tmpl;
+        struct rte_flow_template_table *hw_lacp_rx_tbl;
+};
+
 #define MLX5_CTRL_PROMISCUOUS    (RTE_BIT32(0))
 #define MLX5_CTRL_ALL_MULTICAST  (RTE_BIT32(1))
 #define MLX5_CTRL_BROADCAST      (RTE_BIT32(2))
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 4216433c6e..21c37b7539 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -9327,6 +9327,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
         return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, error);
 }
 
+/**
+ * Cleans up all template tables and pattern, and actions templates used for
+ * FDB control flow rules.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ */
+static void
+flow_hw_cleanup_ctrl_fdb_tables(struct rte_eth_dev *dev)
+{
+        struct mlx5_priv *priv = dev->data->dev_private;
+        struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
+
+        if (!priv->hw_ctrl_fdb)
+                return;
+        hw_ctrl_fdb = priv->hw_ctrl_fdb;
+        /* Clean up templates used for LACP default miss table. */
+        if (hw_ctrl_fdb->hw_lacp_rx_tbl)
+                claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_lacp_rx_tbl, NULL));
+        if (hw_ctrl_fdb->lacp_rx_actions_tmpl)
+                claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->lacp_rx_actions_tmpl,
+                           NULL));
+        if (hw_ctrl_fdb->lacp_rx_items_tmpl)
+                claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->lacp_rx_items_tmpl,
+                           NULL));
+        /* Clean up templates used for default Tx metadata copy. */
+        if (hw_ctrl_fdb->hw_tx_meta_cpy_tbl)
+                claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_tx_meta_cpy_tbl, NULL));
+        if (hw_ctrl_fdb->tx_meta_actions_tmpl)
+                claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->tx_meta_actions_tmpl,
+                           NULL));
+        if (hw_ctrl_fdb->tx_meta_items_tmpl)
+                claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->tx_meta_items_tmpl,
+                           NULL));
+        /* Clean up templates used for default FDB jump rule. */
+        if (hw_ctrl_fdb->hw_esw_zero_tbl)
+                claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_zero_tbl, NULL));
+        if (hw_ctrl_fdb->jump_one_actions_tmpl)
+                claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->jump_one_actions_tmpl,
+                           NULL));
+        if (hw_ctrl_fdb->port_items_tmpl)
+                claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->port_items_tmpl,
+                           NULL));
+        /* Clean up templates used for default SQ miss flow rules - non-root table. */
+        if (hw_ctrl_fdb->hw_esw_sq_miss_tbl)
+                claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_tbl, NULL));
+        if (hw_ctrl_fdb->regc_sq_items_tmpl)
+                claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->regc_sq_items_tmpl,
+                           NULL));
+        if (hw_ctrl_fdb->port_actions_tmpl)
+                claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->port_actions_tmpl,
+                           NULL));
+        /* Clean up templates used for default SQ miss flow rules - root table. */
+        if (hw_ctrl_fdb->hw_esw_sq_miss_root_tbl)
+                claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_root_tbl, NULL));
+        if (hw_ctrl_fdb->regc_jump_actions_tmpl)
+                claim_zero(flow_hw_actions_template_destroy(dev,
+                           hw_ctrl_fdb->regc_jump_actions_tmpl, NULL));
+        if (hw_ctrl_fdb->esw_mgr_items_tmpl)
+                claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->esw_mgr_items_tmpl,
+                           NULL));
+        /* Clean up templates structure for FDB control flow rules. */
+        mlx5_free(hw_ctrl_fdb);
+        priv->hw_ctrl_fdb = NULL;
+}
+
 /*
  * Create a table on the root group to for the LACP traffic redirecting.
  *
@@ -9376,110 +9442,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev,
  * @return
  *   0 on success, negative values otherwise
  */
-static __rte_unused int
+static int
 flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error)
 {
         struct mlx5_priv *priv = dev->data->dev_private;
-        struct rte_flow_pattern_template *esw_mgr_items_tmpl = NULL;
-        struct rte_flow_pattern_template *regc_sq_items_tmpl = NULL;
-        struct rte_flow_pattern_template *port_items_tmpl = NULL;
-        struct rte_flow_pattern_template *tx_meta_items_tmpl = NULL;
-        struct rte_flow_pattern_template *lacp_rx_items_tmpl = NULL;
-        struct rte_flow_actions_template *regc_jump_actions_tmpl = NULL;
-        struct rte_flow_actions_template *port_actions_tmpl = NULL;
-        struct rte_flow_actions_template *jump_one_actions_tmpl = NULL;
-        struct rte_flow_actions_template *tx_meta_actions_tmpl = NULL;
-        struct rte_flow_actions_template *lacp_rx_actions_tmpl = NULL;
+        struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
         uint32_t xmeta = priv->sh->config.dv_xmeta_en;
         uint32_t repr_matching = priv->sh->config.repr_matching;
-        int ret;
 
+        MLX5_ASSERT(priv->hw_ctrl_fdb == NULL);
+        hw_ctrl_fdb = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hw_ctrl_fdb), 0, SOCKET_ID_ANY);
+        if (!hw_ctrl_fdb) {
+                DRV_LOG(ERR, "port %u failed to allocate memory for FDB control flow templates",
+                        dev->data->port_id);
+                rte_errno = ENOMEM;
+                goto err;
+        }
+        priv->hw_ctrl_fdb = hw_ctrl_fdb;
         /* Create templates and table for default SQ miss flow rules - root table. */
-        esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error);
-        if (!esw_mgr_items_tmpl) {
+        hw_ctrl_fdb->esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error);
+        if (!hw_ctrl_fdb->esw_mgr_items_tmpl) {
                 DRV_LOG(ERR, "port %u failed to create E-Switch Manager item"
                         " template for control flows", dev->data->port_id);
                 goto err;
         }
-        regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template(dev, error);
-        if (!regc_jump_actions_tmpl) {
+        hw_ctrl_fdb->regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template
+                        (dev, error);
+        if (!hw_ctrl_fdb->regc_jump_actions_tmpl) {
                 DRV_LOG(ERR, "port %u failed to create REG_C set and jump action template"
                         " for control flows", dev->data->port_id);
                 goto err;
         }
-        MLX5_ASSERT(priv->hw_esw_sq_miss_root_tbl == NULL);
-        priv->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
-                (dev, esw_mgr_items_tmpl, regc_jump_actions_tmpl, error);
-        if (!priv->hw_esw_sq_miss_root_tbl) {
+        hw_ctrl_fdb->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
+                        (dev, hw_ctrl_fdb->esw_mgr_items_tmpl, hw_ctrl_fdb->regc_jump_actions_tmpl,
+                         error);
+        if (!hw_ctrl_fdb->hw_esw_sq_miss_root_tbl) {
                 DRV_LOG(ERR, "port %u failed to create table for default sq miss (root table)"
                         " for control flows", dev->data->port_id);
                 goto err;
         }
         /* Create templates and table for default SQ miss flow rules - non-root table. */
-        regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error);
-        if (!regc_sq_items_tmpl) {
+        hw_ctrl_fdb->regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error);
+        if (!hw_ctrl_fdb->regc_sq_items_tmpl) {
                 DRV_LOG(ERR, "port %u failed to create SQ item template for"
                         " control flows", dev->data->port_id);
                 goto err;
         }
-        port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error);
-        if (!port_actions_tmpl) {
+        hw_ctrl_fdb->port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error);
+        if (!hw_ctrl_fdb->port_actions_tmpl) {
                 DRV_LOG(ERR, "port %u failed to create port action template"
                         " for control flows", dev->data->port_id);
                 goto err;
         }
-        MLX5_ASSERT(priv->hw_esw_sq_miss_tbl == NULL);
-        priv->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table(dev, regc_sq_items_tmpl,
-                                                                     port_actions_tmpl, error);
-        if (!priv->hw_esw_sq_miss_tbl) {
+        hw_ctrl_fdb->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table
+                        (dev, hw_ctrl_fdb->regc_sq_items_tmpl, hw_ctrl_fdb->port_actions_tmpl,
+                         error);
+        if (!hw_ctrl_fdb->hw_esw_sq_miss_tbl) {
                 DRV_LOG(ERR, "port %u failed to create table for default sq miss (non-root table)"
                         " for control flows", dev->data->port_id);
                 goto err;
         }
         /* Create templates and table for default FDB jump flow rules. */
-        port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error);
-        if (!port_items_tmpl) {
+        hw_ctrl_fdb->port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error);
+        if (!hw_ctrl_fdb->port_items_tmpl) {
                 DRV_LOG(ERR, "port %u failed to create SQ item template for"
                         " control flows", dev->data->port_id);
                 goto err;
         }
-        jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
+        hw_ctrl_fdb->jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
                         (dev, MLX5_HW_LOWEST_USABLE_GROUP, error);
-        if (!jump_one_actions_tmpl) {
+        if (!hw_ctrl_fdb->jump_one_actions_tmpl) {
                 DRV_LOG(ERR, "port %u failed to create jump action template"
                         " for control flows", dev->data->port_id);
                 goto err;
         }
-        MLX5_ASSERT(priv->hw_esw_zero_tbl == NULL);
-        priv->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table(dev, port_items_tmpl,
-                                                               jump_one_actions_tmpl,
-                                                               error);
-        if (!priv->hw_esw_zero_tbl) {
+        hw_ctrl_fdb->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table
+                        (dev, hw_ctrl_fdb->port_items_tmpl, hw_ctrl_fdb->jump_one_actions_tmpl,
+                         error);
+        if (!hw_ctrl_fdb->hw_esw_zero_tbl) {
                 DRV_LOG(ERR, "port %u failed to create table for default jump to group 1"
                         " for control flows", dev->data->port_id);
                 goto err;
         }
         /* Create templates and table for default Tx metadata copy flow rule. */
         if (!repr_matching && xmeta == MLX5_XMETA_MODE_META32_HWS) {
-                tx_meta_items_tmpl =
+                hw_ctrl_fdb->tx_meta_items_tmpl =
                         flow_hw_create_tx_default_mreg_copy_pattern_template(dev, error);
-                if (!tx_meta_items_tmpl) {
+                if (!hw_ctrl_fdb->tx_meta_items_tmpl) {
                         DRV_LOG(ERR, "port %u failed to Tx metadata copy pattern"
                                 " template for control flows", dev->data->port_id);
                         goto err;
                 }
-                tx_meta_actions_tmpl =
+                hw_ctrl_fdb->tx_meta_actions_tmpl =
                         flow_hw_create_tx_default_mreg_copy_actions_template(dev, error);
-                if (!tx_meta_actions_tmpl) {
+                if (!hw_ctrl_fdb->tx_meta_actions_tmpl) {
                         DRV_LOG(ERR, "port %u failed to Tx metadata copy actions"
                                 " template for control flows", dev->data->port_id);
                         goto err;
                 }
-                MLX5_ASSERT(priv->hw_tx_meta_cpy_tbl == NULL);
-                priv->hw_tx_meta_cpy_tbl =
-                        flow_hw_create_tx_default_mreg_copy_table(dev, tx_meta_items_tmpl,
-                                                                  tx_meta_actions_tmpl, error);
-                if (!priv->hw_tx_meta_cpy_tbl) {
+                hw_ctrl_fdb->hw_tx_meta_cpy_tbl =
+                        flow_hw_create_tx_default_mreg_copy_table
+                                (dev, hw_ctrl_fdb->tx_meta_items_tmpl,
+                                 hw_ctrl_fdb->tx_meta_actions_tmpl, error);
+                if (!hw_ctrl_fdb->hw_tx_meta_cpy_tbl) {
                         DRV_LOG(ERR, "port %u failed to create table for default"
                                 " Tx metadata copy flow rule", dev->data->port_id);
                         goto err;
@@ -9487,71 +9552,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
                 }
         }
         /* Create LACP default miss table. */
         if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0 && priv->master) {
-                lacp_rx_items_tmpl = flow_hw_create_lacp_rx_pattern_template(dev, error);
-                if (!lacp_rx_items_tmpl) {
+                hw_ctrl_fdb->lacp_rx_items_tmpl =
+                        flow_hw_create_lacp_rx_pattern_template(dev, error);
+                if (!hw_ctrl_fdb->lacp_rx_items_tmpl) {
                         DRV_LOG(ERR, "port %u failed to create pattern template"
                                 " for LACP Rx traffic", dev->data->port_id);
                         goto err;
                 }
-                lacp_rx_actions_tmpl = flow_hw_create_lacp_rx_actions_template(dev, error);
-                if (!lacp_rx_actions_tmpl) {
+                hw_ctrl_fdb->lacp_rx_actions_tmpl =
+                        flow_hw_create_lacp_rx_actions_template(dev, error);
+                if (!hw_ctrl_fdb->lacp_rx_actions_tmpl) {
                         DRV_LOG(ERR, "port %u failed to create actions template"
                                 " for LACP Rx traffic", dev->data->port_id);
                         goto err;
                 }
-                priv->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table(dev, lacp_rx_items_tmpl,
-                                                                    lacp_rx_actions_tmpl, error);
-                if (!priv->hw_lacp_rx_tbl) {
+                hw_ctrl_fdb->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table
+                                (dev, hw_ctrl_fdb->lacp_rx_items_tmpl,
+                                 hw_ctrl_fdb->lacp_rx_actions_tmpl, error);
+                if (!hw_ctrl_fdb->hw_lacp_rx_tbl) {
                         DRV_LOG(ERR, "port %u failed to create template table for"
                                 " for LACP Rx traffic", dev->data->port_id);
                         goto err;
                 }
         }
         return 0;
+
 err:
-        /* Do not overwrite the rte_errno. */
-        ret = -rte_errno;
-        if (ret == 0)
-                ret = rte_flow_error_set(error, EINVAL,
-                                         RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-                                         "Failed to create control tables.");
-        if (priv->hw_tx_meta_cpy_tbl) {
-                flow_hw_table_destroy(dev, priv->hw_tx_meta_cpy_tbl, NULL);
-                priv->hw_tx_meta_cpy_tbl = NULL;
-        }
-        if (priv->hw_esw_zero_tbl) {
-                flow_hw_table_destroy(dev, priv->hw_esw_zero_tbl, NULL);
-                priv->hw_esw_zero_tbl = NULL;
-        }
-        if (priv->hw_esw_sq_miss_tbl) {
-                flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_tbl, NULL);
-                priv->hw_esw_sq_miss_tbl = NULL;
-        }
-        if (priv->hw_esw_sq_miss_root_tbl) {
-                flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_root_tbl, NULL);
-                priv->hw_esw_sq_miss_root_tbl = NULL;
-        }
-        if (lacp_rx_actions_tmpl)
-                flow_hw_actions_template_destroy(dev, lacp_rx_actions_tmpl, NULL);
-        if (tx_meta_actions_tmpl)
-                flow_hw_actions_template_destroy(dev, tx_meta_actions_tmpl, NULL);
-        if (jump_one_actions_tmpl)
-                flow_hw_actions_template_destroy(dev, jump_one_actions_tmpl, NULL);
-        if (port_actions_tmpl)
-                flow_hw_actions_template_destroy(dev, port_actions_tmpl, NULL);
-        if (regc_jump_actions_tmpl)
-                flow_hw_actions_template_destroy(dev, regc_jump_actions_tmpl, NULL);
-        if (lacp_rx_items_tmpl)
-                flow_hw_pattern_template_destroy(dev, lacp_rx_items_tmpl, NULL);
-        if (tx_meta_items_tmpl)
-                flow_hw_pattern_template_destroy(dev, tx_meta_items_tmpl, NULL);
-        if (port_items_tmpl)
-                flow_hw_pattern_template_destroy(dev, port_items_tmpl, NULL);
-        if (regc_sq_items_tmpl)
-                flow_hw_pattern_template_destroy(dev, regc_sq_items_tmpl, NULL);
-        if (esw_mgr_items_tmpl)
-                flow_hw_pattern_template_destroy(dev, esw_mgr_items_tmpl, NULL);
-        return ret;
+        flow_hw_cleanup_ctrl_fdb_tables(dev);
+        return -EINVAL;
 }
 
 static void
@@ -10583,6 +10611,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
         action_template_drop_release(dev);
         mlx5_flow_quota_destroy(dev);
         flow_hw_destroy_send_to_kernel_action(priv);
+        flow_hw_cleanup_ctrl_fdb_tables(dev);
         flow_hw_free_vport_actions(priv);
         for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
                 if (priv->hw_drop[i])
@@ -10645,6 +10674,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
         dev->flow_fp_ops = &rte_flow_fp_default_ops;
         flow_hw_rxq_flag_set(dev, false);
         flow_hw_flush_all_ctrl_flows(dev);
+        flow_hw_cleanup_ctrl_fdb_tables(dev);
         flow_hw_cleanup_tx_repr_tagging(dev);
         flow_hw_cleanup_ctrl_rx_tables(dev);
         action_template_drop_release(dev);
@@ -13211,8 +13241,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
                         proxy_port_id, port_id);
                 return 0;
         }
-        if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
-            !proxy_priv->hw_esw_sq_miss_tbl) {
+        if (!proxy_priv->hw_ctrl_fdb ||
+            !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl ||
+            !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl) {
                 DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
                              "default flow tables were not created.",
                              proxy_port_id, port_id);
@@ -13244,7 +13275,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
         actions[2] = (struct rte_flow_action) {
                 .type = RTE_FLOW_ACTION_TYPE_END,
         };
-        ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_root_tbl,
+        ret = flow_hw_create_ctrl_flow(dev, proxy_dev,
+                                       proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl,
                                        items, 0, actions, 0, &flow_info, external);
         if (ret) {
                 DRV_LOG(ERR, "Port %u failed to create root SQ miss flow rule for SQ %u, ret %d",
@@ -13275,7 +13307,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
                 .type = RTE_FLOW_ACTION_TYPE_END,
         };
         flow_info.type = MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS;
-        ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl,
+        ret = flow_hw_create_ctrl_flow(dev, proxy_dev,
+                                       proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl,
                                        items, 0, actions, 0, &flow_info, external);
         if (ret) {
                 DRV_LOG(ERR, "Port %u failed to create HWS SQ miss flow rule for SQ %u, ret %d",
@@ -13321,8 +13354,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
         proxy_priv = proxy_dev->data->dev_private;
         if (!proxy_priv->dr_ctx)
                 return 0;
-        if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
-            !proxy_priv->hw_esw_sq_miss_tbl)
+        if (!proxy_priv->hw_ctrl_fdb ||
+            !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl ||
+            !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl)
                 return 0;
         cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows);
         while (cf != NULL) {
@@ -13389,7 +13423,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
                         proxy_port_id, port_id);
                 return 0;
         }
-        if (!proxy_priv->hw_esw_zero_tbl) {
+        if (!proxy_priv->hw_ctrl_fdb || !proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl) {
                 DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
                              "default flow tables were not created.",
                              proxy_port_id, port_id);
@@ -13397,7 +13431,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
                 return -rte_errno;
         }
         return flow_hw_create_ctrl_flow(dev, proxy_dev,
-                                        proxy_priv->hw_esw_zero_tbl,
+                                        proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl,
                                         items, 0, actions, 0, &flow_info, false);
 }
 
@@ -13449,10 +13483,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
         };
 
         MLX5_ASSERT(priv->master);
-        if (!priv->dr_ctx || !priv->hw_tx_meta_cpy_tbl)
+        if (!priv->dr_ctx ||
+            !priv->hw_ctrl_fdb ||
+            !priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl)
                 return 0;
         return flow_hw_create_ctrl_flow(dev, dev,
-                                        priv->hw_tx_meta_cpy_tbl,
+                                        priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl,
                                         eth_all, 0, copy_reg_action, 0, &flow_info, false);
 }
 
@@ -13544,10 +13580,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
                 .type = MLX5_HW_CTRL_FLOW_TYPE_LACP_RX,
         };
 
-        if (!priv->dr_ctx || !priv->hw_lacp_rx_tbl)
+        if (!priv->dr_ctx || !priv->hw_ctrl_fdb || !priv->hw_ctrl_fdb->hw_lacp_rx_tbl)
                 return 0;
-        return flow_hw_create_ctrl_flow(dev, dev, priv->hw_lacp_rx_tbl, eth_lacp, 0,
-                                        miss_action, 0, &flow_info, false);
+        return flow_hw_create_ctrl_flow(dev, dev,
+                                        priv->hw_ctrl_fdb->hw_lacp_rx_tbl,
+                                        eth_lacp, 0, miss_action, 0, &flow_info, false);
 }
 
 static uint32_t
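The refactor in this patch replaces ten scattered local template pointers
and a hand-written error ladder with one owning structure and a single
idempotent cleanup routine, callable from both the creation error path and
normal teardown. The sketch below is an editorial illustration of that
ownership pattern in plain C (not mlx5 code; res_create()/res_destroy()
and all other names are hypothetical stand-ins):

#include <stdlib.h>

struct resource { int dummy; };

static struct resource *res_create(void)
{
        return calloc(1, sizeof(struct resource)); /* stub acquisition */
}

static void res_destroy(struct resource *r)
{
        free(r); /* stub release */
}

struct ctrl_tables {
        struct resource *pattern_tmpl;
        struct resource *actions_tmpl;
        struct resource *table;
};

struct ctx {
        struct ctrl_tables *ctrl; /* NULL when not allocated */
};

/* Safe to call at any point: checks every member and the struct itself. */
static void ctrl_tables_cleanup(struct ctx *c)
{
        if (!c->ctrl)
                return;
        if (c->ctrl->table)
                res_destroy(c->ctrl->table);
        if (c->ctrl->actions_tmpl)
                res_destroy(c->ctrl->actions_tmpl);
        if (c->ctrl->pattern_tmpl)
                res_destroy(c->ctrl->pattern_tmpl);
        free(c->ctrl);
        c->ctrl = NULL;
}

static int ctrl_tables_create(struct ctx *c)
{
        c->ctrl = calloc(1, sizeof(*c->ctrl));
        if (!c->ctrl)
                return -1;
        c->ctrl->pattern_tmpl = res_create();
        if (!c->ctrl->pattern_tmpl)
                goto err;
        c->ctrl->actions_tmpl = res_create();
        if (!c->ctrl->actions_tmpl)
                goto err;
        c->ctrl->table = res_create();
        if (!c->ctrl->table)
                goto err;
        return 0;
err:
        ctrl_tables_cleanup(c); /* one rollback path instead of N partial ones */
        return -1;
}

int main(void)
{
        struct ctx c = { .ctrl = NULL };

        if (ctrl_tables_create(&c) == 0)
                ctrl_tables_cleanup(&c); /* same routine on the success path */
        return 0;
}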
From patchwork Wed Mar  6 20:21:49 2024
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 138062
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad, Bing Zhao,
 Gregory Etelson, Michael Baum
Cc: dev@dpdk.org, stable@dpdk.org
Subject: [PATCH 3/4] net/mlx5: fix rollback on failed flow configure
Date: Wed, 6 Mar 2024 21:21:49 +0100
Message-ID: <20240306202150.79577-3-dsosnowski@nvidia.com>
In-Reply-To: <20240306202150.79577-1-dsosnowski@nvidia.com>
References: <20240306202150.79577-1-dsosnowski@nvidia.com>

If rte_flow_configure() failed, some port resources were not freed or
were not reset to their default state. As a result, assumptions made
elsewhere in the PMD were invalidated, which led to segmentation faults
during the release of HW Steering resources when the port was closed.

This patch adds the missing resource releases to the rollback procedure
in the mlx5 PMD implementation of rte_flow_configure(). The whole
rollback procedure is reordered for clarity, to resemble the reverse
order of resource allocation.

Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS")
Fixes: 8a5c816691e7 ("net/mlx5: create NAT64 actions during configuration")
Fixes: 773ca0e91ba1 ("net/mlx5: support VLAN push/pop/modify with HWS")
Fixes: 04a4de756e14 ("net/mlx5: support flow age action with HWS")
Fixes: c3f085a4858c ("net/mlx5: improve pattern template validation")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow_hw.c | 65 ++++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 25 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 21c37b7539..17ab3a98fe 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -10188,7 +10188,7 @@ flow_hw_compare_config(const struct mlx5_flow_hw_attr *hw_attr,
  * mlx5_dev_close -> flow_hw_resource_release -> flow_hw_actions_template_destroy
  */
 static void
-action_template_drop_release(struct rte_eth_dev *dev)
+flow_hw_action_template_drop_release(struct rte_eth_dev *dev)
 {
         int i;
         struct mlx5_priv *priv = dev->data->dev_private;
@@ -10204,7 +10204,7 @@ action_template_drop_release(struct rte_eth_dev *dev)
 }
 
 static int
-action_template_drop_init(struct rte_eth_dev *dev,
+flow_hw_action_template_drop_init(struct rte_eth_dev *dev,
                           struct rte_flow_error *error)
 {
         const struct rte_flow_action drop[2] = {
@@ -10466,7 +10466,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
         rte_spinlock_init(&priv->hw_ctrl_lock);
         LIST_INIT(&priv->hw_ctrl_flows);
         LIST_INIT(&priv->hw_ext_ctrl_flows);
-        ret = action_template_drop_init(dev, error);
+        ret = flow_hw_action_template_drop_init(dev, error);
         if (ret)
                 goto err;
         ret = flow_hw_create_ctrl_rx_tables(dev);
@@ -10594,6 +10594,15 @@ flow_hw_configure(struct rte_eth_dev *dev,
         dev->flow_fp_ops = &mlx5_flow_hw_fp_ops;
         return 0;
 err:
+        priv->hws_strict_queue = 0;
+        flow_hw_destroy_nat64_actions(priv);
+        flow_hw_destroy_vlan(dev);
+        if (priv->hws_age_req)
+                mlx5_hws_age_pool_destroy(priv);
+        if (priv->hws_cpool) {
+                mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool);
+                priv->hws_cpool = NULL;
+        }
         if (priv->hws_ctpool) {
                 flow_hw_ct_pool_destroy(dev, priv->hws_ctpool);
                 priv->hws_ctpool = NULL;
@@ -10602,29 +10611,38 @@ flow_hw_configure(struct rte_eth_dev *dev,
                 flow_hw_ct_mng_destroy(dev, priv->ct_mng);
                 priv->ct_mng = NULL;
         }
-        if (priv->hws_age_req)
-                mlx5_hws_age_pool_destroy(priv);
-        if (priv->hws_cpool) {
-                mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool);
-                priv->hws_cpool = NULL;
-        }
-        action_template_drop_release(dev);
-        mlx5_flow_quota_destroy(dev);
         flow_hw_destroy_send_to_kernel_action(priv);
         flow_hw_cleanup_ctrl_fdb_tables(dev);
         flow_hw_free_vport_actions(priv);
+        if (priv->hw_def_miss) {
+                mlx5dr_action_destroy(priv->hw_def_miss);
+                priv->hw_def_miss = NULL;
+        }
+        flow_hw_cleanup_tx_repr_tagging(dev);
         for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
-                if (priv->hw_drop[i])
+                if (priv->hw_drop[i]) {
                         mlx5dr_action_destroy(priv->hw_drop[i]);
-                if (priv->hw_tag[i])
+                        priv->hw_drop[i] = NULL;
+                }
+                if (priv->hw_tag[i]) {
                         mlx5dr_action_destroy(priv->hw_tag[i]);
+                        priv->hw_tag[i] = NULL;
+                }
         }
-        if (priv->hw_def_miss)
-                mlx5dr_action_destroy(priv->hw_def_miss);
-        flow_hw_destroy_nat64_actions(priv);
-        flow_hw_destroy_vlan(dev);
-        if (dr_ctx)
+        mlx5_flow_meter_uninit(dev);
+        mlx5_flow_quota_destroy(dev);
+        flow_hw_cleanup_ctrl_rx_tables(dev);
+        flow_hw_action_template_drop_release(dev);
+        if (dr_ctx) {
                 claim_zero(mlx5dr_context_close(dr_ctx));
+                priv->dr_ctx = NULL;
+        }
+        if (priv->shared_host) {
+                struct mlx5_priv *host_priv = priv->shared_host->data->dev_private;
+
+                __atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED);
+                priv->shared_host = NULL;
+        }
         for (i = 0; i < nb_q_updated; i++) {
                 rte_ring_free(priv->hw_q[i].indir_iq);
                 rte_ring_free(priv->hw_q[i].indir_cq);
@@ -10637,14 +10655,11 @@ flow_hw_configure(struct rte_eth_dev *dev,
                 mlx5_ipool_destroy(priv->acts_ipool);
                 priv->acts_ipool = NULL;
         }
-        if (_queue_attr)
-                mlx5_free(_queue_attr);
-        if (priv->shared_host) {
-                __atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED);
-                priv->shared_host = NULL;
-        }
         mlx5_free(priv->hw_attr);
         priv->hw_attr = NULL;
+        priv->nb_queue = 0;
+        if (_queue_attr)
+                mlx5_free(_queue_attr);
         /* Do not overwrite the internal errno information. */
         if (ret)
                 return ret;
@@ -10677,7 +10692,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
         flow_hw_cleanup_ctrl_fdb_tables(dev);
         flow_hw_cleanup_tx_repr_tagging(dev);
         flow_hw_cleanup_ctrl_rx_tables(dev);
-        action_template_drop_release(dev);
+        flow_hw_action_template_drop_release(dev);
         while (!LIST_EMPTY(&priv->flow_hw_grp)) {
                 grp = LIST_FIRST(&priv->flow_hw_grp);
                 flow_hw_group_unset_miss_group(dev, grp, NULL);
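The reordered rollback in this patch follows a common discipline: release
in the reverse order of acquisition, and NULL each pointer as it is
destroyed, so that a later teardown pass (here, flow_hw_resource_release())
sees consistent state and cannot double-free. The sketch below is an
editorial illustration of that discipline in plain C (not mlx5 code; the
struct and its members are hypothetical):

#include <stdlib.h>

struct ctx {
        void *a; /* acquired first */
        void *b; /* acquired second */
        void *c; /* acquired third */
};

static int ctx_init(struct ctx *x)
{
        x->a = malloc(16);
        if (!x->a)
                goto err;
        x->b = malloc(16);
        if (!x->b)
                goto err;
        x->c = malloc(16);
        if (!x->c)
                goto err;
        return 0;
err:
        /*
         * Reverse order of acquisition; free(NULL) is a no-op, so one
         * unified label handles a failure at any step. NULL-ing each
         * member keeps a later teardown pass from double-freeing.
         */
        free(x->c);
        x->c = NULL;
        free(x->b);
        x->b = NULL;
        free(x->a);
        x->a = NULL;
        return -1;
}

int main(void)
{
        struct ctx x = {0};

        if (ctx_init(&x) == 0) {
                /* Normal teardown: same reverse order as the error path. */
                free(x.c);
                free(x.b);
                free(x.a);
        }
        return 0;
}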
From patchwork Wed Mar  6 20:21:50 2024
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 138063
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Cc: dev@dpdk.org, stable@dpdk.org
Subject: [PATCH 4/4] net/mlx5: fix flow configure validation
Date: Wed, 6 Mar 2024 21:21:50 +0100
Message-ID: <20240306202150.79577-4-dsosnowski@nvidia.com>
In-Reply-To: <20240306202150.79577-1-dsosnowski@nvidia.com>
References: <20240306202150.79577-1-dsosnowski@nvidia.com>

There is an existing limitation in the mlx5 PMD that all configured flow
queues must have the same size. Even though this condition is checked,
some allocations were done before the check. This led to a segmentation
fault during rollback on error in the rte_flow_configure()
implementation.

This patch fixes that by reorganizing validation, so that configuration
options are validated before any allocations are done, and the necessary
NULL checks are added to the error rollback path.

Bugzilla ID: 1199
Fixes: b401400db24e ("net/mlx5: add port flow configuration")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski
Acked-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_flow_hw.c | 62 +++++++++++++++++++++++----------
 1 file changed, 43 insertions(+), 19 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 17ab3a98fe..407a843578 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -10253,6 +10253,38 @@ mlx5_hwq_ring_create(uint16_t port_id, uint32_t queue, uint32_t size, const char
                                RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
 }
 
+static int
+flow_hw_validate_attributes(const struct rte_flow_port_attr *port_attr,
+                            uint16_t nb_queue,
+                            const struct rte_flow_queue_attr *queue_attr[],
+                            struct rte_flow_error *error)
+{
+        uint32_t size;
+        unsigned int i;
+
+        if (port_attr == NULL)
+                return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+                                          "Port attributes must be non-NULL");
+
+        if (nb_queue == 0)
+                return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+                                          "At least one flow queue is required");
+
+        if (queue_attr == NULL)
+                return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+                                          "Queue attributes must be non-NULL");
+
+        size = queue_attr[0]->size;
+        for (i = 1; i < nb_queue; ++i) {
+                if (queue_attr[i]->size != size)
+                        return rte_flow_error_set(error, EINVAL,
+                                                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                                  NULL,
+                                                  "All flow queues must have the same size");
+        }
+
+        return 0;
+}
+
 /**
  * Configure port HWS resources.
  *
@@ -10304,10 +10336,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
         int ret = 0;
         uint32_t action_flags;
 
-        if (!port_attr || !nb_queue || !queue_attr) {
-                rte_errno = EINVAL;
-                goto err;
-        }
+        if (flow_hw_validate_attributes(port_attr, nb_queue, queue_attr, error))
+                return -rte_errno;
         /*
          * Calling rte_flow_configure() again is allowed if and only if
          * provided configuration matches the initially provided one.
@@ -10354,14 +10384,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
         /* Allocate the queue job descriptor LIFO. */
         mem_size = sizeof(priv->hw_q[0]) * nb_q_updated;
         for (i = 0; i < nb_q_updated; i++) {
-                /*
-                 * Check if the queues' size are all the same as the
-                 * limitation from HWS layer.
-                 */
-                if (_queue_attr[i]->size != _queue_attr[0]->size) {
-                        rte_errno = EINVAL;
-                        goto err;
-                }
                 mem_size += (sizeof(struct mlx5_hw_q_job *) +
                              sizeof(struct mlx5_hw_q_job)) * _queue_attr[i]->size;
         }
@@ -10643,14 +10665,16 @@ flow_hw_configure(struct rte_eth_dev *dev,
                 __atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED);
                 priv->shared_host = NULL;
         }
-        for (i = 0; i < nb_q_updated; i++) {
-                rte_ring_free(priv->hw_q[i].indir_iq);
-                rte_ring_free(priv->hw_q[i].indir_cq);
-                rte_ring_free(priv->hw_q[i].flow_transfer_pending);
-                rte_ring_free(priv->hw_q[i].flow_transfer_completed);
+        if (priv->hw_q) {
+                for (i = 0; i < nb_q_updated; i++) {
+                        rte_ring_free(priv->hw_q[i].indir_iq);
+                        rte_ring_free(priv->hw_q[i].indir_cq);
+                        rte_ring_free(priv->hw_q[i].flow_transfer_pending);
+                        rte_ring_free(priv->hw_q[i].flow_transfer_completed);
+                }
+                mlx5_free(priv->hw_q);
+                priv->hw_q = NULL;
         }
-        mlx5_free(priv->hw_q);
-        priv->hw_q = NULL;
         if (priv->acts_ipool) {
                 mlx5_ipool_destroy(priv->acts_ipool);
                 priv->acts_ipool = NULL;
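The new flow_hw_validate_attributes() applies a validate-before-allocate
rule: every check that depends only on the caller's inputs runs before the
first side effect, so the error path never has to reason about
half-initialized state. The sketch below is an editorial illustration of
the same rule in plain C (not mlx5 code; all names are hypothetical):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

struct queue_attr { uint32_t size; };

static int validate(const struct queue_attr *attrs[], uint16_t n)
{
        uint16_t i;

        if (attrs == NULL || n == 0)
                return -EINVAL;
        for (i = 1; i < n; i++)
                if (attrs[i]->size != attrs[0]->size)
                        return -EINVAL; /* all queues must be equally sized */
        return 0;
}

static int configure(const struct queue_attr *attrs[], uint16_t n)
{
        void *queues;

        /* No side effects until all inputs are known to be good. */
        if (validate(attrs, n))
                return -EINVAL;
        queues = calloc(n, sizeof(void *));
        if (queues == NULL)
                return -ENOMEM;
        /* ... further setup; rollback only has to undo real allocations. */
        free(queues);
        return 0;
}

int main(void)
{
        struct queue_attr q0 = { .size = 64 }, q1 = { .size = 32 };
        const struct queue_attr *attrs[] = { &q0, &q1 };

        /* Mismatched sizes are rejected before anything is allocated. */
        return configure(attrs, 2) == -EINVAL ? 0 : 1;
}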