From patchwork Mon Feb 13 13:37:36 2023
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 123798
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko
Cc: Erez Shitrit
Subject: [PATCH v4 1/5] net/mlx5/hws: free FT from RTC ID before set the new value
Date: Mon, 13 Feb 2023 15:37:36 +0200
Message-ID: <20230213133740.27005-2-viacheslavo@nvidia.com>
In-Reply-To: <20230213133740.27005-1-viacheslavo@nvidia.com>
References: <20230206095229.23027-1-viacheslavo@nvidia.com>
 <20230213133740.27005-1-viacheslavo@nvidia.com>
List-Id: DPDK patches and discussions
From: Erez Shitrit

While a matcher is being connected or disconnected in the shared GVMI
flow, we set the first FT in the table to point to the first matcher's
RTC. The FW increases the refcount on that RTC regardless of whether it
is the same RTC that was set before, so when we later try to release the
RTC we get the following syndrome:

0xaa0093 - destroy_rtc_object: rtc in use or doesn't exist.

To resolve that, clear the currently pointed RTC from the FT first, and
only then set the FT to the new RTC value.

Signed-off-by: Erez Shitrit
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/hws/mlx5dr_matcher.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index 5508cfe230..6af493d87a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -334,6 +334,24 @@ static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher)
 		return ret;
 	}
 
+	if (!next) {
+		/* ft no longer points to any RTC, drop refcount */
+		ret = mlx5dr_matcher_free_rtc_pointing(tbl->ctx,
+						       tbl->fw_ft_type,
+						       tbl->type,
+						       prev_ft);
+		if (ret) {
+			DR_LOG(ERR, "Failed to reset last RTC refcount");
+			return ret;
+		}
+	}
+
+	ret = mlx5dr_matcher_shared_update_local_ft(tbl);
+	if (ret) {
+		DR_LOG(ERR, "Failed to update local_ft in shared table");
+		return ret;
+	}
+
 	return 0;
 }
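The ordering above is easier to see in code. The following sketch is only
illustrative and is not part of the patch: the two helpers come from the
diff, while the wrapper function and the exact type of prev_ft are assumed.

/*
 * Illustrative sketch, assuming the mlx5dr internals shown in the diff:
 * drop the FW reference held through the current FT -> RTC pointer before
 * the FT is pointed at a new RTC, avoiding the 0xaa0093 "rtc in use"
 * syndrome on destroy.
 */
static int
repoint_shared_table_ft(struct mlx5dr_table *tbl, struct mlx5dr_devx_obj *prev_ft)
{
	int ret;

	/* 1. Clear the current FT -> RTC pointing, releasing the FW refcount. */
	ret = mlx5dr_matcher_free_rtc_pointing(tbl->ctx, tbl->fw_ft_type,
					       tbl->type, prev_ft);
	if (ret)
		return ret;
	/* 2. Only now update the local FT to point at the new matcher's RTC. */
	return mlx5dr_matcher_shared_update_local_ft(tbl);
}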
From patchwork Mon Feb 13 13:37:37 2023
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 123800
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko
Cc: Erez Shitrit, Dariusz Sosnowski
Subject: [PATCH v4 2/5] net/mlx5/hws: fix disconnecting matcher
Date: Mon, 13 Feb 2023 15:37:37 +0200
Message-ID: <20230213133740.27005-3-viacheslavo@nvidia.com>
In-Reply-To: <20230213133740.27005-1-viacheslavo@nvidia.com>
References: <20230206095229.23027-1-viacheslavo@nvidia.com>
 <20230213133740.27005-1-viacheslavo@nvidia.com>
List-Id: DPDK patches and discussions

From: Erez Shitrit

This patch fixes the matcher disconnection handling by removing the RTC
references from the flow table if the currently removed matcher was the
last one for the given table. As a result, the RTC in this matcher can be
correctly freed, since there are no dangling references to it.
Fixes: c467608215b2 ("net/mlx5/hws: add matcher object")
Cc: stable@dpdk.org

Signed-off-by: Erez Shitrit
Signed-off-by: Dariusz Sosnowski
Reviewed-by: Alex Vesker
Acked-by: Matan Azrad
---
 drivers/net/mlx5/hws/mlx5dr_matcher.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index 6af493d87a..1fe7ec1bc3 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -346,12 +346,6 @@ static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher)
 		}
 	}
 
-	ret = mlx5dr_matcher_shared_update_local_ft(tbl);
-	if (ret) {
-		DR_LOG(ERR, "Failed to update local_ft in shared table");
-		return ret;
-	}
-
 	return 0;
 }
From patchwork Mon Feb 13 13:37:38 2023
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 123799
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko
Subject: [PATCH v4 3/5] common/mlx5: add cross port object sharing capability
Date: Mon, 13 Feb 2023 15:37:38 +0200
Message-ID: <20230213133740.27005-4-viacheslavo@nvidia.com>
In-Reply-To: <20230213133740.27005-1-viacheslavo@nvidia.com>
References: <20230206095229.23027-1-viacheslavo@nvidia.com>
 <20230213133740.27005-1-viacheslavo@nvidia.com>
List-Id: DPDK patches and discussions
Add query of the port capabilities needed to share steering objects
between multiple ports of the same physical NIC.

Signed-off-by: Viacheslav Ovsiienko
Acked-by: Ori Kam
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 13 +++++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h |  1 +
 2 files changed, 14 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index e3a4927d0f..17128035ec 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1047,6 +1047,19 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 		attr->flow_counter_bulk_log_granularity =
 			MLX5_GET(cmd_hca_cap_2, hcattr,
 				 flow_counter_bulk_log_granularity);
+		rc = MLX5_GET(cmd_hca_cap_2, hcattr,
+			      cross_vhca_object_to_object_supported);
+		attr->cross_vhca =
+			(rc & MLX5_CROSS_VHCA_OBJ_TO_OBJ_TYPE_STC_TO_TIR) &&
+			(rc & MLX5_CROSS_VHCA_OBJ_TO_OBJ_TYPE_STC_TO_FT) &&
+			(rc & MLX5_CROSS_VHCA_OBJ_TO_OBJ_TYPE_FT_TO_FT) &&
+			(rc & MLX5_CROSS_VHCA_OBJ_TO_OBJ_TYPE_FT_TO_RTC);
+		rc = MLX5_GET(cmd_hca_cap_2, hcattr,
+			      allowed_object_for_other_vhca_access);
+		attr->cross_vhca = attr->cross_vhca &&
+			(rc & MLX5_CROSS_VHCA_ALLOWED_OBJS_TIR) &&
+			(rc & MLX5_CROSS_VHCA_ALLOWED_OBJS_FT) &&
+			(rc & MLX5_CROSS_VHCA_ALLOWED_OBJS_RTC);
 	}
 	if (attr->log_min_stride_wqe_sz == 0)
 		attr->log_min_stride_wqe_sz = MLX5_MPRQ_LOG_MIN_STRIDE_WQE_SIZE;

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index c94b9eac06..b65ba569bc 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -288,6 +288,7 @@ struct mlx5_hca_attr {
 	uint32_t alloc_flow_counter_pd:1;
 	uint32_t flow_counter_access_aso:1;
 	uint32_t flow_access_aso_opc_mod:8;
+	uint32_t cross_vhca:1;
 };
 
 /* LAG Context. */
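A minimal sketch of how the new capability bit could be consumed by a
caller (illustrative only: the helper and its call site are assumptions,
only the cross_vhca field of struct mlx5_hca_attr comes from this patch).

/*
 * Illustrative helper, not part of the patch: gate the cross-port
 * (cross vHCA) sharing path on the capability bit filled in by
 * mlx5_devx_cmd_query_hca_attr().
 */
static int
mlx5_check_cross_vhca_sharing(const struct mlx5_hca_attr *hca_attr)
{
	if (!hca_attr->cross_vhca) {
		/* Device cannot share STC/FT/RTC objects across vHCAs. */
		rte_errno = ENOTSUP;
		return -rte_errno;
	}
	return 0;
}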
From patchwork Mon Feb 13 13:37:39 2023
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 123801
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko
Subject: [PATCH v4 4/5] net/mlx5: add cross port shared mode for HW steering
Date: Mon, 13 Feb 2023 15:37:39 +0200
Message-ID: <20230213133740.27005-5-viacheslavo@nvidia.com>
In-Reply-To: <20230213133740.27005-1-viacheslavo@nvidia.com>
References: <20230206095229.23027-1-viacheslavo@nvidia.com>
 <20230213133740.27005-1-viacheslavo@nvidia.com>
List-Id: DPDK patches and discussions
TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.161]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: DS1PEPF0000E647.namprd02.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4138 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add host port option for sharing steering objects between multiple ports of the same physical NIC. Signed-off-by: Viacheslav Ovsiienko Acked-by: Ori Kam --- drivers/net/mlx5/mlx5.c | 6 +++ drivers/net/mlx5/mlx5.h | 2 + drivers/net/mlx5/mlx5_flow_hw.c | 78 +++++++++++++++++++++++++++++++-- drivers/net/mlx5/mlx5_hws_cnt.c | 12 +++++ 4 files changed, 94 insertions(+), 4 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index b8643cebdd..2eca2cceef 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -2013,6 +2013,12 @@ mlx5_dev_close(struct rte_eth_dev *dev) } if (!priv->sh) return 0; + if (priv->shared_refcnt) { + DRV_LOG(ERR, "port %u is shared host in use (%u)", + dev->data->port_id, priv->shared_refcnt); + rte_errno = EBUSY; + return -EBUSY; + } DRV_LOG(DEBUG, "port %u closing device \"%s\"", dev->data->port_id, ((priv->sh->cdev->ctx != NULL) ? diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 16b33e1548..525bdd47f7 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1780,6 +1780,8 @@ struct mlx5_priv { struct mlx5_flow_hw_ctrl_rx *hw_ctrl_rx; /**< HW steering templates used to create control flow rules. */ #endif + struct rte_eth_dev *shared_host; /* Host device for HW steering. */ + uint16_t shared_refcnt; /* HW steering host reference counter. */ }; #define PORT_ID(priv) ((priv)->dev_data->port_id) diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index aacde224f2..3b9789aa53 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -5,6 +5,8 @@ #include #include + +#include "mlx5.h" #include "mlx5_defs.h" #include "mlx5_flow.h" #include "mlx5_rx.h" @@ -6303,6 +6305,12 @@ flow_hw_ct_pool_create(struct rte_eth_dev *dev, int reg_id; uint32_t flags; + if (port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) { + DRV_LOG(ERR, "Connection tracking is not supported " + "in cross vHCA sharing mode"); + rte_errno = ENOTSUP; + return NULL; + } pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool), 0, SOCKET_ID_ANY); if (!pool) { rte_errno = ENOMEM; @@ -6787,6 +6795,7 @@ flow_hw_configure(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_priv *host_priv = NULL; struct mlx5dr_context *dr_ctx = NULL; struct mlx5dr_context_attr dr_ctx_attr = {0}; struct mlx5_hw_q *hw_q; @@ -6801,7 +6810,8 @@ flow_hw_configure(struct rte_eth_dev *dev, .free = mlx5_free, .type = "mlx5_hw_action_construct_data", }; - /* Adds one queue to be used by PMD. + /* + * Adds one queue to be used by PMD. * The last queue will be used by the PMD. */ uint16_t nb_q_updated = 0; @@ -6920,6 +6930,57 @@ flow_hw_configure(struct rte_eth_dev *dev, dr_ctx_attr.queues = nb_q_updated; /* Queue size should all be the same. Take the first one. 
*/ dr_ctx_attr.queue_size = _queue_attr[0]->size; + if (port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) { + struct rte_eth_dev *host_dev = NULL; + uint16_t port_id; + + MLX5_ASSERT(rte_eth_dev_is_valid_port(port_attr->host_port_id)); + if (is_proxy) { + DRV_LOG(ERR, "cross vHCA shared mode not supported " + " for E-Switch confgiurations"); + rte_errno = ENOTSUP; + goto err; + } + MLX5_ETH_FOREACH_DEV(port_id, dev->device) { + if (port_id == port_attr->host_port_id) { + host_dev = &rte_eth_devices[port_id]; + break; + } + } + if (!host_dev || host_dev == dev || + !host_dev->data || !host_dev->data->dev_private) { + DRV_LOG(ERR, "Invalid cross vHCA host port %u", + port_attr->host_port_id); + rte_errno = EINVAL; + goto err; + } + host_priv = host_dev->data->dev_private; + if (host_priv->sh->cdev->ctx == priv->sh->cdev->ctx) { + DRV_LOG(ERR, "Sibling ports %u and %u do not " + "require cross vHCA sharing mode", + dev->data->port_id, port_attr->host_port_id); + rte_errno = EINVAL; + goto err; + } + if (host_priv->shared_host) { + DRV_LOG(ERR, "Host port %u is not the sharing base", + port_attr->host_port_id); + rte_errno = EINVAL; + goto err; + } + if (port_attr->nb_counters || + port_attr->nb_aging_objects || + port_attr->nb_meters || + port_attr->nb_conn_tracks) { + DRV_LOG(ERR, + "Object numbers on guest port must be zeros"); + rte_errno = EINVAL; + goto err; + } + dr_ctx_attr.shared_ibv_ctx = host_priv->sh->cdev->ctx; + priv->shared_host = host_dev; + __atomic_fetch_add(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED); + } dr_ctx = mlx5dr_context_open(priv->sh->cdev->ctx, &dr_ctx_attr); /* rte_errno has been updated by HWS layer. */ if (!dr_ctx) @@ -6935,7 +6996,7 @@ flow_hw_configure(struct rte_eth_dev *dev, goto err; } /* Initialize meter library*/ - if (port_attr->nb_meters) + if (port_attr->nb_meters || (host_priv && host_priv->hws_mpool)) if (mlx5_flow_meter_init(dev, port_attr->nb_meters, 1, 1, nb_q_updated)) goto err; /* Add global actions. */ @@ -6972,7 +7033,7 @@ flow_hw_configure(struct rte_eth_dev *dev, goto err; } } - if (port_attr->nb_conn_tracks) { + if (port_attr->nb_conn_tracks || (host_priv && host_priv->hws_ctpool)) { mem_size = sizeof(struct mlx5_aso_sq) * nb_q_updated + sizeof(*priv->ct_mng); priv->ct_mng = mlx5_malloc(MLX5_MEM_ZERO, mem_size, @@ -6986,7 +7047,7 @@ flow_hw_configure(struct rte_eth_dev *dev, goto err; priv->sh->ct_aso_en = 1; } - if (port_attr->nb_counters) { + if (port_attr->nb_counters || (host_priv && host_priv->hws_cpool)) { priv->hws_cpool = mlx5_hws_cnt_pool_create(dev, port_attr, nb_queue); if (priv->hws_cpool == NULL) @@ -7055,6 +7116,10 @@ flow_hw_configure(struct rte_eth_dev *dev, } if (_queue_attr) mlx5_free(_queue_attr); + if (priv->shared_host) { + __atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED); + priv->shared_host = NULL; + } /* Do not overwrite the internal errno information. 
 */
 	if (ret)
 		return ret;
@@ -7133,6 +7198,11 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
 	claim_zero(mlx5dr_context_close(priv->dr_ctx));
+	if (priv->shared_host) {
+		struct mlx5_priv *host_priv = priv->shared_host->data->dev_private;
+		__atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED);
+		priv->shared_host = NULL;
+	}
 	priv->dr_ctx = NULL;
 	priv->nb_queue = 0;
 }
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c
index 05cc954903..797844439f 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.c
+++ b/drivers/net/mlx5/mlx5_hws_cnt.c
@@ -619,6 +619,12 @@ mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
 	int ret = 0;
 	size_t sz;
 
+	if (pattr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) {
+		DRV_LOG(ERR, "Counters are not supported "
+			     "in cross vHCA sharing mode");
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
 	/* init cnt service if not. */
 	if (priv->sh->cnt_svc == NULL) {
 		ret = mlx5_hws_cnt_svc_init(priv->sh);
@@ -1190,6 +1196,12 @@ mlx5_hws_age_pool_init(struct rte_eth_dev *dev,
 
 	strict_queue = !!(attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE);
 	MLX5_ASSERT(priv->hws_cpool);
+	if (attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) {
+		DRV_LOG(ERR, "Aging is not supported "
+			     "in cross vHCA sharing mode");
+		rte_errno = ENOTSUP;
+		return -ENOTSUP;
+	}
 	nb_alloc_cnts = mlx5_hws_cnt_pool_get_size(priv->hws_cpool);
 	if (strict_queue) {
 		rsize = mlx5_hws_aged_out_q_ring_size_get(nb_alloc_cnts,
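From the application point of view, the cross-port sharing added in this
patch is driven through the generic rte_flow configuration API. Below is a
minimal usage sketch, assuming port 0 acts as the host owning the shared
objects and port 1 is a guest reusing them; the port ids, queue sizes and
object numbers are arbitrary example values, not taken from the patch.

#include <rte_flow.h>

/* Sketch: configure a host port first with real object numbers, then a
 * guest port that shares the host's indirect objects. On the guest port
 * the counter/meter/CT/aging numbers must stay zero, as enforced by
 * flow_hw_configure() above.
 */
static int
configure_shared_ports(void)
{
	const struct rte_flow_queue_attr qattr = { .size = 64 };
	const struct rte_flow_queue_attr *qattrs[] = { &qattr };
	struct rte_flow_port_attr host_attr = {
		.nb_counters = 1 << 16,
	};
	struct rte_flow_port_attr guest_attr = {
		.host_port_id = 0,
		.flags = RTE_FLOW_PORT_FLAG_SHARE_INDIRECT,
	};
	struct rte_flow_error err;
	int ret;

	ret = rte_flow_configure(0, &host_attr, 1, qattrs, &err);
	if (ret)
		return ret;
	return rte_flow_configure(1, &guest_attr, 1, qattrs, &err);
}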
From patchwork Mon Feb 13 13:37:40 2023
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 123802
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko
Subject: [PATCH v4 5/5] net/mlx5: support counters in cross port shared mode
Date: Mon, 13 Feb 2023 15:37:40 +0200
Message-ID: <20230213133740.27005-6-viacheslavo@nvidia.com>
In-Reply-To: <20230213133740.27005-1-viacheslavo@nvidia.com>
References: <20230206095229.23027-1-viacheslavo@nvidia.com>
 <20230213133740.27005-1-viacheslavo@nvidia.com>
List-Id: DPDK patches and discussions
In cross vHCA sharing mode the host counter pool should be used in the
counter related routines. The local port pool is used only to store the
dedicated DR action handles; the per-queue counter caches and query data
are ignored and are not allocated on the local pool.

Signed-off-by: Viacheslav Ovsiienko
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow_hw.c |  12 ++-
 drivers/net/mlx5/mlx5_hws_cnt.c | 163 ++++++++++++++++----------------
 drivers/net/mlx5/mlx5_hws_cnt.h | 109 +++++++++++----------
 3 files changed, 150 insertions(+), 134 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 3b9789aa53..8ff72871f3 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2311,8 +2311,10 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			break;
 		/* Fall-through. */
 		case RTE_FLOW_ACTION_TYPE_COUNT:
-			ret = mlx5_hws_cnt_pool_get(priv->hws_cpool, &queue,
-						    &cnt_id, age_idx);
+			ret = mlx5_hws_cnt_pool_get(priv->hws_cpool,
+						    (priv->shared_refcnt ||
+						     priv->hws_cpool->cfg.host_cpool) ?
+ NULL : &queue, &cnt_id, age_idx); if (ret != 0) return ret; ret = mlx5_hws_cnt_pool_get_action_offset @@ -7998,6 +8000,7 @@ static int flow_hw_query_counter(const struct rte_eth_dev *dev, uint32_t counter, void *data, struct rte_flow_error *error) { + struct mlx5_hws_cnt_pool *hpool; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_hws_cnt *cnt; struct rte_flow_query_count *qc = data; @@ -8008,8 +8011,9 @@ flow_hw_query_counter(const struct rte_eth_dev *dev, uint32_t counter, return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "counter are not available"); - iidx = mlx5_hws_cnt_iidx(priv->hws_cpool, counter); - cnt = &priv->hws_cpool->pool[iidx]; + hpool = mlx5_hws_cnt_host_pool(priv->hws_cpool); + iidx = mlx5_hws_cnt_iidx(hpool, counter); + cnt = &hpool->pool[iidx]; __hws_cnt_query_raw(priv->hws_cpool, counter, &pkts, &bytes); qc->hits_set = 1; qc->bytes_set = 1; diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c index 797844439f..d6a017a757 100644 --- a/drivers/net/mlx5/mlx5_hws_cnt.c +++ b/drivers/net/mlx5/mlx5_hws_cnt.c @@ -24,12 +24,8 @@ static void __hws_cnt_id_load(struct mlx5_hws_cnt_pool *cpool) { - uint32_t preload; - uint32_t q_num = cpool->cache->q_num; uint32_t cnt_num = mlx5_hws_cnt_pool_get_size(cpool); - cnt_id_t cnt_id; - uint32_t qidx, iidx = 0; - struct rte_ring *qcache = NULL; + uint32_t iidx; /* * Counter ID order is important for tracking the max number of in used @@ -39,18 +35,9 @@ __hws_cnt_id_load(struct mlx5_hws_cnt_pool *cpool) * and then the global free list. * In the end, user fetch the counter from minimal to the maximum. */ - preload = RTE_MIN(cpool->cache->preload_sz, cnt_num / q_num); - for (qidx = 0; qidx < q_num; qidx++) { - for (; iidx < preload * (qidx + 1); iidx++) { - cnt_id = mlx5_hws_cnt_id_gen(cpool, iidx); - qcache = cpool->cache->qcache[qidx]; - if (qcache) - rte_ring_enqueue_elem(qcache, &cnt_id, - sizeof(cnt_id)); - } - } - for (; iidx < cnt_num; iidx++) { - cnt_id = mlx5_hws_cnt_id_gen(cpool, iidx); + for (iidx = 0; iidx < cnt_num; iidx++) { + cnt_id_t cnt_id = mlx5_hws_cnt_id_gen(cpool, iidx); + rte_ring_enqueue_elem(cpool->free_list, &cnt_id, sizeof(cnt_id)); } @@ -334,7 +321,26 @@ mlx5_hws_cnt_svc(void *opaque) return NULL; } -struct mlx5_hws_cnt_pool * +static void +mlx5_hws_cnt_pool_deinit(struct mlx5_hws_cnt_pool * const cntp) +{ + uint32_t qidx = 0; + if (cntp == NULL) + return; + rte_ring_free(cntp->free_list); + rte_ring_free(cntp->wait_reset_list); + rte_ring_free(cntp->reuse_list); + if (cntp->cache) { + for (qidx = 0; qidx < cntp->cache->q_num; qidx++) + rte_ring_free(cntp->cache->qcache[qidx]); + } + mlx5_free(cntp->cache); + mlx5_free(cntp->raw_mng); + mlx5_free(cntp->pool); + mlx5_free(cntp); +} + +static struct mlx5_hws_cnt_pool * mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh, const struct mlx5_hws_cnt_pool_cfg *pcfg, const struct mlx5_hws_cache_param *ccfg) @@ -352,6 +358,8 @@ mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh, return NULL; cntp->cfg = *pcfg; + if (cntp->cfg.host_cpool) + return cntp; cntp->cache = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, sizeof(*cntp->cache) + sizeof(((struct mlx5_hws_cnt_pool_caches *)0)->qcache[0]) @@ -387,8 +395,9 @@ mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh, goto error; snprintf(mz_name, sizeof(mz_name), "%s_F_RING", pcfg->name); cntp->free_list = rte_ring_create_elem(mz_name, sizeof(cnt_id_t), - (uint32_t)cnt_num, SOCKET_ID_ANY, - RING_F_SP_ENQ | RING_F_MC_HTS_DEQ | RING_F_EXACT_SZ); + 
(uint32_t)cnt_num, SOCKET_ID_ANY, + RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ | + RING_F_EXACT_SZ); if (cntp->free_list == NULL) { DRV_LOG(ERR, "failed to create free list ring"); goto error; @@ -404,7 +413,7 @@ mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh, snprintf(mz_name, sizeof(mz_name), "%s_U_RING", pcfg->name); cntp->reuse_list = rte_ring_create_elem(mz_name, sizeof(cnt_id_t), (uint32_t)cnt_num, SOCKET_ID_ANY, - RING_F_SP_ENQ | RING_F_MC_HTS_DEQ | RING_F_EXACT_SZ); + RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ | RING_F_EXACT_SZ); if (cntp->reuse_list == NULL) { DRV_LOG(ERR, "failed to create reuse list ring"); goto error; @@ -427,25 +436,6 @@ mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh, return NULL; } -void -mlx5_hws_cnt_pool_deinit(struct mlx5_hws_cnt_pool * const cntp) -{ - uint32_t qidx = 0; - if (cntp == NULL) - return; - rte_ring_free(cntp->free_list); - rte_ring_free(cntp->wait_reset_list); - rte_ring_free(cntp->reuse_list); - if (cntp->cache) { - for (qidx = 0; qidx < cntp->cache->q_num; qidx++) - rte_ring_free(cntp->cache->qcache[qidx]); - } - mlx5_free(cntp->cache); - mlx5_free(cntp->raw_mng); - mlx5_free(cntp->pool); - mlx5_free(cntp); -} - int mlx5_hws_cnt_service_thread_create(struct mlx5_dev_ctx_shared *sh) { @@ -483,7 +473,7 @@ mlx5_hws_cnt_service_thread_destroy(struct mlx5_dev_ctx_shared *sh) sh->cnt_svc->service_thread = 0; } -int +static int mlx5_hws_cnt_pool_dcs_alloc(struct mlx5_dev_ctx_shared *sh, struct mlx5_hws_cnt_pool *cpool) { @@ -495,6 +485,7 @@ mlx5_hws_cnt_pool_dcs_alloc(struct mlx5_dev_ctx_shared *sh, struct mlx5_devx_counter_attr attr = {0}; struct mlx5_devx_obj *dcs; + MLX5_ASSERT(cpool->cfg.host_cpool == NULL); if (hca_attr->flow_counter_bulk_log_max_alloc == 0) { DRV_LOG(ERR, "Fw doesn't support bulk log max alloc"); return -1; @@ -550,7 +541,7 @@ mlx5_hws_cnt_pool_dcs_alloc(struct mlx5_dev_ctx_shared *sh, return -1; } -void +static void mlx5_hws_cnt_pool_dcs_free(struct mlx5_dev_ctx_shared *sh, struct mlx5_hws_cnt_pool *cpool) { @@ -566,22 +557,39 @@ mlx5_hws_cnt_pool_dcs_free(struct mlx5_dev_ctx_shared *sh, } } -int +static void +mlx5_hws_cnt_pool_action_destroy(struct mlx5_hws_cnt_pool *cpool) +{ + uint32_t idx; + + for (idx = 0; idx < cpool->dcs_mng.batch_total; idx++) { + struct mlx5_hws_cnt_dcs *dcs = &cpool->dcs_mng.dcs[idx]; + + if (dcs->dr_action != NULL) { + mlx5dr_action_destroy(dcs->dr_action); + dcs->dr_action = NULL; + } + } +} + +static int mlx5_hws_cnt_pool_action_create(struct mlx5_priv *priv, struct mlx5_hws_cnt_pool *cpool) { + struct mlx5_hws_cnt_pool *hpool = mlx5_hws_cnt_host_pool(cpool); uint32_t idx; int ret = 0; - struct mlx5_hws_cnt_dcs *dcs; uint32_t flags; flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX; if (priv->sh->config.dv_esw_en && priv->master) flags |= MLX5DR_ACTION_FLAG_HWS_FDB; - for (idx = 0; idx < cpool->dcs_mng.batch_total; idx++) { - dcs = &cpool->dcs_mng.dcs[idx]; + for (idx = 0; idx < hpool->dcs_mng.batch_total; idx++) { + struct mlx5_hws_cnt_dcs *hdcs = &hpool->dcs_mng.dcs[idx]; + struct mlx5_hws_cnt_dcs *dcs = &cpool->dcs_mng.dcs[idx]; + dcs->dr_action = mlx5dr_action_create_counter(priv->dr_ctx, - (struct mlx5dr_devx_obj *)dcs->obj, + (struct mlx5dr_devx_obj *)hdcs->obj, flags); if (dcs->dr_action == NULL) { mlx5_hws_cnt_pool_action_destroy(cpool); @@ -592,21 +600,6 @@ mlx5_hws_cnt_pool_action_create(struct mlx5_priv *priv, return ret; } -void -mlx5_hws_cnt_pool_action_destroy(struct mlx5_hws_cnt_pool *cpool) -{ - uint32_t idx; - struct mlx5_hws_cnt_dcs *dcs; - - for (idx = 0; idx 
-                dcs = &cpool->dcs_mng.dcs[idx];
-                if (dcs->dr_action != NULL) {
-                        mlx5dr_action_destroy(dcs->dr_action);
-                        dcs->dr_action = NULL;
-                }
-        }
-}
-
 struct mlx5_hws_cnt_pool *
 mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
                 const struct rte_flow_port_attr *pattr, uint16_t nb_queue)
@@ -619,11 +612,28 @@ mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
         int ret = 0;
         size_t sz;
 
+        mp_name = mlx5_malloc(MLX5_MEM_ZERO, RTE_MEMZONE_NAMESIZE, 0,
+                        SOCKET_ID_ANY);
+        if (mp_name == NULL)
+                goto error;
+        snprintf(mp_name, RTE_MEMZONE_NAMESIZE, "MLX5_HWS_CNT_POOL_%u",
+                 dev->data->port_id);
+        pcfg.name = mp_name;
+        pcfg.request_num = pattr->nb_counters;
+        pcfg.alloc_factor = HWS_CNT_ALLOC_FACTOR_DEFAULT;
         if (pattr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) {
-                DRV_LOG(ERR, "Counters are not supported "
-                        "in cross vHCA sharing mode");
-                rte_errno = ENOTSUP;
-                return NULL;
+                struct mlx5_priv *host_priv =
+                                priv->shared_host->data->dev_private;
+                struct mlx5_hws_cnt_pool *chost = host_priv->hws_cpool;
+
+                pcfg.host_cpool = chost;
+                cpool = mlx5_hws_cnt_pool_init(priv->sh, &pcfg, &cparam);
+                if (cpool == NULL)
+                        goto error;
+                ret = mlx5_hws_cnt_pool_action_create(priv, cpool);
+                if (ret != 0)
+                        goto error;
+                return cpool;
         }
         /* init cnt service if not. */
         if (priv->sh->cnt_svc == NULL) {
@@ -636,15 +646,6 @@ mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
         cparam.q_num = nb_queue;
         cparam.threshold = HWS_CNT_CACHE_THRESHOLD_DEFAULT;
         cparam.size = HWS_CNT_CACHE_SZ_DEFAULT;
-        pcfg.alloc_factor = HWS_CNT_ALLOC_FACTOR_DEFAULT;
-        mp_name = mlx5_malloc(MLX5_MEM_ZERO, RTE_MEMZONE_NAMESIZE, 0,
-                        SOCKET_ID_ANY);
-        if (mp_name == NULL)
-                goto error;
-        snprintf(mp_name, RTE_MEMZONE_NAMESIZE, "MLX5_HWS_CNT_POOL_%u",
-                 dev->data->port_id);
-        pcfg.name = mp_name;
-        pcfg.request_num = pattr->nb_counters;
         cpool = mlx5_hws_cnt_pool_init(priv->sh, &pcfg, &cparam);
         if (cpool == NULL)
                 goto error;
@@ -679,11 +680,15 @@ mlx5_hws_cnt_pool_destroy(struct mlx5_dev_ctx_shared *sh,
 {
         if (cpool == NULL)
                 return;
-        if (--sh->cnt_svc->refcnt == 0)
-                mlx5_hws_cnt_svc_deinit(sh);
+        if (cpool->cfg.host_cpool == NULL) {
+                if (--sh->cnt_svc->refcnt == 0)
+                        mlx5_hws_cnt_svc_deinit(sh);
+        }
         mlx5_hws_cnt_pool_action_destroy(cpool);
-        mlx5_hws_cnt_pool_dcs_free(sh, cpool);
-        mlx5_hws_cnt_raw_data_free(sh, cpool->raw_mng);
+        if (cpool->cfg.host_cpool == NULL) {
+                mlx5_hws_cnt_pool_dcs_free(sh, cpool);
+                mlx5_hws_cnt_raw_data_free(sh, cpool->raw_mng);
+        }
         mlx5_free((void *)cpool->cfg.name);
         mlx5_hws_cnt_pool_deinit(cpool);
 }
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h
index 030dcead86..d35d083eeb 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.h
+++ b/drivers/net/mlx5/mlx5_hws_cnt.h
@@ -86,6 +86,7 @@ struct mlx5_hws_cnt_pool_cfg {
         char *name;
         uint32_t request_num;
         uint32_t alloc_factor;
+        struct mlx5_hws_cnt_pool *host_cpool;
 };
 
 struct mlx5_hws_cnt_pool_caches {
@@ -148,6 +149,22 @@ struct mlx5_hws_age_param {
         void *context; /* Flow AGE context. */
 } __rte_packed __rte_cache_aligned;
 
+
+/**
+ * Return the counter pool that actually holds the counters: the host
+ * pool in cross vHCA sharing mode, the pool itself otherwise.
+ *
+ * @param cpool
+ *   The counter pool of the current port.
+ * @return
+ *   The counter pool holding the DevX objects and raw counter data.
+ */
+static __rte_always_inline struct mlx5_hws_cnt_pool *
+mlx5_hws_cnt_host_pool(struct mlx5_hws_cnt_pool *cpool)
+{
+        return cpool->cfg.host_cpool ? cpool->cfg.host_cpool : cpool;
+}
+
 /**
  * Translate counter id into internal index (start from 0), which can be used
  * as index of raw/cnt pool.
@@ -160,11 +177,12 @@ struct mlx5_hws_age_param {
 static __rte_always_inline uint32_t
 mlx5_hws_cnt_iidx(struct mlx5_hws_cnt_pool *cpool, cnt_id_t cnt_id)
 {
+        struct mlx5_hws_cnt_pool *hpool = mlx5_hws_cnt_host_pool(cpool);
         uint8_t dcs_idx = cnt_id >> MLX5_HWS_CNT_DCS_IDX_OFFSET;
         uint32_t offset = cnt_id & MLX5_HWS_CNT_IDX_MASK;
 
         dcs_idx &= MLX5_HWS_CNT_DCS_IDX_MASK;
-        return (cpool->dcs_mng.dcs[dcs_idx].iidx + offset);
+        return (hpool->dcs_mng.dcs[dcs_idx].iidx + offset);
 }
 
 /**
@@ -191,7 +209,8 @@ mlx5_hws_cnt_id_valid(cnt_id_t cnt_id)
 static __rte_always_inline cnt_id_t
 mlx5_hws_cnt_id_gen(struct mlx5_hws_cnt_pool *cpool, uint32_t iidx)
 {
-        struct mlx5_hws_cnt_dcs_mng *dcs_mng = &cpool->dcs_mng;
+        struct mlx5_hws_cnt_pool *hpool = mlx5_hws_cnt_host_pool(cpool);
+        struct mlx5_hws_cnt_dcs_mng *dcs_mng = &hpool->dcs_mng;
         uint32_t idx;
         uint32_t offset;
         cnt_id_t cnt_id;
@@ -212,7 +231,8 @@ static __rte_always_inline void
 __hws_cnt_query_raw(struct mlx5_hws_cnt_pool *cpool, cnt_id_t cnt_id,
                 uint64_t *raw_pkts, uint64_t *raw_bytes)
 {
-        struct mlx5_hws_cnt_raw_data_mng *raw_mng = cpool->raw_mng;
+        struct mlx5_hws_cnt_pool *hpool = mlx5_hws_cnt_host_pool(cpool);
+        struct mlx5_hws_cnt_raw_data_mng *raw_mng = hpool->raw_mng;
         struct flow_counter_stats s[2];
         uint8_t i = 0x1;
         size_t stat_sz = sizeof(s[0]);
@@ -393,22 +413,23 @@ mlx5_hws_cnt_pool_put(struct mlx5_hws_cnt_pool *cpool, uint32_t *queue,
                 cnt_id_t *cnt_id)
 {
         unsigned int ret = 0;
+        struct mlx5_hws_cnt_pool *hpool;
         struct rte_ring_zc_data zcdc = {0};
         struct rte_ring_zc_data zcdr = {0};
         struct rte_ring *qcache = NULL;
         unsigned int wb_num = 0; /* cache write-back number. */
         uint32_t iidx;
 
-        iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id);
-        MLX5_ASSERT(cpool->pool[iidx].in_used);
-        cpool->pool[iidx].in_used = false;
-        cpool->pool[iidx].query_gen_when_free =
-                __atomic_load_n(&cpool->query_gen, __ATOMIC_RELAXED);
-        if (likely(queue != NULL))
-                qcache = cpool->cache->qcache[*queue];
+        hpool = mlx5_hws_cnt_host_pool(cpool);
+        iidx = mlx5_hws_cnt_iidx(hpool, *cnt_id);
+        hpool->pool[iidx].in_used = false;
+        hpool->pool[iidx].query_gen_when_free =
+                __atomic_load_n(&hpool->query_gen, __ATOMIC_RELAXED);
+        if (likely(queue != NULL) && cpool->cfg.host_cpool == NULL)
+                qcache = hpool->cache->qcache[*queue];
         if (unlikely(qcache == NULL)) {
-                ret = rte_ring_enqueue_elem(cpool->wait_reset_list, cnt_id,
-                                sizeof(cnt_id_t));
+                ret = rte_ring_enqueue_elem(hpool->wait_reset_list, cnt_id,
+                                sizeof(cnt_id_t));
                 MLX5_ASSERT(ret == 0);
                 return;
         }
@@ -465,9 +486,10 @@ mlx5_hws_cnt_pool_get(struct mlx5_hws_cnt_pool *cpool, uint32_t *queue,
         uint32_t iidx, query_gen = 0;
         cnt_id_t tmp_cid = 0;
 
-        if (likely(queue != NULL))
+        if (likely(queue != NULL && cpool->cfg.host_cpool == NULL))
                 qcache = cpool->cache->qcache[*queue];
         if (unlikely(qcache == NULL)) {
+                cpool = mlx5_hws_cnt_host_pool(cpool);
                 ret = rte_ring_dequeue_elem(cpool->reuse_list, &tmp_cid,
                                 sizeof(cnt_id_t));
                 if (unlikely(ret != 0)) {
@@ -534,7 +556,9 @@ mlx5_hws_cnt_pool_get(struct mlx5_hws_cnt_pool *cpool, uint32_t *queue,
 static __rte_always_inline unsigned int
 mlx5_hws_cnt_pool_get_size(struct mlx5_hws_cnt_pool *cpool)
 {
-        return rte_ring_get_capacity(cpool->free_list);
+        struct mlx5_hws_cnt_pool *hpool = mlx5_hws_cnt_host_pool(cpool);
+
+        return rte_ring_get_capacity(hpool->free_list);
 }
 
 static __rte_always_inline int
@@ -554,51 +578,56 @@ static __rte_always_inline int
 mlx5_hws_cnt_shared_get(struct mlx5_hws_cnt_pool *cpool, cnt_id_t *cnt_id,
                 uint32_t age_idx)
 {
-        int ret;
+        struct mlx5_hws_cnt_pool *hpool = mlx5_hws_cnt_host_pool(cpool);
         uint32_t iidx;
+        int ret;
 
-        ret = mlx5_hws_cnt_pool_get(cpool, NULL, cnt_id, age_idx);
+        ret = mlx5_hws_cnt_pool_get(hpool, NULL, cnt_id, age_idx);
         if (ret != 0)
                 return ret;
-        iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id);
-        cpool->pool[iidx].share = 1;
+        iidx = mlx5_hws_cnt_iidx(hpool, *cnt_id);
+        hpool->pool[iidx].share = 1;
         return 0;
 }
 
 static __rte_always_inline void
 mlx5_hws_cnt_shared_put(struct mlx5_hws_cnt_pool *cpool, cnt_id_t *cnt_id)
 {
-        uint32_t iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id);
+        struct mlx5_hws_cnt_pool *hpool = mlx5_hws_cnt_host_pool(cpool);
+        uint32_t iidx = mlx5_hws_cnt_iidx(hpool, *cnt_id);
 
-        cpool->pool[iidx].share = 0;
-        mlx5_hws_cnt_pool_put(cpool, NULL, cnt_id);
+        hpool->pool[iidx].share = 0;
+        mlx5_hws_cnt_pool_put(hpool, NULL, cnt_id);
 }
 
 static __rte_always_inline bool
 mlx5_hws_cnt_is_shared(struct mlx5_hws_cnt_pool *cpool, cnt_id_t cnt_id)
 {
-        uint32_t iidx = mlx5_hws_cnt_iidx(cpool, cnt_id);
+        struct mlx5_hws_cnt_pool *hpool = mlx5_hws_cnt_host_pool(cpool);
+        uint32_t iidx = mlx5_hws_cnt_iidx(hpool, cnt_id);
 
-        return cpool->pool[iidx].share ? true : false;
+        return hpool->pool[iidx].share ? true : false;
 }
 
 static __rte_always_inline void
 mlx5_hws_cnt_age_set(struct mlx5_hws_cnt_pool *cpool, cnt_id_t cnt_id,
                 uint32_t age_idx)
 {
-        uint32_t iidx = mlx5_hws_cnt_iidx(cpool, cnt_id);
+        struct mlx5_hws_cnt_pool *hpool = mlx5_hws_cnt_host_pool(cpool);
+        uint32_t iidx = mlx5_hws_cnt_iidx(hpool, cnt_id);
 
-        MLX5_ASSERT(cpool->pool[iidx].share);
-        cpool->pool[iidx].age_idx = age_idx;
+        MLX5_ASSERT(hpool->pool[iidx].share);
+        hpool->pool[iidx].age_idx = age_idx;
 }
 
 static __rte_always_inline uint32_t
 mlx5_hws_cnt_age_get(struct mlx5_hws_cnt_pool *cpool, cnt_id_t cnt_id)
 {
-        uint32_t iidx = mlx5_hws_cnt_iidx(cpool, cnt_id);
+        struct mlx5_hws_cnt_pool *hpool = mlx5_hws_cnt_host_pool(cpool);
+        uint32_t iidx = mlx5_hws_cnt_iidx(hpool, cnt_id);
 
-        MLX5_ASSERT(cpool->pool[iidx].share);
-        return cpool->pool[iidx].age_idx;
+        MLX5_ASSERT(hpool->pool[iidx].share);
+        return hpool->pool[iidx].age_idx;
 }
 
 static __rte_always_inline cnt_id_t
@@ -645,34 +674,12 @@ mlx5_hws_age_is_indirect(uint32_t age_idx)
 }
 
 /* init HWS counter pool. */
-struct mlx5_hws_cnt_pool *
-mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh,
-                const struct mlx5_hws_cnt_pool_cfg *pcfg,
-                const struct mlx5_hws_cache_param *ccfg);
-
-void
-mlx5_hws_cnt_pool_deinit(struct mlx5_hws_cnt_pool *cntp);
-
 int
 mlx5_hws_cnt_service_thread_create(struct mlx5_dev_ctx_shared *sh);
 
 void
 mlx5_hws_cnt_service_thread_destroy(struct mlx5_dev_ctx_shared *sh);
 
-int
-mlx5_hws_cnt_pool_dcs_alloc(struct mlx5_dev_ctx_shared *sh,
-                struct mlx5_hws_cnt_pool *cpool);
-void
-mlx5_hws_cnt_pool_dcs_free(struct mlx5_dev_ctx_shared *sh,
-                struct mlx5_hws_cnt_pool *cpool);
-
-int
-mlx5_hws_cnt_pool_action_create(struct mlx5_priv *priv,
-                struct mlx5_hws_cnt_pool *cpool);
-
-void
-mlx5_hws_cnt_pool_action_destroy(struct mlx5_hws_cnt_pool *cpool);
-
 struct mlx5_hws_cnt_pool *
 mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
                 const struct rte_flow_port_attr *pattr, uint16_t nb_queue);